One of the great features to come out of Server 2012, and expanded upon here in 2012 R2, is NIC teaming. NIC teaming gives us the ability to combine multiple network adapters together. They're presented to the operating system as a single adapter, which gives us benefits such as performance and redundancy. Prior to Server 2012, if you wanted to combine multiple network interface cards, you needed specific network adapters from the same vendor plus third-party software to make it all happen. And the bottom line is: it just wasn't simple. With Server 2012, teaming is built into the operating system, native to Windows. It is also vendor- and hardware-independent, so it is extremely easy to work with, and it is supported in both physical and virtual machines. Another nice thing: it supports up to 32 network interface cards. NIC teaming is also known as LBFO, which stands for load balancing and failover.
Let's say we have a bunch of physical network adapters, all connected to one switch. We'll call these physical network adapters pNICs, i.e., physical NICs. Using NIC teaming, we will combine all of these into a single tNIC, a teaming NIC. So we've got pNICs, and then we've got tNICs. Later, when we get into Hyper-V, we will also have vNICs, virtual network interface cards. This single, logical tNIC is what the operating system will see. Before, each physical adapter had its own IP configuration; when we combine them into a team, that IP configuration moves up to the tNIC. So when traffic hits the tNIC, it gets distributed across the network interface cards inside the NIC team.
Now, this is where things get interesting. How the team distributes traffic going out and coming in depends on our load balancing algorithm, and how we configure the NIC team itself is known as a teaming mode. Let's go over the teaming modes.
This is going to be the first decision you need to make when forming your NIC
team. There are three teaming modes: switch-independent, and two switch-dependent modes, static and LACP (Link Aggregation Control Protocol). Now, switch-independent, as the name implies, means we do not need to
configure anything on the switch side. This will work with any switch because
all of that intelligence is handled by Windows Server, and specifically the NIC
teaming feature. Plus, we can have multiple switches involved. We can have our
cards attached to many network switches, which will give us even more
redundancy. With switch-independent mode, all of our outbound traffic is load balanced across the physical NICs in the team, and how that distribution happens depends on the load balancing algorithm we choose. But because the switches are completely unaware of the team, with all of that intelligence handled in Windows, our incoming traffic is not load balanced.
And this is where the switch-dependent modes come into play. Switch-dependent teaming modes require us to configure the switch so it is aware of our NIC team, and all of our cards must be attached to the same switch. With the static teaming mode, we configure the individual switch ports themselves and connect the cables to the correct ports. If we move the cables around, so that a port is not configured for the card plugged into it, we have broken our NIC team.
LACP, on the other hand (the Link Aggregation Control Protocol), is more dynamic. We configure the switch, rather than the individual ports, to make it aware of the cards involved in the team. You can move your cables around, and it is fine; the switch will still be completely aware of the team. And it can load balance incoming network traffic as
well. Now, we can control how that distribution occurs by choosing a load balancing algorithm. There are three of them: address hash, Hyper-V port, and a brand new one here in Server 2012 R2, which Microsoft recommends we always use, known as dynamic, which gives us the best of both worlds.
The address hash load balancing algorithm uses attributes of the network traffic, such as the IP address, the port, and the MAC address, to determine which network card a given flow should be sent out through.
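To make that idea concrete, here is a toy sketch in Python. This is not how Windows actually implements it; it only illustrates the principle that hashing a flow's attributes deterministically selects one NIC in the team, so a given flow always leaves through the same card.

```python
import hashlib

def pick_nic(src_ip, dst_ip, dst_port, team_size):
    """Toy address-hash: hash flow attributes, map onto a team member.

    Illustration only -- Windows' real hash is internal and may use
    IP addresses, TCP/UDP ports, or MAC addresses depending on the
    traffic and configuration.
    """
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    # Reduce the hash to an index into the team (0 .. team_size - 1).
    return digest[0] % team_size

# The same flow always maps to the same NIC in the team...
nic = pick_nic("10.0.0.5", "10.0.0.9", 443, team_size=4)
assert nic == pick_nic("10.0.0.5", "10.0.0.9", 443, team_size=4)
# ...while many different flows spread out across the team members.
```

Because the mapping is purely a function of the flow's attributes, no per-flow state has to be shared between the team members.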
The Hyper-V port load balancing algorithm ties a virtual machine to a specific network card in the team. This works out really well if you have a Hyper-V host with a lot of VMs on it, because the more VMs you have, the greater the chance that your load will be distributed across many of the cards in the team. If you only have a few VMs, this won't help you much, because those VMs will always be tied to the same network cards, and there will be idle network cards in the team.
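A toy sketch of that affinity, again in hypothetical Python purely to show the trade-off: each VM's switch port is pinned to one team member, so with fewer VMs than NICs, some NICs sit idle.

```python
def assign_vm_ports(vm_ids, team_size):
    """Toy Hyper-V port affinity: pin each VM to one NIC in the team.

    Hypothetical round-robin assignment -- the real mapping is
    internal to the Hyper-V virtual switch.
    """
    return {vm: i % team_size for i, vm in enumerate(sorted(vm_ids))}

# Eight VMs on a four-NIC team: every NIC carries traffic.
many = assign_vm_ports([f"vm{i}" for i in range(8)], team_size=4)
assert len(set(many.values())) == 4   # all four NICs are used

# Only two VMs on the same team: two NICs do all the work, two sit idle.
few = assign_vm_ports(["vm0", "vm1"], team_size=4)
assert len(set(few.values())) == 2    # only two NICs are used
```

The pinning also means any one VM can never exceed the bandwidth of a single physical NIC, which is the other cost of this algorithm.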
Again, dynamic is the brand new, and now the default, option here in Windows Server 2012 R2. It is what Microsoft recommends, because it combines the best of both worlds, using address hashing for outbound traffic and Hyper-V port for inbound. It also has additional intelligence built in to rebalance flows whenever there is a break in the traffic going through the NIC team.
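As a rough illustration of that rebalancing idea (hypothetical Python, not the real implementation): a flow keeps its NIC while it is streaming, but after a natural break in its traffic it can be re-homed onto whichever NIC is currently least loaded.

```python
class DynamicBalancer:
    """Toy model of the dynamic algorithm's rebalancing behavior.

    Sketch only: the real algorithm detects breaks in TCP streams
    internally; here the caller flags a pause explicitly.
    """

    def __init__(self, team_size):
        self.load = [0] * team_size   # bytes sent per NIC
        self.flows = {}               # flow id -> NIC index

    def send(self, flow, nbytes, paused=False):
        if flow not in self.flows or paused:
            # On a new flow, or after a break in the traffic,
            # pick the least-loaded NIC in the team.
            self.flows[flow] = self.load.index(min(self.load))
        nic = self.flows[flow]
        self.load[nic] += nbytes
        return nic

team = DynamicBalancer(team_size=2)
team.send("a", 1000)                   # flow "a" lands on NIC 0
team.send("b", 10)                     # flow "b" lands on NIC 1
nic = team.send("a", 10, paused=True)  # after a break, "a" moves to NIC 1
assert nic == 1
```

Moving a flow only at a break in its traffic is what keeps the rebalancing cheap: packets within a burst stay on one NIC, so they cannot arrive out of order.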
In short, the teaming modes are as follows:
1. Switch Independent
2. Switch Dependent
|___> Static
|___> LACP (Link Aggregation Control Protocol)
There are three load balancing algorithms:
1. Address Hash
2. Hyper-V port
3. Dynamic
Thank you,
Nirav Soni