Now that we’ve gone through Spanning Tree Protocol and we understand its purpose there, to create a loop-free topology, we can go ahead and talk a little bit about EtherChannels. EtherChannels create redundancy in our network, and do so efficiently, so that we’re not just wasting a bunch of links.
If you end up with a situation like the image above, one of these links is going to end up being blocked. As we found out in the Spanning Tree lesson, the link will be blocked on the inferior switch’s side, on the port with the higher port number. Let’s say you want to actually use that extra link. Well, you could put it in a different VLAN, and then both links could forward traffic, one for each VLAN. This can work if you have the root priorities set correctly, though it’s a pretty specific circumstance that doesn’t come up much. What we really want is to use the additional redundant link on the same VLAN, and this is where an EtherChannel comes into play.
We can take the two interfaces in the picture and bundle them so that the two physical links appear as a single logical link to spanning tree. Spanning tree then counts the bundle as one link and doesn’t see a loop there. In short, an EtherChannel presents multiple physical interfaces as a single logical port to spanning tree.
This allows our redundant links to be utilized instead of blocked. It also gives us redundancy that fails over much more quickly. If an interface falls out of an EtherChannel, the line protocol of that physical interface goes down, but the EtherChannel just keeps forwarding traffic over the remaining links. The logical EtherChannel stays up as long as the minimum number of links remains up. You can have up to 16 links bundled in an LACP EtherChannel (with a maximum of 8 active at once; the rest are standby), and you can also specify the minimum number of links needed to keep the EtherChannel up. Say we have 8 links in our EtherChannel with the minimum set to 6. If 2 links go down, the EtherChannel stays up; if a third link goes down, bringing the active count to 5, the EtherChannel goes down.
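As a sketch of that minimum-links example, the threshold is set on the logical port-channel interface (the port-channel number here is just an assumption):

```
! On the logical interface (applies to LACP channels)
Switch(config)# interface port-channel 1
Switch(config-if)# port-channel min-links 6
! With 8 member links, the bundle stays up until fewer than 6 remain active
```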
EtherChannels can be dynamically negotiated or statically configured, and the best-practice recommendation is to always negotiate them dynamically. Two protocols are used to negotiate EtherChannels: LACP (Link Aggregation Control Protocol), the industry-standard protocol, and PAgP (Port Aggregation Protocol), a Cisco-proprietary protocol. PAgP is generally not recommended, and only LACP is referenced in the exam topics, so we’re only going over LACP.
In case you’re wondering, the main difference between the two protocols is the terminology: PAgP uses auto and desirable, whereas LACP uses active and passive as its port modes.
When you go ahead and bundle your links together, you also get increased throughput, since more bandwidth is now available. We can utilize all of the bundled links, and that’s really the main benefit: every link carries traffic rather than one of them sitting blocked.
That load balancing happens in a specific manner which we need to understand, since some potential issues can arise. The algorithm hashes source and destination information from each frame to decide which link to use. Hashing is a mathematical process that takes an input and produces an output number of a fixed length. Hashing algorithms are also very sensitive to their inputs: a small change in the input causes a very different output.
So let’s explore what this means. Say I’ve got two switches with two gigabit links between them, bundled in an EtherChannel as shown above, with a few workstations connected to one switch and a server connected to the other. The frames the server and the workstations send to each other traverse the EtherChannel. By default, the sending switch hashes the source and destination MAC addresses and gets a number that determines which link it will use to transmit the frame. This means that if one particular computer is always talking to that specific server, the traffic will generally always cross the same physical link, because the source and destination MAC addresses are always the same.
This starts to expose where link aggregation can run into trouble. We don’t have a single 2 Gbps link, although that’s how it appears to spanning tree; we’ve got two 1 Gbps links between our switches. If one particular workstation keeps hitting that server hard, it will only ever see one gigabit of throughput.
Utilizing one link in an EtherChannel more than the others is called link polarization. No single machine is ever going to see the benefit of the increased throughput; the benefit shows up when you have many machines with many different addresses spreading traffic across the links.
As seen above, different inputs are generally available to the load-balancing algorithm, depending on the model of switch you’re using. Some switches can even use TCP or UDP port numbers. As network engineers, we need to consider the properties of the likely traffic flows and select the inputs accordingly. If, for example, we have a router connected by an EtherChannel to a switch with a single server behind it, then source and destination MAC addresses would be a very poor choice, since the source MAC will always be the router’s and the destination MAC will always be the server’s.
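For that router-to-server scenario, hashing on IP addresses instead of MAC addresses would spread the flows out. As a sketch, the method is set globally (available methods vary by platform):

```
! Global command; hash on source and destination IP instead of MAC
Switch(config)# port-channel load-balance src-dst-ip
Switch(config)# exit
! Verify which method is currently in use
Switch# show etherchannel load-balance
```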
Load balancing works best when the number of links is a power of two: 2, 4, 8, or 16. The hashing algorithm distributes traffic across a number of buckets that is a power of two, so if you use 3 or 5 links, the buckets can’t be spread evenly across the links and you can end up with link polarization again.
Let’s talk about how we configure an EtherChannel and how it is negotiated. With LACP, our Link Aggregation Control Protocol, we have the port modes active and passive. If both sides are configured as passive, the EtherChannel will not come up. That’s kind of like having a wallflower, a really shy person, on one side of the room, and another wallflower on the other side. What do they do? They just stare at each other, each waiting for the other to do something. Both sides being passive will not bring up the link.
What happens if both are active, though? Well, it will take a little bit longer to come up. They sort of run into each other, then both step back for a moment and work out that they actually want to negotiate an EtherChannel.
If one side is set to passive and the other to active, that’s the quickest way to get your EtherChannel to come up. However, to avoid issues in the event of a configuration mistake (e.g., setting both sides to passive, in which case the channel never comes up), best practice is to always set both sides to active.
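As a minimal sketch of that best practice, an LACP EtherChannel is configured by putting the physical ports into a channel group in active mode (the interface numbers and channel-group number here are just assumptions):

```
! Bundle two physical ports into channel-group 1 using LACP active mode
Switch(config)# interface range gigabitEthernet 0/1 - 2
Switch(config-if-range)# channel-group 1 mode active
Switch(config-if-range)# exit
! The logical port-channel interface is created automatically;
! logical settings such as trunking belong on it
Switch(config)# interface port-channel 1
Switch(config-if)# switchport mode trunk
```

The same commands are run on the other switch, with mode active on both sides.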
There are a handful of prerequisites to configure an Etherchannel:
- All physical ports must be the same speed (100 megabit, gigabit, 10 gigabit, etc.)
- All physical interface configuration must match (speed and duplex)
- The same VLAN configuration must be on all interfaces (if a trunk, the same VLANs must be allowed)
That third item is a bit misleading, because the logical port settings from the first configured port are copied to the others in the channel group. So the allowed VLANs, spanning-tree settings like PortFast, and features like port security are copied from the first port to the rest. Best practice is to configure all the ports identically to begin with, so you know for sure what the EtherChannel configuration will be.
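Once the channel is configured, you can verify that the physical ports actually bundled. As a sketch (the channel number is an assumption):

```
! Overview of all channels; members flagged (P) are bundled in the port-channel,
! and a port-channel flagged SU is a Layer 2 channel in use
Switch# show etherchannel summary
! Detailed view of a single channel
Switch# show etherchannel 1 port-channel
```

A port that fails one of the prerequisites above typically shows up suspended rather than bundled, which is the first thing to check when a channel won’t form.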