This may be your first real introduction to Spanning Tree Protocol, or you may have been dealing with it for a long time. Of all the knowledge a network engineer can have, I feel being well versed in Spanning Tree Protocol is one of the most important. You could have a career focused heavily on routing and know BGP inside and out, but if you ever work in an enterprise or small business environment, being familiar with Spanning Tree Protocol, how it operates, and how to optimize it will become invaluable to you.
I want to note that the exam topics for the CCNA only include Rapid Per-VLAN Spanning Tree, so we are only talking about the Cisco-proprietary Spanning Tree Protocols. Industry-standard Spanning Tree Protocols also exist, and although the CCNA exam topics don't cover them, a little background on them will help us understand the progression here.
So, let's start with the original Spanning Tree Protocol, defined in the IEEE 802.1D standard. It was created to provide a loop-free switching topology. Let's talk about what loop-free means and where Spanning Tree Protocol comes into play.
The primary issue that creates the need for a loop-free switching topology is that there is no TTL value in the layer 2 header. If I have a broadcast or unknown unicast frame and there's a switching loop, a switch will continue forwarding that frame around the loop forever. Worse yet, because of the flooding nature of broadcast and unknown unicast traffic (a switch forwards the frame out every port in the same broadcast domain except the one it came in on), that traffic multiplies. Our single looping broadcast frame can turn into millions of looping broadcast frames in a very short period of time.
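To make the multiplication concrete, here is a toy back-of-the-envelope sketch. It is not a model of real switch behavior; it just assumes a hypothetical looped topology where each flood cycle re-sends every frame copy out two loop-facing ports, so the copy count doubles every hop.

```python
def storm_growth(initial_frames: int, flood_fanout: int, hops: int) -> int:
    """Copies of a looping broadcast frame after `hops` flood cycles.

    flood_fanout is the number of loop-facing ports each copy is
    re-flooded out of (all ports in the VLAN except the arrival port).
    With no layer-2 TTL, nothing ever removes a copy from the loop.
    """
    frames = initial_frames
    for _ in range(hops):
        frames *= flood_fanout
    return frames

# Hypothetical full mesh of three switches: each frame is re-flooded
# out the two other inter-switch links, doubling every hop.
print(storm_growth(1, 2, 20))  # 1048576 copies after only 20 hops
```

Even with this modest assumed fanout, one frame becomes over a million in twenty flood cycles, which is why a loop can bog a network down in seconds.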
With millions of looping frames, the switches become bogged down and may eventually even crash, bringing your network down entirely, all because of a switching loop. This is why 802.1D was created. The problem is that the original 802.1D was very slow to converge, meaning slow for the network to reach a stable, loop-free state.
As time went on, this became unacceptable, because the original 802.1D can take up to about a minute to converge. Back in 1990, a minute really wasn't that long for a network outage; it just wasn't a big deal. Nowadays, when everything relies on network connectivity, a minute of downtime can cost companies very large amounts of money.
The later revisions of Spanning Tree Protocol speed up that convergence time. Rapid Spanning Tree Protocol, 802.1w, changed the way Spanning Tree Protocol works just a little bit and significantly decreased the amount of time required for a port to go from blocking to forwarding. To create a loop-free topology, we block the highest-cost ports. A switch's cost to the root bridge is calculated as the sum of the port speed costs along the path to the root; the election process determines which switch is the root bridge, and the port with the lowest total cost to the root becomes the root port. If the root port or another upstream path to the root bridge fails, say a link or a switch goes down, a blocked port can then transition back into forwarding.
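The root port selection above can be sketched in a few lines. This is a simplified illustration with made-up port names; it uses the classic 802.1D short-mode port costs by link speed, adds the local port's cost to the neighbor's advertised cost to the root, and picks the lowest total. Real STP also breaks ties on sender BID and port ID, which is omitted here.

```python
# Classic 802.1D port costs by link speed (Mb/s -> cost).
PORT_COST = {10: 100, 100: 19, 1000: 4, 10000: 2}

def root_port(candidates):
    """candidates: list of (port_name, neighbor_advertised_cost_to_root,
    link_speed_mbps). The switch adds its own port cost on each link to
    the neighbor's advertised root path cost and chooses the lowest total."""
    return min(candidates, key=lambda c: c[1] + PORT_COST[c[2]])

ports = [
    ("Gi0/1", 0, 1000),  # directly attached to the root: 0 + 4 = 4
    ("Fa0/2", 4, 100),   # via a neighbor that is cost 4 from root: 4 + 19 = 23
]
print(root_port(ports)[0])  # Gi0/1
```

Notice the gigabit link wins even though the other port's neighbor is "closer" to the root, because cost, not hop count, decides.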
So let's talk a bit about our terminology. I threw some terms out there, like root port, designated port, root bridge, and the election process. Let's take a look at some of these terms here.
First, I'm sure you figured it out: a bridge is just another term for a switch. Let's get that out of the way here in case you weren't aware that bridge and switch are interchangeable.
Our root bridge is the switch elected as the root of the spanning tree. It is the bridge with the most superior, meaning the best, Bridge ID, or BID.
The Bridge ID is defined as our priority number plus the switch's MAC address. The priority is 32,768 by default, and a lower BID is more superior. So if the priority numbers on all of the switches in your topology are the same, the switch with the lowest MAC address will be elected. Honestly, this is problematic, because vendors manufacturing switches typically started at the bottom of their allocated MAC address ranges and worked their way up, so older switches tend to have lower MAC addresses.
So if you leave the priority at the default across your topology and rely only on the MAC address, chances are an older switch will be elected as your root bridge. That could be a good thing or a bad thing: your older switch might be the big beefy one you won't need to replace for a long time, or it could be the oldest box sitting out on the network, running on its last legs and due for replacement any minute. Not exactly what we want as network engineers.
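The election logic itself is just a lowest-wins comparison on (priority, MAC address). Here is a minimal sketch with hypothetical switch names and made-up MAC addresses, comparing the MAC as a string, which works because the format is uniform:

```python
# Hypothetical switches: name -> (bridge priority, MAC address).
switches = {
    "new-core":   (32768, "b0:aa:77:12:34:56"),
    "old-access": (32768, "00:05:9a:00:00:01"),
}

def elect_root(bids):
    # Lowest BID wins: compare priority first, then MAC, lower is better.
    return min(bids, key=lambda name: (bids[name][0], bids[name][1]))

print(elect_root(switches))  # old-access: same priority, lower MAC
```

With equal priorities, the old access switch with the low, early-allocation MAC wins, which is exactly the accidental-root-bridge problem described above.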
The bridge priority is a configurable attribute, in increments of 4,096. Why that increment? With Per-VLAN Spanning Tree, the effective priority is really your configured number plus the VLAN ID, which occupies the low 12 bits of the 16-bit priority field. Twelve bits give 4,096 possible values (VLAN IDs 0 through 4,095), so the configured portion can only move in steps of 4,096.
Our Bridge Protocol Data Unit, or BPDU, is the data unit sent around the network so the switches can figure out which switch is the root, the core of the network. The root gives every switch a common point to build its paths toward, so the network has continuous connectivity without creating loops. The BPDU carries the BID and also the path cost to the root from the perspective of the advertising switch.
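The fields just mentioned can be pictured as a small record. This is only a sketch of the fields discussed here, with made-up values; a real 802.1D configuration BPDU carries more (protocol ID, flags, port ID, and the timers).

```python
from dataclasses import dataclass

@dataclass
class Bpdu:
    root_bid: tuple      # (priority, MAC) of who the sender believes is root
    root_path_cost: int  # the sender's cost to reach that root
    sender_bid: tuple    # the advertising switch's own BID

# Hypothetical hello from a switch one gigabit hop (cost 4) from the root:
hello = Bpdu(root_bid=(32768, "00:05:9a:00:00:01"),
             root_path_cost=4,
             sender_bid=(32768, "b0:aa:77:12:34:56"))
print(hello.root_path_cost)  # 4
```

A neighbor receiving this BPDU adds its own port cost to `root_path_cost` when deciding its root port, tying together the cost calculation and election pieces above.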