In this section we’ll talk about the general topologies you’ll see in your work: in the Wide Area Network (WAN), in your campus and smaller offices, and in the data center. Speaking of the data center, I’d like to start with the spine and leaf topology.
Spine and Leaf
The spine and leaf topology feels a bit unusual when you first look at it. You have your spine up top and your leaves down on the bottom. Every leaf is connected to every spine node, but none of the spine nodes are connected to each other. This is a very fast topology with a very predictable traffic pattern: if an endpoint needs to talk to an endpoint connected to another leaf, the traffic always goes leaf, spine, leaf. With routed links between our spine and leaves, we can use Equal-Cost Multi-Pathing (ECMP) to load balance the traffic across the links between the spine and leaves.
In the event a spine switch were to go down, traffic would redistribute among the remaining links. Great redundancy and a really fast topology, but it’s mostly found in data centers. This is because in a data center your leaves frequently need to talk to each other, which is much less common in an Enterprise Campus environment.
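The ECMP behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not how switch hardware actually does it: we hash a flow’s 5-tuple and use the result to pick one of the leaf’s spine uplinks, so a given flow always takes the same path, and flows automatically re-hash across the surviving links when a spine fails. The spine names are invented for the example.

```python
import hashlib

def pick_uplink(flow, uplinks):
    """Pick a spine uplink for a flow by hashing its 5-tuple (ECMP-style).

    A real switch does this in hardware per-flow; this sketch just shows
    that the choice is deterministic for a flow and that flows spread out
    over whatever uplinks are currently available.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

# A leaf with four spine uplinks; a flow is (src_ip, dst_ip, proto, sport, dport).
uplinks = ["spine1", "spine2", "spine3", "spine4"]
flow = ("10.0.1.5", "10.0.2.9", "tcp", 49152, 443)

chosen = pick_uplink(flow, uplinks)  # the same flow always hashes to the same spine
# If spine3 fails, the leaf simply hashes across the remaining three uplinks.
after_failure = pick_uplink(flow, [u for u in uplinks if u != "spine3"])
```

The key property is that packets of one flow stay on one path (no reordering), while different flows spread across all equal-cost paths.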
Hub and Spoke
Now, let’s talk about our WAN topologies. First up is the hub and spoke topology. In a hub and spoke, you don’t have many links; you have just the bare minimum needed for full connectivity between all of your sites. You end up with cost savings, of course, because you have the minimum number of links required, and with simplified management.
Hub and spoke is very common today, and was even more common in the past. For a branch office to get to the internet, traffic would go through the hub and out to the internet from there. The same applies if one branch office needs to talk to another: the traffic goes through the hub and over to the destination branch office. It remains common because it is much simpler to manage: you have a single point of management at your hub, a single firewall (or perhaps a single cluster), one set of content filtering to configure, one set of rules.
Now the problem is that you have a bottleneck. Everything goes through the hub, so it has to be a big, beefy router to handle all of this, especially if you have a data-intensive business where the branch offices need to talk to each other often. The hub could get overwhelmed, or worse. In the case of VoIP, if there is a lot of office-to-office, extension-to-extension calling going on, the hub needs to handle that traffic with very little latency so that call quality is not degraded and the user experience does not suffer.
In this topology, we also have a single point of failure at the hub. Of course, if you have a cluster of firewalls at the hub, you’re much better protected against a device failure. But really, what you’re worried about is not the physical failure of the firewall; it’s a failure of your internet or wide area network connection. When all of your branch office WAN lines come out of your hub site, that usually means all of that fiber enters the hub site through a single conduit, or comes in over a single utility pole. One little backhoe could cut that conduit and… BAM… all of the connectivity between all of your locations is dead in the water. Nobody can talk to anything, your internet connectivity is down, and your business grinds to a halt.
That is a single point of failure, and it’s problematic to run a pure hub and spoke like that. If this is how your company’s connectivity is set up, it would be wise to check with your network service provider and see whether you have full path redundancy between your fiber links.
Full Mesh
Moving on to the other extreme, let’s talk about a full mesh topology. Full mesh connectivity doesn’t necessarily mean you wouldn’t still have just one internet connection at a core site; you could, and to get out to the internet all of the sites would still go through that core site. That remains a single point of failure for that one service (public internet connectivity).
But let’s say each branch office has its own public internet connection, and they’re all connected to each other in a full mesh. This provides the maximum amount of redundancy, though you have a lot of links. That could mean a bunch of VPNs over the public internet to manage, which becomes completely unreasonable after a certain number of sites. Alternatively, if you’re purchasing those links as a wide area network service (MPLS, VPLS, what have you) from your service provider, they will generally charge you for that kind of connectivity, meaning it will be very expensive.
From a manageability perspective, similar to managing the VPNs, you might be managing policies at every individual site, and if you have 40, 50, or 100 of these separate little branch offices, the amount of management could be infeasible. The administrative overhead could simply be prohibitive. It’s a consideration when thinking about how much decentralization you want in your network.
As a purely academic little bit here, the number of links required for a full mesh connection is:
Links = N × (N − 1) / 2
So, if I had eight offices, I’d have 8 × 7 / 2 = 28 links in order to have full mesh connectivity for N = 8.
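The formula is easy to verify with a small helper function, written here just for illustration:

```python
def full_mesh_links(n: int) -> int:
    """Number of links needed to fully mesh n sites: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Eight offices need 28 links; notice how quickly this grows with site count.
print(full_mesh_links(8))    # 28
print(full_mesh_links(50))   # 1225
```

The quadratic growth is exactly why full mesh becomes unreasonable past a certain number of sites.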
Partial Mesh
Now for the happy medium between those two extremes: partial mesh. Partial mesh connectivity gives you some of the best of both worlds. You could have a hub and spoke with dual hubs, for example. You’ll have fewer links and therefore lower cost. It can be tricky, though, to keep track of how each device is connected and who is connected to whom in a partial mesh. You’re going to have to keep really good records in your Excel sheet to keep that all together.
Typically some sites don’t have redundant connectivity in a partial mesh. This often ends up being a natural evolution from a hub and spoke: you find that one of your spokes needs to talk a lot to another spoke, so you add a direct connection between them and that traffic no longer has to go through the hub. Overall, it is simpler to manage than a full mesh.
Collapsed Core (Two-Tier)
Moving on to our collapsed core design. This is the kind of network everything wants to be. It’s a two-tier network: we have our combined core-distribution layer and our access layer. This design is much more typical of small to medium-sized businesses. In a collapsed core, the core and distribution layers are merged into one, so you have your redundant distribution switches with redundant connectivity down to your access layer switches.
This is resilient and scalable. You can scale to a really big network on a two-tier design; there’s no set size that Cisco has defined at which you’re too large for a two-tier network, and it remains very resilient and redundant. One of the core switches could burst into flames and it wouldn’t matter: all of your access switches still have connectivity through your redundant connections. You could lose an access layer switch and it would only affect the individual people on that floor; it really helps limit your fault domain (that’s a term to remember for the exam).
A fault domain defines how wide an effect a specific fault has. Another way to put it: how many people will be calling you if a particular link, device, port, etc. fails.
Three-Tier
Say your organization becomes really big, and you get to the point where you’re running out of ports and don’t see how you can add another distribution switch. That’s when we move to the traditional three-tier design, where we split the core and distribution blocks out into discrete pieces. This is the ultimate in scalability: simply tack on another distribution-access block and you have an additional large site interconnected to your Enterprise LAN. Typically this is what happens when you have multiple buildings; each distribution-access block is a building, and you continue scaling like that. It really is the ultimate in scalability, and it adds redundancy. Each of these tiers has individual roles to fulfill, which we’ll talk about shortly.
The well-organized three-tier design also helps with ease of understanding and really simplifies your network, rather than leaving you with a spiderweb or rat’s nest of connectivity. Network engineers can wrap their heads around the three-tier design; it’s been around for a long time, it aids understanding, and it facilitates good fault isolation. If one of the access switches dies, you know that only that floor is affected.
Each of the individual Access, Distribution, and Core layers has its own roles to fulfill. The access layer provides services like Power over Ethernet (PoE) and port security; it’s also where you do your rate limiting, and it’s where you run spanning tree. Configurations from the spanning tree toolkit like PortFast, BPDU Guard, and Root Guard are implemented in the access layer. This is also where you would implement 802.1X.
There’s a key design consideration with the Access Layer. Many engineers prefer a routed access design. This is where you have layer 3 links, routed links, between your access layer switches and the distribution switches. This generally wasn’t possible in the past, due to the expense of layer 3 switches, though this feature set is now ubiquitous in business grade switches.
The routed access design removes spanning tree from the network, which is generally a good thing. Spanning Tree Protocol is a very old protocol, developed back in the 1980s, and is quite slow to converge; layer 3 routing protocols are much faster and much more stable. The routed access design, however, removes the ability to have site-wide VLANs. That means if you have an ACCOUNTING VLAN, and your accounting team works in various areas of the building, you cannot put each of those groups of accountants in the same VLAN, because a VLAN will not span multiple closets. This can be difficult for some engineers to accept and requires a change in the way we think about segregating our network traffic.
The switched access design, by contrast, is the classic design where the connections between the access layer and distribution layer are trunk ports. VLANs can span the entire organization, creating very large broadcast domains. This lets us keep our accounting team in the one ACCOUNTING VLAN regardless of where they are in the Enterprise Campus, though it forces us to rely on spanning tree to create a loop-free topology and to converge after a failure.
The distribution layer is where you aggregate multiple network closets together, typically an individual building or a section of your network. This is where you implement redistribution between routing protocols, apply QoS policies, and route between VLANs if you have a switched access design. The distribution layer summarizes the building’s routes to advertise into the core. This layer is also responsible for packet filtering and policy-based access.
Finally, moving up into the core, the only real thing we worry about in the core is speed, speed, and more speed. We want to do absolutely minimal packet manipulation; we might run some QoS, but that’s really about it. The whole idea is for the core to be a highly redundant, fault-tolerant interconnect between all of the distribution blocks. As we add more buildings and more sections to scale the network out, we’ll have many distribution blocks, and the core means they don’t all need to be meshed directly to each other: there’s one central place to aggregate those distribution block connections and tie them all together.
SOHO (Small Office, Home Office)
Moving away from our campus to our smaller office, let’s talk about the SOHO: the small office, home office. This is typically a single router that is single-, dual-, or multi-homed. As for what homing means, there are three options. With a single-homed connection, you have one connection to one ISP. Dual-homed typically means two connections to one ISP, and multi-homed means at least one connection to each of multiple ISPs. The SOHO uses either integrated or external switching for LAN access; you can get integrated switching modules for your ISRs, of course, and connect your phones and computers into the network as needed.
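The homing terms above can be captured in a tiny classifier. This is a hypothetical helper written just for this example (the ISP names are invented); the rules follow the definitions given in the text.

```python
def classify_homing(links):
    """Classify a site's WAN attachment from a list of (isp, circuit) links.

    Per the definitions above: one link to one ISP is single-homed,
    two or more links to one ISP is dual-homed, and links to two or
    more different ISPs make the site multi-homed.
    """
    isps = {isp for isp, _ in links}
    if len(isps) >= 2:
        return "multi-homed"
    if len(links) >= 2:
        return "dual-homed"
    return "single-homed"

print(classify_homing([("ISP-A", "circuit1")]))                        # single-homed
print(classify_homing([("ISP-A", "circuit1"), ("ISP-A", "circuit2")])) # dual-homed
print(classify_homing([("ISP-A", "circuit1"), ("ISP-B", "circuit1")])) # multi-homed
```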