When you look at the exam topics for the CCNA, you’ll notice that this particular topic feels a little out of place. Cisco is asking us to explain the fundamentals of virtualization, specifically as it pertains to virtual machines and containers. This isn’t about network device virtualization in particular, but about virtual machines in general.
First, let’s talk about what a virtual machine really is, and how it differs from our traditional architecture. Traditionally we have single physical machines, each running a single operating system. Generally these machines are single purpose. You may have one physical server running domain directory services, a separate one running DNS, and another providing DHCP services. This ends up giving us a lot of sprawl in larger organizations, where you may need several domain controllers, several DHCP servers, and perhaps separate machines for the many different applications your organization runs.
It would be common to deploy only one application on each physical machine, because we want to separate applications to limit software conflicts within the operating system. So we end up with rows and rows of physical machines. Every time we need to spin up a new machine, we have to send someone out to the datacenter to rack and stack a new physical device and install a new operating system. This also leads to the physical machines often being underutilized, which does not look good for the efficiency of the IT organization within the business.
On a physical machine, the Operating System (OS) has direct access to the machine’s hardware. The OS interacts directly with the processor, the components on the motherboard, and the memory, and it handles its own scheduling for that hardware. As far as the operating system is concerned, it has full access to all of the hardware on that physical machine.
Step onto the stage, virtualization. With virtualization, we still have a single operating system running on the hardware, but this operating system acts as a hypervisor. Now what does a hypervisor do? A hypervisor presents virtual hardware to a guest operating system. As far as the guest OS is concerned, it has physical hardware available to it, and it performs all of its operating system tasks as it normally would. What it is actually interacting with is virtualized hardware: the virtual machine that the hypervisor presents to it. The hypervisor handles the scheduling and resource management for the physical hardware, taking the instructions from the virtual machines and scheduling them out so that the physical hardware can respond appropriately.
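The hypervisor’s job of multiplexing guest work onto the physical hardware can be sketched as a toy round-robin scheduler. This is a conceptual model only, not any real hypervisor’s behavior or API; all of the names here (`VM`, `Hypervisor`, `run_slice`) are illustrative.

```python
from collections import deque

# Toy model: a hypervisor gives each guest VM a turn on one physical CPU.
# Purely illustrative -- real hypervisors schedule vCPUs, handle traps,
# memory mapping, and I/O, none of which is modeled here.

class VM:
    def __init__(self, name, instructions):
        self.name = name
        self.queue = deque(instructions)   # pending guest "instructions"

class Hypervisor:
    def __init__(self, vms):
        self.vms = deque(vms)              # round-robin run queue

    def run_slice(self):
        """Give the next VM one time slice on the physical CPU."""
        vm = self.vms.popleft()
        instr = vm.queue.popleft()
        print(f"CPU executes {vm.name}: {instr}")
        if vm.queue:                       # re-queue VMs with work left
            self.vms.append(vm)

hv = Hypervisor([VM("dns-vm", ["resolve A record"]),
                 VM("dhcp-vm", ["offer lease", "ack lease"])])
while hv.vms:
    hv.run_slice()
```

Each guest believes it has the CPU to itself; the hypervisor is the one deciding who actually runs and when.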
This can significantly increase utilization of our existing hardware and improve efficiency. We went from separate physical machines that could each run only a single workload to one physical machine that can run many operating systems, each an isolated computing environment running its own workload. The primary problem we saw with running multiple applications on a single machine was software conflicts. With a virtual machine, we can dedicate an entire OS to an application or workload, but we don’t need to dedicate an entire physical machine.
Hypervisors provide virtual machines (VMs) with a virtual switch (vSwitch) to connect their network adapters to. This vSwitch acts just like a regular layer 2 switch: it forwards traffic based on the destination MAC address. The virtual network adapters each have their own MAC address, generated by the hypervisor so that each one is unique.
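The layer 2 behavior described above can be sketched in a few lines. This models any generic learning switch, not a specific hypervisor’s vSwitch implementation; the class name and frame fields are illustrative assumptions.

```python
# Minimal sketch of layer 2 switching: learn which port each source MAC
# arrived on, forward by destination MAC, and flood unknown destinations
# out every port except the one the frame came in on.

class VSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source MAC
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known: one egress port
        # unknown destination: flood to all ports except ingress
        return [p for p in range(self.num_ports) if p != in_port]

vs = VSwitch(num_ports=4)
print(vs.receive(0, "aa:aa", "bb:bb"))  # dst unknown yet: floods [1, 2, 3]
print(vs.receive(1, "bb:bb", "aa:aa"))  # aa:aa was learned on port 0: [0]
```

After the second frame, the switch knows where both MACs live, and further traffic between them goes out exactly one port.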
Some vendors, such as VMware, have elaborate virtual networking solutions that are worth learning more about if you’re looking to support those systems.
You may have considered that although VMs provide better physical hardware utilization, there still seems to be a significant amount of waste. For each application that’s running, an entire instance of an OS needs to run. Not only does this cause inefficiency from running the background processes of each operating system, it is also a huge waste of storage.
A separate copy of all operating system files needs to be created for every VM! A Windows Server 2016 installation requires a minimum of 32 GB of disk space on its own. Mind you, many of our enterprise applications won’t necessarily run on Windows Server, but many do.
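A quick bit of arithmetic shows how fast that duplication adds up. The 32 GB figure is the Windows Server 2016 minimum mentioned above; the VM count of 20 is an assumed example.

```python
# Back-of-the-envelope math: every VM carries its own full copy of the OS.
os_footprint_gb = 32    # Windows Server 2016 minimum disk footprint
vm_count = 20           # assumed example fleet size
duplicated_os_gb = os_footprint_gb * vm_count
print(f"{vm_count} VMs x {os_footprint_gb} GB = {duplicated_os_gb} GB of OS files alone")
# -> 20 VMs x 32 GB = 640 GB of OS files alone
```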
This is where containers come in. Containers allow the core operating system files to be shared while still maintaining a separate execution environment for each workload. This provides protection against software conflicts, increases physical utilization efficiency, and reduces the waste and bloat seen with VMs. Not everything can be containerized, but when a workload can be, it is a superior architecture.
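To make the contrast concrete, here is a rough comparison of the storage footprint for the same set of workloads run as VMs versus containers. The container image sizes are assumptions chosen purely for illustration, not measured values.

```python
# Rough comparison: VMs each duplicate a full OS, while containers share one
# base image and add only a small writable layer each. Image sizes below are
# assumed for illustration.
workloads = 20
vm_storage_mb = 32 * 1024 * workloads      # full 32 GB OS copy per VM
shared_image_mb = 200                      # one shared base image (assumed)
per_container_mb = 50                      # writable layer per container (assumed)
container_storage_mb = shared_image_mb + per_container_mb * workloads
print(f"VMs: {vm_storage_mb} MB vs containers: {container_storage_mb} MB")
```

The exact numbers will vary wildly by OS and image, but the shape of the result is the point: the shared base is paid for once, not once per workload.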