Data Center Design

Example Data Center Layout

Welcome to Data Center Design.

The data center is a really cool place because it's where the latest technology in networking, servers, storage, cooling, and power all comes together. Because of the high density of technology that is critical to businesses, these facilities are generally guarded with very high security, and great care is taken in designing them. The data center also leans heavily on virtualization, which helps us get the most bang for our buck in terms of hardware utilization while also providing better security and manageability.

To give you a quick look at what a data center looks like, I found this image online that I feel illustrates a real-world data center very well. You see rows of server racks with cable management on top. There's office space for the staff who manage and run the data center. Over here you see the UPSs for power redundancy and some other high-security areas. You have your SANs down here in the bottom left, and the racks are all organized with small hot aisles and large cold aisles. Mind you, this is a smaller data center, but a lot of the images out there don't give you a realistic look at what an average data center is like, and I felt this illustration did a good job of that.

From the network architect's perspective, we should know how to organize the networking infrastructure in a way that allows for maximum port and feature density while maintaining a logical, easy-to-manage layout. It's largely considered the network engineer's responsibility to design and maintain the data center, with the systems engineers maintaining the actual servers inside it. In this video we'll touch briefly on some design considerations for the data center as a whole, and then focus on the logical and physical network infrastructure that connects it all together, from one data center to many.

Cisco has defined the enterprise data center architecture model, which has three levels: data center foundation, data center services, and user services. The foundation is the raw hardware and the cabling that connects it all. Data center services are services consumed by the data center itself, like load balancing and intrusion prevention, and once those are in place we can provide user services. We'll be focusing on the data center foundation layer.
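
Just to visualize the layering, here's a minimal Python sketch of the model. This is my own illustration of the three levels described above, not anything Cisco publishes, and the example entries are only the ones mentioned in this lesson.

```python
# The three levels of Cisco's enterprise data center architecture model,
# listed bottom-up; each level depends on the ones below it.
enterprise_dc_architecture = [
    ("Data center foundation", ["raw hardware", "cabling that connects it all"]),
    ("Data center services",   ["load balancing", "intrusion prevention"]),
    ("User services",          ["the services ultimately delivered to users"]),
]

for level, examples in enterprise_dc_architecture:
    print(f"{level}: {', '.join(examples)}")
```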

This layer has three components: virtualization, unified computing, and unified fabric. Unified fabric is sometimes a bit misunderstood. The fabric of the data center is the network cabling and interconnects that allow everything to communicate. Not only does it carry LAN traffic, but with the unification of the SAN (storage area network) into the LAN, we now have a unified fabric. Unified computing refers to the physical compute resources that our virtualized server environment runs on. Long gone are the days of running a physical server with a single host operating system; most data center servers are now virtual machines, and those virtual machines run on unified compute resources. The thing that allows these two pieces to exist and work effectively is virtualization technology.

Virtualization in the Data Center

The data center topology components look like this. At the bottom we have our virtualized storage pools, which live in storage appliances. We also have our virtualized network devices, either using chassis virtualization with something like VSS or using virtual device contexts (VDCs) to get higher utilization and separation out of our existing hardware. The storage appliances connect into the VSAN and communicate with the unified computing resources, generally using FCoE as a replacement for native Fibre Channel, or something like iSCSI, providing block-level storage access over a CNA, or converged network adapter. The CNA replaces the HBA and can provide both LAN and SAN connectivity to the unified compute resource in a single adapter. Finally, at the top we have the virtualized server infrastructure that runs on top of our physical hardware. As network architects, we're mostly concerned with the bottom two layers here, so that's what we'll be discussing in further depth.
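
Here's a purely illustrative Python sketch of that bottom-up stack and of why the CNA is described as replacing the HBA. The layer names and adapter roles are just the ones from the paragraph above, expressed as data; this is not vendor code.

```python
# Bottom-up view of the data center topology described above.
topology_layers = [
    ("Virtualized storage pools",  "storage appliances exposing block storage"),
    ("Virtualized network / VSAN", "VSS or VDCs; FCoE or iSCSI as the transport"),
    ("Unified computing",          "physical hosts using CNAs instead of NIC + HBA"),
    ("Virtualized servers",        "the virtual machines running the workloads"),
]

for layer, role in topology_layers:
    print(f"{layer}: {role}")

# A legacy host needed two adapter types; a CNA carries both roles in one card.
legacy_host_adapters   = {"NIC": "LAN (Ethernet)", "HBA": "SAN (Fibre Channel)"}
converged_host_adapter = {"CNA": "LAN + SAN (Ethernet plus FCoE or iSCSI)"}

print("Legacy adapters:", legacy_host_adapters)
print("Converged adapter:", converged_host_adapter)
```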

Challenges for Data Center Design

Being a highly dense and converged environment, the data center faces some unique challenges that the enterprise campus does not. Two of the big ones are power and cooling. Cooling is the single largest power-consuming service in the data center and can often account for half of a data center's entire power consumption. Of course, these are not the only challenges. When you're moving into a data center, or building your own, you'll also be given a set of specs to consider. One is maximum load, or weight: with towering racks of dense compute and storage equipment, the floor can only support so much. Often there's actually a separation between the floor and the subfloor to allow for cooling airflow. Then, of course, the primary thing you're renting in a data center, worth its weight in gold and then some, is space. This is where being able to pack as much punch into as small an area as possible is a huge benefit, due to the cost savings. You may be faced with a choice where you can buy a server that has the capabilities you need but is two rack units (2U) tall, or you can spend twice as much and get a server that's only 1U tall; you may end up spending that extra money to get the smaller server.
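
To make that space tradeoff concrete, here's a back-of-the-envelope sketch. All the numbers (server prices, per-rack-unit cost, hardware lifetime) are made up purely for illustration; the point is only that rack space carries a recurring cost that can outweigh a denser server's higher purchase price.

```python
# Hypothetical figures for illustration only -- real pricing varies widely.
server_2u = {"price": 5_000,  "rack_units": 2}   # cheaper box, twice the space
server_1u = {"price": 10_000, "rack_units": 1}   # denser box, twice the price

cost_per_ru_per_month = 150   # assumed cost of one rack unit of space per month
lifetime_months = 36          # assumed hardware lifetime

def total_cost(server):
    """Purchase price plus the rack space the server occupies over its lifetime."""
    space_cost = server["rack_units"] * cost_per_ru_per_month * lifetime_months
    return server["price"] + space_cost

print("2U server total:", total_cost(server_2u))  # 5000 + 2 * 150 * 36 = 15800
print("1U server total:", total_cost(server_1u))  # 10000 + 1 * 150 * 36 = 15400
```

With these made-up numbers, the pricier 1U box comes out slightly ahead once space is factored in, which is exactly the kind of math that drives density in the data center.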

In regard to the exam, I'd commit this table to memory. You can probably picture the question now: a multiple-choice item asking "which of the following are not challenges in the data center?" So be sure you're aware of what the physical and technical challenges are so you're prepared.

Why Cable Management is Necessary

Hot Aisle – Cold Aisle Design

Now, cooling and cabling might feel like somewhat unrelated subjects, but proper cable management can be the difference between proper cooling and overheating. It allows for proper airflow, and it also reduces the risk of mistakes and outages caused by unplugging the wrong cable, either because a cable was in the way or because you were unable to properly trace the cable you needed.

Power and cooling should both be highly
redundant systems. Battery and generator backups should be in place with
regular testing schedules. The best backup system is no good at all if it
doesn’t work, and testing during an outage is not a great way to find out if it
works.

Cooling can be one of those things that's difficult to fix quickly. Workloads can be routed away from a bad server or a bad router, but if your HVAC goes down, it'll likely be hours before you can get an HVAC technician onsite, and possibly longer before it's fixed. In that time your whole data center will have practically melted. These are things to keep in mind not only if you're designing your own data center, but also when building a set of questions to ask potential data center providers when selecting one to host your company's equipment. Understanding what physical backup mechanisms are in place is crucial when deciding whether to host your business's critical data in their data center.

In case you've never been in a data center, the diagram on the right shows how the cooling is done. The larger aisles you typically walk down to access equipment are cold aisles. All of the equipment's fans draw air from this aisle and expel exhaust into the hot aisle. This is a consideration when purchasing equipment for a data center: many manufacturers sell the same equipment with the exhaust going in either direction, so you can match the airflow orientation of your data center.

Enterprise DC Architecture
