Virtualization and Its Far-Reaching Effects in the Data Center

Server virtualization has been going strong for more than 10 years now, and as the trend continues unabated, it increasingly appears that companies will have to rethink their data center infrastructure to keep up.

The goal of server virtualization is to create a pool of computing resources that applications can draw on as needed. Historically, each application ran on one or more physical servers that were generally dedicated to it. In most cases, that meant companies were running dozens or hundreds of grossly under-utilized servers, typically operating at just 20% to 25% of peak capacity. When 75% or more of a computer’s capacity goes unused most of the time, that represents a tremendous waste of computing resources.
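
To put rough numbers on that waste, here is a back-of-the-envelope consolidation sketch. The server counts and utilization figures below are illustrative assumptions, not measurements:

```python
import math

servers_before = 100    # dedicated physical servers (illustrative)
avg_util = 0.25         # typical utilization: 20% to 25% of peak
target_util = 0.70      # utilization achievable on virtualized hosts

# Total demand, expressed in "fully utilized server" units
demand = servers_before * avg_util              # 25.0 server-equivalents

# Physical hosts needed once that demand is pooled onto virtualized servers
hosts_after = math.ceil(demand / target_util)   # 36 hosts

print(f"{servers_before} servers consolidate to {hosts_after} hosts")
```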

How Virtualization Works

Virtualization is intended to change all that. It works by using a hypervisor to create a layer of abstraction between the server’s operating system and applications on one side, and the underlying hardware on the other: the CPU, memory, I/O cards and the like. Instead of running on physical servers, applications run inside “virtual machines,” or VMs. Because of this abstraction layer, VMs can easily be moved from one physical server to another, and a single application can span multiple VMs.
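
As a concrete illustration, the open source libvirt library exposes this abstraction programmatically. The short sketch below lists the VMs a hypervisor is running; the qemu:///system URI is an assumption about a local KVM/QEMU setup, so adjust it for your environment:

```python
# Listing the VMs a hypervisor is running via the libvirt Python
# bindings (pip install libvirt-python).
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # info() returns state, max memory, memory (KiB), vCPUs, CPU time
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem // 1024} MiB memory")
finally:
    conn.close()
```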

Virtualization technology makes it easy for companies to spin up new servers to meet business demands. Rather than having to buy a physical server to support a new application and then configure it for whatever the application requires, a process that takes days if not weeks, companies can now simply spin up a new VM wherever a physical server has spare capacity. And given that a single server can run multiple VMs, utilization rates go way up; rates in the neighborhood of 70% are not uncommon.
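
Spinning up a VM can likewise come down to a few API calls. The following is a minimal sketch using libvirt; the domain name and sizing are made up, and a real definition would also include disks, networking and a boot device:

```python
# Defining and powering on a new VM through libvirt. The XML below is a
# bare-bones illustration; libvirt fills in sensible defaults.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(DOMAIN_XML)  # register the VM definition
    dom.create()                      # power it on
    print(f"Started {dom.name()}")
finally:
    conn.close()
```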

Virtualization Challenges and Opportunities

As companies continue to virtualize physical servers, they find that it changes the nature of data center traffic. In short, more traffic remains within the data center, traveling between VMs on the same or different servers. Gartner expects the trend to continue, predicting that by 2014 more than 80% of data center traffic will be server-to-server (“Your Data Center Network Is Heading for Traffic Chaos,” Bjarne Munch, Gartner, Inc., April 27, 2011). What’s more, within four years, Gartner expects bandwidth (as measured in I/O) per rack to increase 25 times.

What all this adds up to is data centers that require new types of networks, including data center fabric networks. Such mission-critical networks will likewise require mission-critical power systems, from UPSs to PDUs, to help ensure they don’t go down.

Highly virtualized data centers also house increasingly dense racks loaded with high-powered servers. That will drive the need for new approaches to data center cooling, such as row- and rack-oriented cooling, which can direct cooling capacity precisely where it’s needed in a way that traditional room-based systems simply can’t.

To manage all this infrastructure, data center operators are also likely to become increasingly interested in Data Center Infrastructure Management (DCIM) tools to help them better measure, monitor, plan and optimize their data center environments.

APC has products that address each of these challenges, which means virtualization technology presents ample opportunity for its partners.
