White Paper as Manual for the Cloud’s “Engine”

For modified car aficionados, projects aren’t finished just because they’ve increased engine performance. Now they have to “tune” the rest of the car—the suspension, the tires, steering, the body cosmetics—to complement what’s under the hood.

So OK, you may be thinking, what’s the tie to data centers? Well, IT virtualization, which can be thought of as the “engine” behind cloud computing, gives us a parallel to modified car projects, only too often without the careful attention to tuning found among lovers of turbochargers and chromed rims.

IT virtualization, the abstraction of physical server, storage, and network resources, harnesses more computing output from IT hardware. Virtualization revs up utilization, increasing it from around 5 to 10 percent for a non-virtualized server to 50 percent or higher for a virtualized one. Yet too often, IT organizations don’t follow through with adjustments to data center physical infrastructure (DCPI) like power and cooling to balance the changes wrought by virtualization: denser, more dynamic computing loads.

Most of you know about this “right-sizing” that should coincide with virtualization, but the market still needs education about what’s at stake. To help meet this challenge, Schneider Electric offers a newly revised white paper #118, “Virtualization and Cloud Computing: Optimized Power, Cooling, and Management Maximizes Benefits.”

This is the ideal white paper for understanding the DCPI improvements that bring optimal efficiency and performance as part of a virtualization project. Read the paper for all the details, but here are a few highlights:

  • Virtualization brings much better energy efficiency from IT assets, but if the DCPI is left untouched, the data center’s power usage effectiveness (PUE) typically degrades. The paper explains why this is tied to unused power and cooling capacity and the associated “fixed losses” from those DCPI assets; the short calculation after this list sketches the effect.
  • Virtualized servers tend to be installed and grouped in ways that create high-density areas that can lead to hot spots. The paper presents options for minimizing or isolating these hot spots, including the creation of high-density “pods” within a data center that allow for efficient rack- or row-based cooling and/or air containment with optimal, shorter airflow paths.
  • The paper’s appendix is a case study scenario that fleshes out the benefits of right-sizing, based on calculations generated with APC’s Virtualization Energy Cost Calculator TradeOff Tool. The case study shows that for a 1 MW data center that has undergone virtualization but no DCPI adjustments, the electric bill could be expected to be cut by 17 percent, but PUE would degrade to 2.25. However, with right-sizing of DCPI for the same virtualized data center, the electric bill would drop by 40 percent, while PUE would improve to 1.63.
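To make the PUE point concrete, here is a minimal sketch of the arithmetic behind it. PUE is total facility power divided by IT power, so when virtualization shrinks the IT load but the power and cooling infrastructure keeps carrying roughly the same fixed losses, the ratio gets worse even though total consumption falls. The load and overhead figures below are hypothetical, chosen only to illustrate the direction of the effect; they are not the inputs behind the white paper’s case study.

```python
# Minimal PUE sketch. All kW figures are hypothetical illustrations,
# not data from White Paper 118 or the TradeOff Tool.

def pue(it_load_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_load_kw + overhead_kw) / it_load_kw

# Before virtualization: the IT load and the DCPI sized to support it.
it_before, overhead_before = 800.0, 600.0

# After virtualization the IT load drops sharply, but an untouched DCPI
# still carries most of the same fixed losses (transformers, UPS, fans).
it_after = 450.0
overhead_untouched = 550.0    # DCPI left as-is
overhead_rightsized = 290.0   # DCPI right-sized to the new, smaller load

print(f"Before virtualization:         PUE = {pue(it_before, overhead_before):.2f}")
print(f"Virtualized, DCPI untouched:   PUE = {pue(it_after, overhead_untouched):.2f}")
print(f"Virtualized, DCPI right-sized: PUE = {pue(it_after, overhead_rightsized):.2f}")
# Total facility power falls in both virtualized cases, but only
# right-sizing the DCPI keeps PUE moving in the right direction.
```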

Think of this white paper as an educational tool for clients who are considering virtualization, or who have recently undertaken projects but are underwhelmed by the energy savings. To get the maximum benefit from virtualization, users often need help in balancing DCPI to virtualization’s characteristics. It’s a vitally important challenge as more companies and data centers become part of the cloud computing trend that virtualization is powering. More important than chromed rims, at least!
