
WHITE PAPER

Computational Fluid Dynamics Modeling for Operational Data Centers

Executive Summary
Improving the Effectiveness of CFD Technology in Data Center Cooling


IT managers continue to be challenged by increased demand on the data center to support business strategies. From
an operational perspective, this means more servers and other hardware are often required to deliver additional
computing capacity and bandwidth for growth.
Whereas some facilities can absorb the new technology from a physical space standpoint, the existing cooling
infrastructure may not be suitable or efficient, which is driving a number of infrastructure upgrade projects. In each
case, those involved in facility and data center planning must integrate design and operational priorities to ensure the
long-term viability of their facilities.
For many IT managers, Computational Fluid Dynamics (CFD) modeling provides a more scientific and
comprehensive design approach to power and cooling management. CFD technology can be applied to create a
better understanding of why hot spots are present and illustrate the effects of equipment layout, location of cooling
units, drop ceiling, hot- or cold-aisle containment or under-floor obstructions on airflow and temperature distributions
in the data center.
However, despite the advantages of CFD modeling techniques, many data center managers have been unable to
efficiently run CFD analysis on an operational data center, because maintaining an accurate inventory of equipment
demands tremendous effort. Gathering the necessary data to implement CFD can be a substantial undertaking, and
companies that do not have a comprehensive infrastructure resource management system often struggle to pull
together the amount of relational detail needed for the CFD tool to deliver a complete analysis.

A new software integration promises to make it easier and more beneficial to perform CFD analysis on operational
data centers. Implementing Aperture VISTA, from Emerson Network Power, and TileFlow, from Innovative Research,
Inc., in CFD modeling processes can help data center managers gain significant efficiencies and extend the life of
their data center.

For example, this software integration would enable data center managers to accurately depict and understand the use
of cooling resources and allow for appropriate infrastructure right-sizing. This could significantly help prevent
organizations from under- or over-provisioning the infrastructure. Instead of buying cooling equipment they do not
need, they might discover that they can simply reposition what they have.


Situational Overview
The Importance of Cost-Efficient Cooling in Data Centers


Because computer servers, telecommunications equipment and storage systems housed in data centers dissipate an
enormous amount of heat, and because that equipment must be maintained at acceptable temperatures to assure reliable
operation, IT and facility managers are very familiar with the need to properly cool their data centers to prevent
equipment failure due to overheating.
However, in many cases facility cooling infrastructure suffers from inefficient operation, which leads to decreased
energy efficiency and over-provisioning of cooling equipment. That inefficiency can result from several different
factors. For example, the cooling airflow is not distributed according to the demands of the equipment, or the
discharged hot air is not properly contained or vented.

Part of the problem is that heat load can vary significantly across a computer room, and it changes with every addition
or reconfiguration of hardware. Therefore, just meeting the total cooling capacity or average airflow requirement isn't
enough to provide optimal cooling where it's needed.
With continuing increases of heat generation in data centers, plus dramatically rising energy costs, managers are
coming under greater pressure to improve the efficiency of cooling systems and reduce power consumption.

Studies from organizations such as the U.S. Environmental Protection Agency (EPA) point out that energy efficiency
will continue to be a primary objective for IT personnel. An August 2007 EPA report projected that energy
consumption for U.S. data centers will double by 2011. That increase would translate to an estimated cost to the
public and private sectors of approximately $7.4 billion annually.

While the EPA report does not recommend specific directives for enhancing efficiency, it offers guidelines. In Figure
1.0, the tasks described in each scenario have a direct impact on the bottom line. By simply improving operations, the
EPA predicts a 30 percent improvement in infrastructure energy efficiency. Simple configuration changes are the
foundation for future improvements.

Figure 1.0: Three-tiered chart of suggestions for enhancing data center infrastructure energy efficiency.
Source: U.S. Environmental Protection Agency

The need for smarter energy utilization is clearly growing and most companies are making changes. In a 2008 data
center energy efficiency survey, Cassatt interviewed 215 IT professionals from companies supporting between 500
and 5,000 servers. The majority of those surveyed reported that their organizations are planning to reduce energy
costs by consolidating servers or modifying their physical infrastructure equipment (see Figure 2.0).
Energy-Efficiency Strategy                 All Companies

Server consolidation                       69.0%
More power-efficient servers               50.8%
Storage consolidation/virtualization       47.7%
Consolidate data centers                   34.5%
Improve data center cooling                32.5%
Server power-management software           23.9%
Thin-client systems                        21.8%
Simplifying data center cabling            21.8%
PC shutdown software                       18.3%
Upgrade server power supplies              9.6%

Figure 2.0: Energy efficiency strategies used by companies supporting between 500 and 5,000 servers.
Source: Cassatt

It's important to understand that the decision to consolidate servers can often come with unexpected consequences.
Companies often install smaller, higher-density equipment with more processing power without acknowledging that the
heat output of this new equipment is significant. The smaller equipment also frees up rack space for more hardware
in the existing infrastructure, further increasing power densities, energy use and cooling requirements.

Consolidating and reconfiguring existing servers is only part of the EPA's scenario for improving operations. The second
key component is optimizing the placement of equipment. The EPA notes that by improving airflow, along with server
consolidation, companies can improve efficiency by up to 30 percent. None of the top four data center initiatives (shown
in Figure 2.0) will yield satisfactory results unless the infrastructure layout is optimized at the same time.

As shown in Figure 3.0, heat densities of data center equipment that will be used in consolidation projects will
continue to rise, making it increasingly important to plan equipment placement at the time of procurement.


Figure 3.0: Past and projected heat loads of IT technologies. Source: Uptime Institute

Seeking Practical Solutions

It is currently estimated that, on average, four to seven percent of IT budgets are dedicated to energy costs, and this
percentage will inevitably rise. Some industry professionals believe that government mandates will soon force
facility operators to reduce energy consumption. Many companies are already evaluating or implementing
alternative solutions to reduce their energy use and make data center cooling processes more efficient.

According to ASHRAE thermal guidelines, 68 to 77 degrees Fahrenheit is the acceptable inlet air temperature range
for Class I servers. Depending on the facility and amount of equipment, there are many basic methods for improving
the cooling performance of data centers, such as using hot-aisle/cold-aisle arrangements, closing cable cutouts and
other openings to minimize air leakage and adding exhaust ducts over racks. Since each data center is unique in
terms of design, space, budget and flexibility, one needs to examine the various methods carefully.
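
As a trivial illustration of that assessment step, measured or modeled rack inlet temperatures can be screened against the ASHRAE Class I range cited above. The sketch below is illustrative only; the rack names and readings are placeholders, not data from any facility discussed in this paper.

    # Screen rack inlet temperatures against the ASHRAE Class I range (68-77 deg F).
    ASHRAE_CLASS1_F = (68.0, 77.0)

    def out_of_range(inlet_temps_f, limits=ASHRAE_CLASS1_F):
        """Return the racks whose inlet temperature falls outside the acceptable range."""
        low, high = limits
        return {rack: t for rack, t in inlet_temps_f.items() if not low <= t <= high}

    # Placeholder readings for illustration only.
    print(out_of_range({"rack-A1": 72.5, "rack-B4": 81.0, "rack-C2": 66.0}))
    # -> {'rack-B4': 81.0, 'rack-C2': 66.0}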
The complexity of the problem and lack of detailed data will leave the door open for efficiency stakeholders to
determine their respective courses of action. Each IT manager must carefully assess his or her current environment
and business objectives before implementing a strategy, but that assessment can be difficult.

Without a well-defined deployment plan, deciding where to place equipment to maximize airflow in an operational
data center is typically a matter of trial and error, or of empirical guidelines based on limited measurements. These
methods often do not account for the complex fluid dynamics that govern airflow distribution, and consequently do not
produce the expected flow rates: an adjustment in one section of the data center will affect flow rates throughout the
other areas. This kind of design procedure is labor-intensive, time-consuming and expensive, and still seldom yields
an optimal, efficient solution.

How CFD Modeling Can Help


For many IT and facility managers, CFD modeling offers a more scientific and comprehensive design approach to
power and cooling management. It provides a "weather map" view of the data center, identifying hot spots and airflow
requirements in the existing configuration, while also analyzing needs in any proposed modifications.
CFD is an integral part of computer-aided engineering and is used in a variety of industries to study the dynamics of
things that flow, such as air and liquids. In the data center, CFD can be applied to create a better understanding of
why hot spots are present and illustrate the effects of various factors on airflow and temperature distribution. A CFD
software tool visually depicts heat-related risks in the data center that can interfere with facility performance.


CFD factors in all aspects of the floor layout and the equipment, and then creates three-dimensional renderings, based
on actual or projected devices, that account for the complex processes controlling the airflow distribution. That
information allows engineers to evaluate several design options in a minimal amount of time and quickly explore
"what if?" scenarios in search of optimal equipment placement.

In case after case, it has been shown that airflow modeling based on CFD can be effectively used to identify and
resolve cooling problems in data centers. By using airflow modeling, the cooling performance of existing data centers
can be improved; in addition, new data centers can be designed to provide better cooling efficiency and lower energy
consumption. After a build is complete, IT managers will also have an operational baseline and can conduct periodic
updates as the environment changes.
CFD modeling for new data centers has a positive impact on the overall cost and outcome; however, in those cases,
design engineers are working from a clean slate. In an operational data center, where obstacles abound, CFD
modeling can be more challenging.

Even though they provide a more challenging environment, operational data centers often can benefit more from CFD
modeling because IT managers must reconfigure their existing environment to improve current efficiency levels. They
may have a limited capital budget (or none at all) and need to carefully consider the purchase and placement of
new/legacy equipment.
When choosing the appropriate CFD tool, IT managers should seek a company with specific knowledge of data
centers and appropriate products to serve them. One such company is Innovative Research, Inc. (IRI), which has
developed TileFlow, a specialized and customizable CFD modeling tool that calculates airflow patterns and
temperature distribution in data centers.

How TileFlow Works

The leading airflow simulation tool for data centers, TileFlow offers a unique scientific approach for meeting facility
cooling challenges. TileFlow performs calculations of airflow patterns and temperature distribution in a data center
and then creates a computer simulation of the cooling performance within the facility.
TileFlow features an intuitive user interface and graphical/tabular reporting that can be used when designing or
optimizing a data center's layout, whether for a new facility or an upgrade to an existing one.

TileFlow calculates:

- the airflow pattern and pressure distribution under the raised floor (for raised-floor data centers)
- the distribution of airflow rates through perforated tiles and other openings on the raised floor (for raised-floor data centers)
- the airflow pattern and temperature distribution in the room (for both raised-floor and non-raised-floor data centers)
Figure 4.0 shows examples of 3-D renderings available to IT managers to improve air flow and identify real or
potential hot spots.


Figure 4.0: Data center airflow modeled in TileFlow.

Enhancing the Value of CFD and TileFlow with Aperture VISTA

Despite the advantages of CFD, many data center managers are unable to efficiently run CFD analysis on an
operational data center, because maintaining an accurate inventory of equipment demands tremendous effort.
Gathering the necessary data to implement CFD can be a serious undertaking, and companies that lack a
comprehensive infrastructure resource management system struggle to pull together the amount of relational detail
needed for the CFD tool to deliver a complete analysis.
Gathering relational data from disparate systems is difficult, and one small mistake or omission casts doubt on the
final results. The time and resources required to gather this information are substantial, diminishing the return on
investment and driving up the total cost of ownership.

Those concerns and expenses can be eliminated through the application of VISTA software from Aperture, the
leading global provider of software for managing the physical infrastructure of data centers. VISTA is a
comprehensive resource management and monitoring system that houses data center configuration and operating
information, including racks, servers and power and cooling equipment. Because VISTA always has an accurate map
of a data center and the placement and configuration of its contents, any information pertaining to the facility's
equipment can be accessed with a single click.

By using a newly developed integration, the infrastructure data stored in VISTA can be seamlessly imported into the
TileFlow simulation tool. With VISTA's ability to continually maintain the location and configuration of equipment,
and the program's convenient standard export system, users can easily and quickly generate detailed CFD analyses to
identify cooling problems in operational data centers.
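
The export format itself is not described here, but the kind of per-rack relational record a CFD tool needs can be sketched in a few lines. Every field name below is hypothetical and simply stands in for whatever the actual VISTA export carries: location on the floor grid, rack dimensions, heat load and airflow demand.

    import csv

    # Hypothetical per-rack records of the sort a CFD simulation needs as input.
    racks = [
        {"rack_id": "R01", "tile_x": 3, "tile_y": 7, "height_u": 42,
         "heat_load_kw": 5.2, "airflow_demand_cfm": 820},
        {"rack_id": "R02", "tile_x": 3, "tile_y": 8, "height_u": 42,
         "heat_load_kw": 8.4, "airflow_demand_cfm": 1320},
    ]

    # Write the inventory to a CSV file that a simulation tool could ingest.
    with open("rack_inventory.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=racks[0].keys())
        writer.writeheader()
        writer.writerows(racks)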
Once TileFlow creates a computer simulation of the temperature distribution and cooling performance within a facility,
Aperture's VISTA technology adds manageability, visibility and control over all the physical elements of the data
center. The IT manager is then given an unprecedented view of data center assets, making it easier to understand the
site's current configuration and to identify or predict cooling problems efficiently.
Integrating VISTA and TileFlow also facilitates CFD processes for any future design changes, avoiding potential
disruptions and saving both resources and time. When IT managers import data into the modeling tool, they are
presented with a color-coded view that allows them to run "what if" scenarios. For example, the color of proposed
racks differs from that of existing racks in the room, so proposed equipment can be tracked through various modeling
scenarios.


Combining VISTA and TileFlow offers a strong value proposition to data center managers by enabling them to plan
and manage cooling on an operational basis for the first time, and helping them readily achieve the solutions they
need for improved energy efficiency.


Conclusion
The ongoing need to provide adequate cooling of data center equipment while simultaneously lowering energy
consumption through more efficient airflow distribution is a vital issue that will continue to challenge IT managers.


It has been clearly shown that airflow modeling based on the CFD technique can be used to identify and resolve
cooling problems and inefficiencies in data centers; it is also well documented that optimizing cooling system
performance can significantly reduce cooling-related energy consumption and the subsequent emission of greenhouse
gases.

The integration of Aperture's VISTA and IRI's TileFlow can enhance the viability and usefulness of CFD modeling,
offering a more cost-effective solution for companies managing today's dynamic IT environment.


Appendix
A Modeled Scenario of a TileFlow CFD Analysis Integrated with Aperture VISTA Analysis Capabilities


Figure 1.0 shows configuration data stored in Aperture VISTA for a single device. VISTA Data Center Configuration
Management is the system of record for the physical infrastructure of the data center, capturing detailed information
on data center equipment, including location, space, power, cooling, and network and storage connectivity.

Figure 1.0: Configuration data stored in Aperture VISTA for a single device.



Figure 2.0 shows how the device information is rolled up into the rack view. Aperture has created over 20,000 VISTA
Symbols, which are graphic depictions of individual pieces of data center equipment that include environmental
statistics, nameplate power usage, dimensions, weight, and connection and port information from the manufacturer.
This information base underpins VISTA's modeling capability and is the largest and most comprehensive data center
equipment database in the industry today. At the click of a button, this information is exported from VISTA and
imported into TileFlow.

Figure 2.0: Device information in the Aperture VISTA rack view.

Figure 3.0 shows a typical raised floor data center modeled in TileFlow software. The shape of the data center and
the layout of objects in the space, including perforated tiles, computer room air conditioning (CRAC) units and
under-floor obstructions, are defined in the TileFlow model. All information pertinent to server racks, such as size,
location and the equipment inside the racks, is imported from Aperture VISTA, as are data on the heat load and
cooling air demands of that equipment.
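
As a rough illustration of how a rack's heat load translates into a cooling air demand, the standard sensible heat relation for air (Q in BTU/hr ≈ 1.085 × CFM × ΔT in °F) can be rearranged to estimate the airflow a rack requires. The 20-degree temperature rise used below is an assumed value for illustration only; it is not taken from the VISTA or TileFlow data in this scenario.

    def required_cfm(heat_load_kw, delta_t_f=20.0):
        """Estimate the cooling airflow (CFM) needed to carry away a rack's heat load.

        Uses the sensible heat relation for air near sea level:
            Q [BTU/hr] ~= 1.085 * CFM * delta_T [deg F]
        heat_load_kw -- rack heat load in kilowatts
        delta_t_f    -- assumed air temperature rise across the rack, deg F
        """
        btu_per_hr = heat_load_kw * 1000.0 * 3.412      # convert kW to BTU/hr
        return btu_per_hr / (1.085 * delta_t_f)

    # Example: a 5 kW rack with an assumed 20 deg F rise needs on the order of 790 CFM.
    print(round(required_cfm(5.0)))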

The top surfaces of the racks are painted with different colors, each indicating the heat load of the racks (blue
indicates low heat load, red and green indicate higher heat load).

The horizontal lines on the front faces of the racks indicate their internal sections. A section can hold servers, can
have a blanking panel, or can be open. The racks in the front-right section of the data center have a higher heat load
compared to the rest of the racks. Therefore, the perforated tiles placed in front of these racks have higher open area
to provide more airflow.
Analysis generated by the two software applications indicates that, under the current layout, the total airflow demand
of all the racks in the data center is less than 32,000 cubic feet per minute (CFM), while the total airflow supply from
the four CRAC units is 50,000 CFM. Therefore, one of the CRAC units (indicated by the red cross) has been turned off.
The three remaining CRAC units supply 37,500 CFM, which is sufficient to meet the airflow demand of the racks.
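
That sizing decision reduces to a simple capacity check: with four units supplying 50,000 CFM in total (12,500 CFM each), three units already cover the roughly 32,000 CFM of rack demand. Below is a minimal sketch of the check, using only the figures quoted in this scenario.

    def crac_units_needed(total_demand_cfm, unit_supply_cfm):
        """Smallest number of CRAC units whose combined airflow covers the demand."""
        units, supply = 0, 0.0
        while supply < total_demand_cfm:
            units += 1
            supply += unit_supply_cfm
        return units, supply

    per_unit = 50_000 / 4          # four units supply 50,000 CFM, so 12,500 CFM each
    print(crac_units_needed(32_000, per_unit))
    # -> (3, 37500.0): three units suffice, so the fourth can be turned off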


Figure 3.0: Typical raised floor data center modeled in TileFlow software.

Figure 4.0 displays a map of airflow rates from perforated tiles. The perforated tiles are color coded. It can be seen
that the airflow supplied by tiles in front of the high-heat-load racks is around 800 CFM. The flow rates for the other
tiles range from 400 to 550 CFM.

Figure 4.0: Airflow rates from perforated tiles in CFM.


Figure 5.0 displays the air velocity and temperature distribution in a horizontal plane, as modeled by TileFlow. It
shows a moderate amount of hot air from the hot aisles penetrating the cold aisles. Most of the data center is at 75
degrees Fahrenheit or less; only a small section on the left side of the diagram shows spots around 85 degrees.

Figure 5.0: Air velocity and temperature distribution.

In Figure 3.0, the right side of the diagram has a relatively large empty space. As part of an expansion, the
manager of this data center is planning to add three new rows of server racks. For planning purposes, information
about the equipment to be housed in those racks is added to Aperture VISTA. This future layout of racks is then
imported into the data center's TileFlow model for airflow and cooling assessment, during the planning stage and
before the racks are commissioned.

Figure 6.0 shows the addition of the new racks on the right side of the diagram. Because the new racks produce only
a moderate heat load, regular perforated tiles with less open area are placed in front of them.


Figure 6.0: New racks added to the data center.

The map of airflow rates from perforated tiles shown in Figure 7.0 indicates that the maximum airflow in the
modeled data center has dropped from 800 CFM to 700 CFM, and the minimum airflow has dropped from 400 CFM
to 300 CFM. On average, the airflow from the perforated tiles has been reduced by about 100 CFM per tile, because
the number of perforated tiles has increased while the total airflow supply is unchanged.

Figure 7.0: Airflow has dropped from 800 CFM to 700 CFM.


Finally, Figure 8.0 shows a strong penetration of hot air into the cold aisle following the introduction of the new racks.
It also indicates that most of the data center is now at temperatures above 85 degrees.

Figure 8.0: Penetration of hot air into the cold aisle following the introduction of the new racks.

After the new racks are added, the airflow demand in the modeled data center increases to over 41,000 CFM, while
the airflow supply remains at 37,500 CFM. This indicates that the fourth CRAC unit needs to be turned back on.
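
Re-running the same capacity check sketched earlier with the post-expansion demand confirms the conclusion:

    # 41,000 CFM of demand exceeds the 37,500 CFM that three units provide.
    print(crac_units_needed(41_000, 12_500))
    # -> (4, 50000.0): all four CRAC units are now required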

Figure 9.0, the map of airflow rates after turning the fourth CRAC unit on, shows an increase of about 150 CFM per
perforated tile, with the maximum airflow at more than 900 CFM and the minimum airflow at more than 450 CFM.

Figure 9.0: Airflow rate after turning the fourth CRAC unit on.


The temperature fog shown in Figure 10.0 depicts a significant decrease in the temperature values in the data center
after turning on the fourth CRAC unit.

Figure 10.0: Decreased temperature values after turning on the fourth CRAC unit.

The overall thermal assessment of this data center produced by the TileFlow and Aperture VISTA technologies indicates
that there is enough cooling air available and that the distribution of cooling air is appropriate.

Conclusion

By implementing Aperture VISTA and TileFlow in CFD modeling processes, the operators of the modeled data center
gained increased visibility into their environment. The analysis ensured the reliable operation of the IT
equipment following the introduction of the new racks by identifying hot spots and devising a solution that guarantees
the proper inlet temperature to the equipment. It also allowed an unnecessary CRAC unit to be turned off, saving
money and energy and providing an immediate return on investment.

For a better understanding of the potential savings that could ideally be realized by turning off the fourth
CRAC unit, consider a calculation based on a typical 20-ton CRAC unit. The assumptions behind this calculation are
that the CRAC unit has two 7.5-horsepower blowers and needs about 18 kW to produce chilled air; that electricity
costs 8 cents per kilowatt-hour; and that the blowers of the CRAC unit run 24 hours a day, 365 days a year. The
CRAC unit produces chilled air only 60 percent of the time; for the other 40 percent, the blowers simply circulate
room air.
Given these assumptions, the annual energy and maintenance costs for this unit work out to about $11,500 and $3,000
respectively. Therefore, by turning off this one CRAC unit, an annual operating cost saving of roughly $14,500 could be
realized.
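
The energy portion of that figure can be reproduced approximately as follows. This is a sketch only: it assumes the 18 kW figure is the unit's total draw while producing chilled air, that only the blowers (two 7.5 hp motors, about 11.2 kW) run for the remaining 40 percent of the time, and that the 8-cent rate is per kilowatt-hour. Under those assumptions the result lands in the same range as the roughly $11,500 quoted above.

    HOURS_PER_YEAR = 24 * 365        # blowers run around the clock
    RATE = 0.08                      # assumed electricity price, $ per kWh
    BLOWER_KW = 2 * 7.5 * 0.7457     # two 7.5 hp blowers, about 11.2 kW
    COOLING_KW = 18.0                # assumed total draw while producing chilled air
    DUTY = 0.60                      # fraction of time spent producing chilled air

    cooling_kwh = COOLING_KW * DUTY * HOURS_PER_YEAR           # chilled-air mode
    fan_only_kwh = BLOWER_KW * (1 - DUTY) * HOURS_PER_YEAR     # blowers circulating room air
    annual_energy_cost = (cooling_kwh + fan_only_kwh) * RATE
    print(round(annual_energy_cost))   # about $10,700 per year under these assumptions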

© 2009 Aperture Technologies, Inc.

About Aperture


Aperture is the leading global provider of software for managing the physical infrastructure of data centers.
Aperture's solutions reduce operational risk and improve efficiency through the planning and management of data
center resources. Aperture delivers the best-practice processes that enable organizations to take control of an
increasingly complex physical infrastructure, including equipment, space, power, cooling, network and storage.
With over 20 years of experience, Aperture provides organizations with the information required to optimize their
data center operations, delivering better services at the lowest cost. Aperture's customers include the world's
largest companies, half of which are Fortune 1000 and Global 500 organizations. For more information, visit
www.aperture.com.

About Emerson

Emerson (NYSE: EMR), based in St. Louis, Missouri (USA), is a global leader in bringing technology and
engineering together to create innovative solutions for customers through its network power, process
management, industrial automation, climate technologies, and appliance and tools businesses. Sales in fiscal 2008
were $24.8 billion. For more information, visit www.Emerson.com.

About Innovative Research, Inc.

Innovative Research, Inc. provides quality software products and consulting services for fluid flow, heat transfer,
combustion, turbulence, and related processes. IRI's flagship product, TileFlow, is the leading airflow simulation tool
for data centers. It provides unparalleled ease of use, fast solutions, and a large variety of colorful displays and
animations of airflow and temperature patterns in data centers. TileFlow can be used for designing the cooling
systems in new data centers and for upgrading or reconfiguring existing data centers on an ongoing basis. For
more information, please go to http://www.inres.com/.
