
White Paper

Data Center TCO: A Comparison of High-Density and Low-Density Spaces

M.K. Patterson, D.G. Costello, and P.F. Grimm


Intel Corporation, Hillsboro, Oregon, USA

M. Loeffler
Intel Corporation, Santa Clara, California, USA

Paper submitted to THERMES 2007 for publication in Santa Fe, NM Jan. 2007

Contents
Abstract

Motivation

Background and Definitions

Example Data Center

Benchmark Data

Conclusions

Abstract
Keywords: Data Center, Thermal Management, TCO, High-Performance Computing

The cost to build and operate a modern Data Center continues to increase. This Total
Cost of Ownership (TCO) includes capital and operational expenses. The good news in
all of this is that the performance, or compute capability, of the same Data Center (DC)
is increasing at a much higher rate than the TCO. This means the actual cost per unit of
compute performance in the Data Center is coming down.

While that is a positive trend, the increasing densities still present a challenge. This
challenge, though, is primarily one of design and operation. One of the most common
misconceptions in this period of growth is that the TCO of a new data center is lower
with a low-density design. We look at the construction and design of both types and
present results demonstrating that high-density DCs are a better choice for reducing
the owner's cost. These results apply to new construction and mostly-unconstrained
retrofits. Densities of 1000 watts per square foot of work cell are being achieved
with good efficiencies. Modern designs of 200 to 400 watts per square foot of work
cell are much more common, but cost more. Costs of the architectural space, power
systems, and cooling systems are reviewed, as are the operational costs for these
systems. High-density DCs do cost less. The challenges for the high-density DC are
also called out, and suggestions for successful operation are made.



Motivation
ASHRAE (2005) provides projections for datacom density trends, as shown in
Figure 1. Of particular interest in this paper is the trend for compute servers, both
1U & blades, and 2U; these are the primary building blocks of scale-out data
centers. The 1U trend for 2006 indicates a heat load of roughly 4000 watts / sq
ft of equipment floor space. A typical rack has a footprint of 39 by 24 inches, so at that
density a rack represents 26 kW. Very few racks of this power are in place. Is Figure 1 incorrect
or are there other factors? The ASHRAE guide represents the peak value, or what
could be expected in a fully populated rack. But DCs are not being built to this density.
Instead DCs are still being built to the 1999 ASHRAE values of compute density. Is DC
development lagging behind and not technically capable of supporting 26 kW racks? Or
would a data center at that density be too costly, and more expensive than the ones
currently being built? These issues are analyzed and it is shown that the technology
for power and cooling for racks per the ASHRAE trend does exist, and that a data
center built to this standard would have a lower TCO. The authors believe that the
problem has to do with the life of datacom equipment (3~5 years) as compared with
the lifetime of these facilities (~15 years) and the inertia that lifetime builds into data
center strategies and design.
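As a quick check of that trend value, the short sketch below (a minimal illustration using only the footprint and heat-load figures quoted above) reproduces the 26 kW rack figure.

```python
# Rack power implied by the ASHRAE 1U/blade trend value quoted above.
rack_depth_in, rack_width_in = 39, 24                        # typical rack footprint
footprint_sqft = (rack_depth_in * rack_width_in) / 144.0     # 6.5 sq ft
heat_load_w_per_sqft = 4000                                  # ~2006 trend value for 1U servers
rack_power_kw = heat_load_w_per_sqft * footprint_sqft / 1000.0
print(rack_power_kw)                                         # 26.0 kW, matching the figure in the text
```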

[Figure 1 plots heat load per product footprint (watts per square foot of equipment) on a logarithmic scale from 60 to 10,000 against year of product announcement (1994 to 2014) for communication equipment (core and edge), storage servers, tape storage, compute servers (1U/blade/custom and 2U and greater), and standalone workstations.]
Figure 1. ASHRAE Datacom Trend Chart showing increasing density over time


Background and Definitions


The best practice for today's data center layout is repeating rows of racks side by side
with alternating cold aisles and hot aisles. The cold aisle supplies cool air to the
servers, with each rack discharging into a hot aisle shared with the next row of servers.
Raised floors provide cool supply air to the cold aisles, with overhead returns carrying
the warm return air back to the air conditioning system. In this hot-aisle / cold-aisle
configuration, varying numbers of servers can be fit into each rack based on many factors:
cooling capability, power availability, network availability, and floor loading capability
(the rack's loaded weight). Other configurations can also be successful (ASHRAE 2006).

Definitions
Prior to evaluating the benefits or drawbacks of various metrics, there are definitions
that need to be presented. The first is that of the work cell (see Figure 2). The work
cell is the repeating unit of cold aisle, rack, and hot aisle. It represents the square
footage directly attributable to a specific rack of servers.

High-density data center is taken to mean a data center with racks at 14 kW and above,
with a work cell of nominally 16 to 20 square feet. Further, high density does not imply
or require liquid cooling. It has been reported in some trade journals and elsewhere that
anything above 14 kW will need supplemental cooling or liquid cooling to be able to handle
these types of loads. This is not the case. High density can be cooled successfully with
a standard hot-aisle / cold-aisle design as shown in Figure 3 (16 sq ft work cells and
14 kW to 22 kW racks with 40+ servers in each).
[Figure 2 diagram labels: hot aisle (solid tile), server in rack, cold aisle (perforated tile); dimensions of 8 feet, 6 feet, and 2 feet marking a 16 sq ft work cell.]

Figure 2. Single work cell in a row of servers in a Data Center, looking down from above

Figure 3. Photograph of a cold aisle in a high-density Data Center



Watts/rack is useful in sizing power distribution to a rack and for determining how full
a rack can be. It should not be used by itself to define a data center unless the square
footage of the work cell is known; with that, the two metrics become functionally equivalent.

Watts/sq ft of raised floor is also of limited value. The layout efficiency of different
data centers can vary, and this can greatly affect the value. One use for this metric is
infrastructure sizing (e.g., watts/sq ft times the total raised floor area gives the total
needed cooling). But any method of calculating utility system sizing must consider average
watts/sq ft or watts/rack; sizing infrastructure using the maximum power from each server
will result in oversized facilities equipment.

The limit of air cooling in high-density data centers is a matter of much debate. Figure 4
shows an infrared image of the data center in Figure 3. This HPC data center is easily
carrying the high-density racks with no recirculation problems. Ongoing internal analysis
by the authors shows that supporting 30 kW racks is feasible with air cooling, and the
results of this paper are applicable there.

Figure 4. Infrared photo of 14 kW racks at the end of a cold aisle in a high-density data center

Layout Efficiency measures data center square footage utilization. It is defined as racks
per thousand square feet and is a measure of the efficiency of the DC layout. The typical
range is 20 to 30, higher numbers being better. This is similar to the Rack Penetration
Factor of Malone and Belady (2006); however, this metric is independent of the size of
the rack itself.

The metric most often used when discussing DC density is watts/sq ft. However, this metric
is often misused because it lacks specificity in the denominator. It is typically assumed
that the value refers to raised floor area, but it could refer to the DC and support area,
or even the entire campus.

Watts/sq ft of work cell is the preferred metric for data-center-to-data-center
benchmarking and for infrastructure discussions. The metric is particularly well suited to
the evaluation of cooling load, as the ability to move the requisite cooling air through
the raised floor and the exhaust air through the hot aisle is all carried in the size of
the work cell. A high-powered rack surrounded with a large amount of raised floor in the
work cell is not high density.

Total Cost of Ownership (TCO) represents the cost to the owner to build, as well as the
cost over time to operate and maintain, the data center. These costs are all brought back
to a present value using appropriate engineering economics. The right TCO metric is
cost/server when the specifics and number of servers have been determined. Alternately,
cost/kW can be a useful metric, particularly when the servers to be installed are not
known. In this metric, kW is the power available to the servers, rather than the power
into the site (which includes UPS losses and power for the cooling system). The use of
cost/sq ft would not be valid, as the high-density DC will have a greater cost/sq ft;
unfortunately, low-density DCs are often chosen based on this faulty comparison. Cost/sq ft
is not valid because the compute capability of each square foot is not the same for both
DC types.

Two analyses were done in determining the TCO of low- and high-density data centers.
First, an example data center was considered with the implications of each density option.
The requirements of each were compared and TCO impacts calculated. Second, a benchmarking
effort's results are compiled and plotted, and compared with the results of the example
data center analysis. Finally, specific design and operational considerations for
high-density DCs are reviewed.
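To make the relationships between these metrics concrete, the sketch below converts between them using illustrative values drawn from the figures above; the numbers are examples, not benchmark data.

```python
# Relationships between the density metrics discussed above (illustrative values only).
work_cell_sqft = 16          # sq ft of raised floor attributed to one rack (Figure 2)
rack_power_w = 17_000        # a high-density rack, in the Figure 3 range
layout_eff = 22              # racks per 1000 sq ft of raised floor (typical range 20 to 30)

# Watts/sq ft of work cell and watts/rack are functionally equivalent once the
# work-cell area is known.
watts_per_sqft_work_cell = rack_power_w / work_cell_sqft           # ~1060 W/sq ft
watts_per_rack = watts_per_sqft_work_cell * work_cell_sqft         # back to 17 kW

# Watts/sq ft of raised floor also depends on layout efficiency, which is why it is
# a weaker benchmarking metric than watts/sq ft of work cell.
sqft_raised_floor_per_rack = 1000 / layout_eff                     # ~45 sq ft, includes aisles and support
watts_per_sqft_raised_floor = rack_power_w / sqft_raised_floor_per_rack   # ~375 W/sq ft
print(watts_per_sqft_work_cell, watts_per_rack, watts_per_sqft_raised_floor)
```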



Example Data Center


Consider a new DC where the owner has determined that 10,000 1U dual-processor servers
are needed. This DC and the server choices will serve to demonstrate the difference in
TCO of different density options. (1U = 1.75 inches of height; equipment racks are measured
in U, with a typical rack able to hold 42 1U servers, 21 2U servers, etc.) The
supplier of the equipment has adhered to the ASHRAE Thermal Guideline (ASHRAE,
2004) and published the cooling load and required airflow of the server. Table 1 shows
a summary of the server and the two data centers.

                                 Low-Density Data Center   High-Density Data Center
# of servers                     10,000                    10,000
Watts / server                   400                       400
CFM / server                     39                        39
kW / rack                        6.6                       17
Servers / rack                   16                        42
Total racks                      625                       238
Sq ft / work cell                16                        16
Layout efficiency (racks/Ksf)    ~22                       ~22
Sq ft of raised floor needed     28,571                    10,880

Table 1. Data Center and Server Definition
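The derived rows in Table 1 follow from the server count and the density assumptions. A minimal sketch of that arithmetic is shown below; the layout efficiency of roughly 22 racks per thousand square feet is the value listed above, and Table 1's 6.6 kW/rack figure comes from the survey median described in the next paragraphs rather than from 16 x 400 W.

```python
n_servers = 10_000
watts_per_server = 400
layout_eff = 21.875            # racks per 1000 sq ft of raised floor (~22, as in Table 1)

def size_room(servers_per_rack):
    racks = round(n_servers / servers_per_rack)          # last rack may be partially filled
    kw_per_rack = servers_per_rack * watts_per_server / 1000
    raised_floor_sqft = racks / layout_eff * 1000
    return racks, kw_per_rack, round(raised_floor_sqft)

print(size_room(16))   # low density:  625 racks, 6.4 kW/rack (Table 1 lists 6.6), ~28,571 sq ft
print(size_room(42))   # high density: 238 racks, 16.8 kW/rack (~17),              ~10,880 sq ft
```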

Low density will be taken as the median of a survey done of 28 new or retrofit data center
projects being considered by a leading DC design-build firm. The data (Aaron 2006) is shown
in Figure 5. The values range from 75 watts/sq ft to 250 watts/sq ft. The median is between
125 and 150 watts/sq ft, which represents a 6.6 kW rack. For the high-density DC the main
goal is full racks and minimizing square feet. 42 of the selected servers in a rack would
require just under 17 kW.

[Figure 5 is a bar chart of the number of projects (0 to 12) at each reported density, from 75 to 250 watts per square foot of raised floor.]

Figure 5. New data center projects reported by a national data center design-build firm

The data center can now be further detailed. The total airflow required is the 39 CFM for
each server plus a 20% safety factor for leakage and bypass. Note that the total airflow
in each DC is the same; it is driven by the number of servers and not the space density.
The value of 20% is low and would only be successful in a data center with good airflow
management. A value of 35% may be required in a DC where less care is taken in design,
CFD analysis, and operational acumen.

The raised floor in the low-density room is 18 inches, while the high-density room will
need a 30-inch raised floor to handle the higher per-rack flow rate for the high-density
cabinets. Both will require 4 MW of UPS power to drive the servers. Total server power is
independent of density, as long as the assumption of equal work outputs for each room is
adhered to.
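A minimal sketch of the sizing arithmetic behind Table 2, using the per-server figures and the 20% leakage/bypass allowance given above:

```python
n_servers = 10_000
cfm_per_server = 39
watts_per_server = 400
safety_factor = 1.20                                        # leakage and bypass allowance

total_cfm = n_servers * cfm_per_server * safety_factor      # 468,000 CFM, same for both rooms
ups_power_mw = n_servers * watts_per_server / 1e6           # 4.0 MW, independent of density

cfm_per_rack_low = total_cfm / 625                          # ~749 CFM per low-density rack
cfm_per_rack_high = total_cfm / 238                         # ~1,966 CFM per high-density rack
print(total_cfm, ups_power_mw, round(cfm_per_rack_low), round(cfm_per_rack_high))
```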

                          Low-Density Data Center   High-Density Data Center
Total airflow (CFM)       468,000                   468,000
Raised floor height       18 inches                 30 inches
CFM / rack                749                       1,966
Total UPS power needed    4 MW                      4 MW
Cost of power             10¢/kWh                   10¢/kWh

Table 2. Data Center Design Results

The data center has essentially been scoped out, and costs associated with the various
segments of the data center can be determined. There are five major areas for
consideration: civil, structural, and architectural (CSA); power; mechanical (primarily
cooling); safety and security; and finally the IT equipment itself. The density will
impact each of these in a different way.

CSA is highly impacted by the density. Turner and Seader (2006) provide a useful
two-component cost model. First, it includes a cost/sq ft value which is independent of
data center density or "Tier" (criticality). The second component is a value based on
cost/kW of useable UPS power; this cost does vary by Tier. The cost/sq ft metric is
$220/sq ft and is primarily associated with the CSA portion of the facility. This is
shown in Table 3. A ~$4M savings on the CSA portion of the facility can be had by
building a high-density data center with fewer square feet.

The high-density data center also requires nearly 1/2 acre less land, and the permitting
and fees for any project are often based on square footage. The specifics of the
individual site location would dictate the magnitude of additional savings associated
with higher density.

There are additional costs incurred by the high-density design. First is the cost of a
CFD analysis. It could be argued that any data center with 10,000 servers warrants a CFD
analysis; however, low-density designs often go without. The cost of this analysis is
difficult to estimate, as it depends more on complexity than on square footage, but for
a new, homogeneous DC $5/sq ft is fair.

The other high-density penalty is that of the greater building height required. The
raised floor is higher, and this will also need a similar height increase in the
return-air plenum. The overall building height will be roughly 30" greater. Building
cost is far less sensitive to height than to area. The marginal cost to increase the
height of the building as required is assumed to be 10% of the building cost. Experience
in high-density data centers is that the cost delta for the 30" raised floor is small
compared to the 18" raised floor. Even for the most challenging seismic area the design
is the same, with the exception of the longer pedestal. This cost delta is on the order
of $1/sq ft.

The required power total to the room and the airflow total in the room are the same for
both concepts. This can be extended to the major facilities equipment (Cook 2006). Both
the high- and low-density options will use the same chillers, cooling towers, chilled
water pumps, utility connection, UPS, and major switchgear. It follows that the utility
spaces for both are identical. In the DC, fewer, larger, higher-capacity fan-coil units
will have a lower cost per CFM than many small distributed units, but that cost
determination is beyond the scope of this work. Similarly, power distribution unit
capital costs favor the high-density space, as do the shorter cable runs allowed. The
electrical operational costs are approximately equal. There will be a difference in
power used, with less power drawn in the high-density room due to shorter cable runs
and larger (and typically more efficient) equipment, but the value will not be
significant.
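A rough check of the CSA saving and the land figure cited above, as a sketch; the $220/sq ft value is the Turner and Seader CSA cost and the areas are those from Table 1.

```python
sqft_low, sqft_high = 28_571, 10_880        # raised floor areas from Table 1
csa_cost_per_sqft = 220                     # Tier-independent CSA cost (Turner and Seader)

csa_saving = (sqft_low - sqft_high) * csa_cost_per_sqft   # ~$3.9M, the "~$4M" cited above
acres_saved = (sqft_low - sqft_high) / 43_560             # ~0.41 acre, "nearly 1/2 acre"
print(f"${csa_saving:,.0f}  {acres_saved:.2f} acres")
```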



Cooling costs will have an impact on the TCO. The central cooling plant will be the same
for both. The major difference in cooling costs comes from the power needed to move the
air in the data center. Both have the same total flow rate. The configurations were
modeled using a commercially available CFD program. In the high-density room, with the
30" raised floor and >50% open grates, a static pressure of 0.10" wg is required to drive
the proper airflow through each grate. In the low-density design, with the 18" raised
floor and the 25% open perforated tiles, 0.13" wg is needed. The additional power needed
for the low-density DC can be determined from the fan laws: increasing the required
pressure by 30% at the same airflow increases the required fan power by roughly the same
30%. The fan power for the high-density case is taken from kW/CFM values for an actual
high-density DC.

The solution to this cost penalty would seem to be using the more open grates instead of
the restrictive perforated tile. Unfortunately, that solution would not work. Grates,
with their non-restrictive flow/pressure characteristic, need a deep plenum under the
raised floor with minimal flow restrictions to ensure an even flow distribution. Grates
used with a lower, more restrictive plenum would not provide uniform airflow and are not
a good airflow management tool for shallow raised floors. Perforated tiles work best in
that application because their higher pressure drop is the controlling factor in the flow
distribution and results in uniform flow (VanGilder and Schmidt, 2005).

Safety and security cost is somewhat sensitive to area (e.g., smoke sensors per sq ft),
but area is not the primary driver of these systems, so no specific credit is taken for
high density. Also, a desire for a higher level of monitoring due to the higher density
could offset this.

Lighting will cost more in the low-density space. There are ~18,000 more square feet that
need to be lit, at roughly 1.6 watts/sq ft. The increased maintenance would also add to
the cost.

The IT equipment is a large portion of the total budget. Both options hold the same number
of servers; the difference is the number of racks, and the partially full racks carry an
economic penalty. A per-rack cost of $1,500 is assumed, with another $1,500 to move in and
install the rack. This value is conservative, with higher values for some of the more
advanced, highly interconnected DCs.

The summary in Table 3 captures the major differences in costs for the two subject DCs.
It does not represent the total cost, but covers the areas where there are differences.
The cost for a high-density DC is measurably less. In the example above the savings were
$5.2 million. This savings is equal to $520/server, or 1,700 more servers (at ~$3K each),
or a year's electricity cost.

                            Low-Density     High-Density    Notes
Capital cost – building     $6,285,620      $2,393,600      $220/sq ft for CSA
Design cost for CFD         $0              $54,400         Assumes $5/sq ft
Capital cost – taller DC    $0              $239,360        Assumes +10% of building cost
Capital cost – 30" RF       $0              $10,880         $1/sq ft
Lighting                    $126,000        $0              NPV (5 yr, i = 5%)
IT equipment (racks)        $1,875,000      $714,000        $1.5K/ea + $1.5K install
Oper cost – cooling         $1,091,000      $736,000        NPV (5 yr, i = 5%)
Total of above items        $9,377,620      $4,148,240      $5.2M savings for high density

Table 3. Data center TCO comparisons and major cost deltas
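As a sanity check, the sketch below recomputes the capital line items in Table 3 from the unit costs given in the text and shows the 5-year, 5% NPV discounting used for the operational lines. The ~$170K annual cooling-fan cost is back-solved here purely for illustration; it is not a value from the paper.

```python
def npv(annual_cost, years=5, rate=0.05):
    """Present value of a constant annual cost over `years` at discount rate `rate`."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

sqft_low, sqft_high = 28_571, 10_880
racks_low, racks_high = 625, 238

building_low   = sqft_low * 220               # $6,285,620
building_high  = sqft_high * 220              # $2,393,600
cfd_high       = sqft_high * 5                # $54,400
taller_high    = 0.10 * building_high         # $239,360
rf30_high      = sqft_high * 1                # $10,880
racks_cost_low  = racks_low * 3_000           # $1,875,000 ($1.5K rack + $1.5K install)
racks_cost_high = racks_high * 3_000          # $714,000

# Fan laws at constant flow: fan power scales ~linearly with static pressure,
# so the low-density room's 0.13" wg vs 0.10" wg is roughly a 1.3x fan power penalty.
fan_power_ratio = 0.13 / 0.10

# Operational lines in Table 3 are 5-year NPVs at i = 5%. For example, a hypothetical
# annual cooling-fan cost of ~$170K discounts to roughly the $736K shown above.
print(round(npv(170_000)))                    # ~736,000
```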



Benchmark Data
The example data center analysis showed that a high-density data center does have a
lower TCO than a low density DC. This result is also supported by a review of Figure 6.

[Figure 6 plots data center cost in $/kW and $/sq ft, each with a linear fit, against watts per square foot of raised floor area (0 to 600 W/sq ft); costs range up to roughly $25,000.]

Figure 6. Data center cost metrics as a function of data center density

This shows the result of a benchmarking study of eight different internal and external
DCs completed by Costello (2005). As expected, cost/sq ft is higher at greater densities.
But for the same computing capability less square footage is needed, making the cost per
square foot metric a poor predictor of TCO. The better metric is cost/kW. The kW basis
accounts for the DC's ability to support a given compute workload; the datacom equipment
will need the same total power regardless of density. Cost/kW shows a negative slope,
indicating that high density does have a lower first cost. Note that the cost here is
based on the total data center, including design, land, and all infrastructure, and is
different from the metric used by Turner and Seader (2006).

High Density Considerations
High-density DCs require different methods in both design and operations on the part of
the owner, but when these are weighed against a lower TCO they are usually a good
investment.

Design and Construction: The design of the high-density data center requires a greater
focus on airflow in the room. The challenge is not the increased volume per rack but a
much greater control of airflow distribution. The air must be delivered to where it is
needed. Low-density rooms often exist with poor airflow management. CFD analysis of the
high-density room is a must. Patel et al. (2002) review the importance and opportunities
provided by this level of analysis.

High-density rooms often have higher cooling loads than the typical CRAC units prevalent
in low-density data centers can provide. The CRAC units, specifically designed for data
centers, provide benefits such as monitoring, reliability and redundancy, and capacity
tailored to the high sensible heat ratio loads found in data centers. Denser applications
often require industrial-grade units based on size alone. The goal is incorporation of
the CRAC unit's benefits into the larger system.



Another challenge in the design of high-density spaces is uniformity of the servers.
Uniform server loading makes the task of airflow management simpler. Non-uniform loading
can still be handled, but the details have to be understood. Zoning the DC into
homogeneous server types can facilitate airflow management.

Operations: A frequent (but incorrect) concern voiced over a high-density data center is
that the hot aisle will be too hot for personnel. The hot-aisle temperature is independent
of density in the ranges being discussed here. Assume that in each DC the inlet air to the
servers is within specification and proper designs have precluded recirculation. The
servers, whether there are 42 or 16 in the rack, will pull the needed amount of air per
server from the cold aisle and discharge it to the hot aisle. The temperature rise across
any server is based on the individual workload and the thermal control algorithm in place.
But that delta T, assuming the servers are the same, will be the same. The hot-aisle
temperature, which is a direct result of the server temperature rise, is independent
of density.

What is more likely the cause of the cooler hot-aisle phenomenon in low-density DCs is a
greater chance of airflow mismanagement, with leakage or bypassed cool air being delivered
to the hot aisle. In a properly designed, built, and maintained DC, regardless of density,
the hot aisle will be hot. If the hot aisle is not hot, cooling capacity, energy, and money
are being wasted.

Another issue is data center airflow velocities. Consider the 42-server rack discussed
earlier. With a 20% safety factor, the rack itself will need approximately 2000 CFM. That
flow, when delivered through a 2x2 floor grate, will have a nominal velocity of 500 fpm.
Tate Access Floors (2002) provides flow-rate-versus-static-pressure curves for a typical
floor grate; 500 fpm is in the middle of the operating range of a typical grate (~56% open
area), so the pressure and flow are not extreme.

500 fpm (5.7 mph) is above what would normally be considered comfortable for an occupied
space, particularly at data center supply-air temperatures; however, it is not unworkable.
The Beaufort Scale (National Weather Service, 2006) defines this velocity as a light
breeze, and not until the velocity reaches a moderate breeze (1144 – 1584 fpm) is it noted
as one that "raises dust and loose paper."
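Two quick numbers behind the points above, as a minimal sketch using the per-server figures from Table 1 and the standard-air sensible-heat constant (1.08 Btu/hr per CFM per degree F).

```python
# Temperature rise across one server: Q[Btu/hr] = 1.08 * CFM * dT for standard air,
# so dT is set by each server's power and airflow, not by how many servers share a rack.
server_watts, server_cfm = 400, 39
dT_f = server_watts * 3.412 / (1.08 * server_cfm)    # ~32 F (~18 C) rise, cold aisle to hot aisle

# Face velocity through the floor grate feeding a 42-server rack.
rack_cfm = 42 * server_cfm * 1.20                    # ~1,966 CFM with the 20% allowance
grate_area_sqft = 2 * 2                              # a 2 ft x 2 ft grate
velocity_fpm = rack_cfm / grate_area_sqft            # ~490 fpm, the "nominal 500 fpm" above
print(round(dT_f), round(velocity_fpm))
```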


Conclusions
High-density data centers will provide the DC owner with a reduced cost of ownership
when compared with that of a low density DC.

High-density data centers require specific design considerations, most notably a path for
the higher volume of air. Grates can replace perforated tiles. Raised floors of 30 inches
are needed in the cold-aisle/hot-aisle strategy. This could preclude some legacy data
centers from moving to high density without local enhanced cooling, but new data centers
and those with sufficient height for a retrofit can benefit from increasing densities.

There are risks and ergonomic negatives to a high-density configuration, but these can be
overcome by proper design and recognition that modern data centers do not require
continuous staffing. If the DC can be designed or retrofit to support the infrastructure
for high-density computing, the owner will be able to have a smaller DC with the same
computing performance at a lower TCO.

For further information, please visit:
www.intel.com/technology/eep

Acknowledgements
We would like to express our gratitude to Kelly Aaron of Nova for her support of this study.

References
Aaron, K. 2006. E-mail message to author, March 6, 2006.
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). 2004. Thermal Guidelines for Data Processing Environments. Atlanta: ASHRAE.
ASHRAE. 2005. Datacom Equipment Power Trends and Cooling Applications. Atlanta: ASHRAE.
ASHRAE. 2006. Design Considerations for Datacom Equipment Centers. Atlanta: ASHRAE.
Cook, D. 2006. DC TCO Report. Hillsboro, OR: Intel Internal Report.
Costello, D., et al. 2005. Data Center Benchmarking. Hillsboro, OR: Intel Internal Report.
Malone, C., and Belady, C. 2006. Data Center Power Projections to 2014. iTHERM 2006, San Diego.
National Weather Service. 2006. Beaufort Wind Scale. http://www.srh.noaa.gov/mfl/hazards/info/beaufort.php
Patel, C.D., Sharma, R., Bash, C.E., and Beitelmal, A. 2002. Thermal Considerations in Cooling Large Scale High Compute Density Data Centers. 2002 Inter Society Conference on Thermal Phenomena, pp. 767-776.
Tate Access Floors. 2002. GrateAire 24 Specification Sheet. http://www.tateaccessfloors.com/pdf/grateaire_panel.pdf. Tate Access Floors Inc., Jessup, MD (accessed July 15, 2006).
Turner, W.P., and Seader, J.H. 2006. Dollars per kW plus Dollars per Square Foot Are a Better Data Center Cost Model than Dollars per Square Foot Alone. Uptime Institute White Paper, Santa Fe.
VanGilder, J.W., and Schmidt, R.R. 2005. Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center. ASME InterPACK '05, San Francisco.

Copyright © 2005 Intel Corporation. All rights reserved. Intel, the Intel logo, Intel. Leap ahead., and the Intel. Leap ahead. logo are trademarks
or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
* Other names and brands may be claimed as the property of others.
0205/CEG/ESP/XX/PDF Part Number: 306623-001EN
