Dale Sartor
Lawrence Berkeley National Laboratory
DASartor@lbl.gov
High Tech Buildings are Energy Hogs:
LBNL Feels the Pain!
LBNL Super Computer Systems Power:
[Chart: NERSC computer systems power, 2001–2017, in megawatts (0–40 MW scale); does not include cooling power. Stacked systems: N8, N7, N6, N5b, N5a, NGF, Bassi, Jacquard, N3E, N3, PDSF, HPSS, and Misc. Note: the Oakland Scientific Facility (OSF) is limited to 4 MW.]
Typical Data Center Energy End Use:
[Diagram: energy cascade from 100 units of power in, through power conversions & distribution and cooling equipment losses, down to 35 and finally about 33 units delivered to the server load / computing operations.]
Overall Electrical Power Use:
[Pie chart; segment labels include: data center server load 51%; data center CRAC units 25%; HVAC chiller and pumps 24%; HVAC air movement 7%; cooling tower plant 4%; electrical room cooling 4%; lighting 2%; office space conditioning 1%; other 13%.]
Data Center Performance Varies in Cooling and Power Conversion:
• DCiE (Data Center Infrastructure Efficiency) is typically < 0.5
– Power and cooling systems are far from optimized
– Currently, power conversion and cooling systems consume half or more of the electricity used in a data center: less than half of the power is for the servers

DCiE = Energy for IT Equipment / Total Energy for Data Center

[Diagram: each center's total power splits between "Cooling & Power Conversions" and "Server Load / Computing Operations"; the split varies widely from center to center.]
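The DCiE ratio above (and its reciprocal, commonly reported as PUE) can be computed directly; a minimal sketch with hypothetical numbers:

```python
def dcie(it_energy_kwh: float, total_energy_kwh: float) -> float:
    """DCiE = energy delivered to IT equipment / total data center energy."""
    return it_energy_kwh / total_energy_kwh

# Hypothetical example: 33 units reach the IT load out of 100 drawn.
ratio = dcie(33, 100)
print(round(ratio, 2))      # 0.33 -- well below the 0.5 threshold noted above
print(round(1 / ratio, 2))  # 3.03 -- the reciprocal (PUE-style) metric
```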
[Chart: DCiE ratio for 25 benchmarked data centers, by center number (higher is better); average 0.57.]
[Chart: the reciprocal ratio (total power per unit of IT power) for 24 benchmarked data centers, by center number (lower is better); average 1.83.]
Source: LBNL Benchmarking
Save Energy Now on-line profiling tool: "Data Center Pro"
INPUTS:
• Description
• Utility bill data
• System information: IT, cooling, power, on-site generation
OUTPUTS:
• Overall picture of energy use and efficiency
• End-use breakout
• Potential areas for energy efficiency improvement
• Overall energy use reduction potential
Other Data Center Metrics:
• Watts per square foot
• Power distribution: UPS efficiency, IT power supply efficiency
– Uptime: IT Hardware Power Overhead Multiplier (ITac/ITdc)
• HVAC
– IT total/HVAC total
– Fan watts/cfm
– Pump watts/gpm
– Chiller plant (or chiller, or overall HVAC) kW/ton
• Lighting watts/square foot
• Rack cooling index (fraction of IT inlets within the recommended temperature range)
• Return temperature index: (RAT − SAT)/ITΔT
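The last two metrics are simple ratios over measured temperatures; a sketch of both, where the default band limits are an assumption (the ASHRAE recommended inlet range, 64.4–80.6°F):

```python
def rack_cooling_index(inlet_temps_f, low=64.4, high=80.6):
    """Fraction of IT inlet temperatures inside the recommended band.

    Default limits assume the ASHRAE recommended range (18-27 C)."""
    in_band = sum(low <= t <= high for t in inlet_temps_f)
    return in_band / len(inlet_temps_f)

def return_temperature_index(rat_f, sat_f, it_delta_t_f):
    """RTI = (return air temp - supply air temp) / IT equipment delta-T.

    Values below 1 suggest bypass air; above 1, recirculation."""
    return (rat_f - sat_f) / it_delta_t_f

# Hypothetical readings: three inlets in band, one hot spot at 83 F.
print(rack_cooling_index([68, 72, 75, 83]))   # 0.75
print(return_temperature_index(75, 60, 20))   # 0.75
```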
DOE Assessment Tools (under development):
• Identify and prioritize key performance metrics
– IT equipment and software
– Cooling (air management, controls, CRACs, air handlers, chiller plant)
– Power systems (UPS, distribution, on-site generation)
• Action-oriented benchmarking
– Tool will identify retrofit opportunities based on a questionnaire and the results of benchmarking
– First-order assessment to feed into a subsequent engineering feasibility study
Energy Efficiency Opportunities Are Everywhere
Cooling:
• Better air management
• Better environmental conditions
• Move to liquid cooling
• Optimized chilled-water plants
• Use of free cooling
Load:
• Load management
• Server innovation
Power:
• High-voltage distribution
• Use of DC power
• Highly efficient UPS systems
• Efficient redundancy strategies
Alternative Power Generation:
• On-site generation
• Waste heat for cooling
• Use of renewable energy/fuel cells
Using benchmark results to find best practices:
Examination of individual systems and components in the centers
that performed well helped to identify best practices:
• Air management
• Right-sizing
• Central plant optimization
• Efficient air handling
• Free cooling
• Humidity control
• Liquid cooling
• Improving power chain
• UPSs and equipment power supplies
• On-site generation
• Design and M&O processes
Air Management:
• Typically, far more air is circulated through computer room air conditioners than the IT equipment requires, because of mixing and short-circuiting of air
• Computer manufacturers now provide ASHRAE data sheets that specify airflow and environmental requirements
• Evaluate airflow from computer room air conditioners compared to server needs
Isolating Hot and Cold:
• Energy-intensive IT equipment needs good isolation of "cold" inlet and "hot" discharge
• Computer room air conditioner airflow can be reduced if no mixing occurs
• Overall temperature can be raised in the data center if air is delivered to equipment without mixing
• Coils and chillers are more efficient with higher temperature differences
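The airflow reduction from a wider temperature difference follows from the sensible-heat relation for standard-density air, Q[BTU/h] ≈ 1.08 × CFM × ΔT[°F]; a sketch with hypothetical loads:

```python
def required_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to remove a sensible load at standard air density.

    Uses Q[BTU/h] = 1.08 * CFM * dT, with 1 kW = 3412 BTU/h."""
    return load_kw * 3412 / (1.08 * delta_t_f)

# Doubling the delta-T across the servers halves the air the CRACs must move.
print(round(required_cfm(10, 10)))  # 3159 CFM at a 10 F rise
print(round(required_cfm(10, 20)))  # 1580 CFM at a 20 F rise
```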
Optimize Air Management:
• Enforce hot aisle/cold aisle arrangement
• Eliminate bypasses and short circuits
• Reduce air flow restrictions
• Proper floor tile arrangement
• Proper locations of air handlers
Data Center Layout:
[Diagrams: underfloor-supply and overhead-supply configurations, each arranged with alternating cold aisles and hot aisles.]
© 2004, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Reprinted by permission from
ASHRAE Thermal Guidelines for Data Processing Environments. This material may not be copied nor distributed in either paper or digital form
without ASHRAE’s permission.
Aisle Air Containment:
[Diagram: a hot-aisle lid, end caps, and cold-aisle caps isolate the cold aisle from the hot aisle. © APC, reprinted with permission.]
Best Scenario—Isolate Cold and Hot:
[Diagram: hot exhaust air at 95–100°F fully separated from cold supply air at 70–75°F.]
Fan Energy Savings – 75%
[Chart: measured temperatures (45–85°F) from 6/13/2006 through 6/16/2006, comparing the baseline setup with Alternate 1 and Alternate 2 setups; low, medium, and high ranges marked during the demonstration.]
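A fan-energy saving of roughly 75% is consistent with the fan affinity laws, under which fan power scales with the cube of flow (speed); a sketch:

```python
def fan_power_fraction(flow_fraction: float) -> float:
    """Fan affinity law: power scales with the cube of flow (speed)."""
    return flow_fraction ** 3

# Reducing airflow to ~63% of design cuts fan power to ~25%, a ~75% saving.
print(round(fan_power_fraction(0.63), 2))  # 0.25
```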
Environmental Conditions:
• ASHRAE: consensus from all major IT manufacturers on temperature and humidity conditions
• Recommended and Allowable ranges of temperature and humidity
• Required airflow
Temperature Guidelines at the Inlet to IT Equipment:
[Chart: ASHRAE temperature guidelines on a 40–100°F scale, showing the ASHRAE Recommended maximum and ASHRAE Recommended minimum inlet temperatures.]
Best air management practices:
• Arrange racks in hot aisle/cold aisle configuration
• Try to match or exceed server airflow by aisle
– Get thermal report data from IT if possible
– Plan for worst case
• Get variable-speed or two-speed fans on servers if possible
• Provide variable-airflow fans for AC unit supply
– Also consider using air handlers rather than CRACs for improved performance
• Use overhead supply where possible
• Provide isolation of hot and cold spaces
• Plug floor leaks and provide blank off plates in racks
• Draw return from as high as possible
• Use CFD to inform design and operation
Right-Size the Design:
• Data Center HVAC often under-loaded
• Ultimate load uncertain
• Design for efficient part-load operation
– modularity
– variable-speed fans, pumps, compressors
• Upsize fixed elements (pipes, ducts)
• Upsize cooling towers
Optimize the Central Plant:
• Have one (vs. distributed cooling)
• Medium temperature chilled water
• Aggressive temperature resets
• Primary-only CHW with variable flow
• Thermal storage
• Monitor plant efficiency
Design for Efficient Central Air Handling:
• Fewer, larger fans and motors
• VAV is easier
• Central controls eliminate fighting
• Outside-air economizers are easier
Use Free Cooling:
• Outside-Air Economizers
– Can be very effective (24/7 load)
– Controversial re: contamination
– Must consider humidity
• Water-side Economizers
– No contamination question
– Can be in series with chiller
Improve Humidity Control:
• Eliminate inadvertent dehumidification
– Computer load is sensible only
– Medium-temperature chilled water
– Humidity control at make-up air handler only
• Use ASHRAE allowable RH and temperature
• Eliminate equipment fighting
– Coordinated controls on distributed AHUs
Use Liquid Cooling of Racks and Computers:
• Water is 3500x more effective than air on a volume basis
• Cooling distribution is more energy efficient
• Water-cooled racks are available now; liquid-cooled computers are coming
• Heat rejection at a higher temperature
– Chiller plant more efficient
– Water-side economizer more effective
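The 3500x figure can be checked from approximate room-temperature volumetric heat capacities (density times specific heat):

```python
# Volumetric heat capacity = density * specific heat, approximate values.
water = 1000 * 4186        # kg/m^3 * J/(kg*K)  ~= 4.19 MJ/(m^3*K)
air   = 1.2 * 1005         # kg/m^3 * J/(kg*K)  ~= 1.2 kJ/(m^3*K)

# Per unit volume, water carries ~3500x more heat per degree than air.
print(round(water / air))  # 3471
```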
Electricity Flows in Data Centers:
[Diagram: utility power feeds the HVAC system and, through the UPS and PDUs, the uninterruptible load of computer equipment in the racks; backup diesel generators provide standby power.]
UPS = Uninterruptible Power Supply
PDU = Power Distribution Unit
Improving the Power Chain:
• Increase distribution voltage
• DC distribution
• Improve equipment power supplies
• Improve UPS
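Each of these steps matters because losses multiply through the chain; a sketch with hypothetical stage efficiencies:

```python
def delivered_fraction(*stage_efficiencies: float) -> float:
    """Fraction of utility power reaching the IT load after each
    conversion stage; stage losses multiply through the chain."""
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

# Hypothetical chain: UPS 0.85, PDU/transformer 0.96, server power supply 0.75.
print(round(delivered_fraction(0.85, 0.96, 0.75), 2))  # 0.61
```

Improving any single stage raises the whole product, which is why both UPS and IT power-supply upgrades appear in the list above.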
Specify Efficient Power Supplies and UPSs:
Power supplies in IT equipment generate much of the heat. Highly efficient power supplies can reduce IT equipment load by 15% or more.
[Chart: measured server power supply efficiency (roughly 45–85%) versus % of nameplate power output, with an "Average of All Servers" curve.]
[Chart: factory measurements of UPS efficiency versus load (tested using linear loads), roughly 75–100%, with curves for flywheel and double-conversion UPSs and the low-load region typical of redundant operation marked.]
Consider On-Site Generation:
• Can use waste heat for cooling
– sorption cycles
– typically required for cost effectiveness
• Swaps role with utility for back-up
Improve Design and Operations Processes:
• Get IT and Facilities people to work together
• Use life-cycle total cost of ownership analysis
• Document design intent
• Introduce energy optimization early
• Benchmark existing facilities
• Re-commission as a regular part of maintenance
Top best practices identified through benchmarking
Data Center Best Practices topic organization:
• HVAC – Air Delivery: air management; air economizers; humidification controls
• HVAC – Water Systems: cooling plant optimization; free cooling; variable-speed pumping
• Facility Electrical Systems: UPS systems; self generation; AC-DC distribution alternatives
• IT Equipment: power supply efficiency; sleep/standby loads; IT equipment fans
• Cross-cutting / misc. issues: motor efficiency; right sizing; variable-speed drives
Design Guidelines Are Available:
• Design guides were developed based upon the observed best practices
• Guides are available through the PG&E and LBNL websites
• A self-benchmarking protocol is also available
http://hightech.lbl.gov/datacenters.html
Industrial Technologies Program:
• Tool suite & metrics
• Energy baselining
• Training
• Qualified specialists
• Case studies
• Certification of continual improvement
• Recognition of high energy savers
• Best practice information
• Best-in-Class guidelines
Federal Energy Management Program:
• Best practices showcased at Federal data centers
• Pilot adoption of Best-in-Class guidelines at Federal data centers
• Adoption of to-be-developed industry standard for Best-in-Class at newly constructed Federal data centers
EPA:
• Metrics
• Server performance rating & ENERGY STAR label
• Data center performance benchmarking
Industry:
• Tools
• Metrics
• Training
• Best practice information
• Best-in-Class guidelines
• IT work productivity standard
Links to Get Started
DOE Website: Sign up to stay up to date on new developments
www.eere.energy.gov/datacenters
Lawrence Berkeley National Laboratory (LBNL)
http://hightech.lbl.gov/datacenters.html
LBNL Best Practices Guidelines (cooling, power, IT systems)
http://hightech.lbl.gov/datacenters-bpg.html
ASHRAE Data Center technical guidebooks
http://tc99.ashraetcs.org/
The Green Grid Association – White papers on metrics
http://www.thegreengrid.org/gg_content/
Energy Star® Program
http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency
Uptime Institute white papers
www.uptimeinstitute.org
Contact Information:
DASartor@LBL.gov
(510) 486-5988
http://Ateam.LBL.gov