Science 27 January 2012:
Vol. 335 no. 6067 pp. 394-396
DOI: 10.1126/science.335.6067.394


NEWS FOCUS

COMPUTER SCIENCE

What It'll Take to Go Exascale


Robert F. Service

Scientists hope the next generation of supercomputers will carry out a million trillion operations
per second. But first they must change the way the machines are built and run.
Using real climate data, scientists at Lawrence Berkeley National Laboratory (LBNL) in California recently ran a simulation on one of the world's most powerful supercomputers that replicated the number of tropical storms and hurricanes that had occurred over the past 30 years. Its accuracy was a landmark for computer modeling of global climate. But Michael Wehner and his LBNL colleagues have their eyes on a much bigger prize: understanding whether an increase in cloud cover from rising temperatures would retard climate change by reflecting more light back into space, or accelerate it by trapping additional heat close to Earth.
To succeed, Wehner must be able to model individual cloud systems
on a global scale. To do that, he will need supercomputers more
powerful than any yet designed. These so-called exascale computers
would be capable of carrying out 10^18 floating point operations per
second, or an exaflop. That's nearly 100 times more powerful than
today's biggest supercomputer, Japan's K Computer, which achieves
11.3 petaflops (10^15 flops) (see graph), and 1000 times faster than the
Hopper supercomputer used by Wehner and his colleagues. The United
States now appears poised to reach for the exascale, as do China,
Japan, Russia, India, and the European Union.
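
As a quick sanity check on those prefixes (using the figures quoted above; Hopper's roughly 1 petaflops is inferred from the 1000-fold comparison, not stated in the article):

```latex
\frac{1\ \text{exaflop}}{11.3\ \text{petaflops}}
  = \frac{10^{18}\ \text{flops}}{1.13\times 10^{16}\ \text{flops}} \approx 90,
\qquad
\frac{10^{18}\ \text{flops}}{\sim\!10^{15}\ \text{flops (Hopper)}} \approx 1000.
```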
It won't be easy. Advances in supercomputers have come at a steady pace over the past 20 years, enabled by the continual improvement in computer chip manufacturing. But this evolutionary approach won't cut it in getting to the exascale. Instead, computer scientists must first figure out ways to make future machines far more energy efficient and tolerant of errors, and find novel ways to program them.

[Figure: On fire. More powerful supercomputers now in the design stage should make modeling turbulent gas flames more accurate and revolutionize engine designs. CREDIT: J. CHEN/CENTER FOR EXASCALE SIMULATION OF COMBUSTION IN TURBULENCE, SANDIA NATIONAL LABORATORIES]

"The step we are about to take to exascale computing will be very, very difficult," says Robert Rosner, a physicist at the University of Chicago in Illinois, who chaired a recent Department of Energy (DOE) committee charged with exploring whether exascale computers would be achievable. Charles Shank, a former director of LBNL who recently headed a separate panel collecting widespread views on what it would take to build an exascale machine, agrees. "Nobody said it would be impossible," Shank says. "But there are significant unknowns."

Gaining support
The next generation of powerful supercomputers will be used to design high-efficiency engines tailored
to burn biofuels, reveal the causes of supernova explosions, track the atomic workings of catalysts in
real time, and study how persistent radiation damage might affect the metal casing surrounding nuclear
weapons. "It's a technology that has become critically important for many scientific disciplines," says Horst Simon, LBNL's deputy director.


That versatility has made supercomputing an easy sell to politicians. The massive 2012 spending bill
approved last month by Congress contained $1.06 billion for DOE's program in advanced computing,
which includes a down payment to bring online the world's first exascale computer. Congress didn't
specify exactly how much money should be spent on the exascale initiative, for which DOE had
requested $126 million. But it asked for a detailed plan, due next month, with multiyear budget
breakdowns listing who is expected to do what, when. Those familiar with the ways of Washington say
that the request reflects an unusual bipartisan consensus on the importance of the initiative.
"In today's political atmosphere, this is very unusual," says Jack Dongarra, a computer scientist at the University of Tennessee, Knoxville, who closely follows national and international high-performance computing trends. "It shows how critical it really is and the threat perceived of the U.S. losing its dominance in the field." The threat is real: Japan and China have built and operate the three most powerful supercomputers in the world.
Other countries also hope that their efforts will make them less dependent on U.S. technology. Of
today's top 500 supercomputers, the vast majority were built using processors from Intel, Advanced
Micro Devices (AMD), and NVIDIA, all U.S.-based companies. But that's beginning to change, at least at
the top. Japan's K machine is built using specially designed processors from Fujitsu, a Japanese
company. China, which had no supercomputers in the Top500 List in 2000, now has five petascale
machines and is building another with processors made by a Chinese company. And an E.U. research
effort plans to use ARM processing chips made by a U.K. company.

Getting over the bumps


Although bigger and faster, supercomputers aren't fundamentally different from our desktops and
laptops, all of which rely on the same sorts of specialized components. Computer processors serve as
the brains that carry out logical functions, such as adding two numbers together or sending a bit of
data to a location where it is needed. Memory chips, by contrast, hold data for safekeeping for later use.
A network of wires connects processors and memory and allows data to flow where and when they are
needed.
For decades, the primary way of improving computers was creating chips with ever smaller and faster
circuitry. This increased the processor's frequency, allowing it to churn through tasks at a faster clip.
Through the 1990s, chipmakers steadily boosted the frequency of chips. But the improvements came at
a price: The power demanded by a processor is proportional to its frequency cubed. So doubling a
processor's frequency requires an eightfold increase in power.
With the rise of mobile computing, chipmakers couldn't raise power
demands beyond what batteries could supply. So about 10 years ago,
chip manufacturers began placing multiple processing cores side by
side on single chips. This arrangement meant that only twice the
power was needed to double a chip's performance.
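
A back-of-the-envelope comparison makes the trade-off concrete (a sketch of the scaling argument above, not figures from the article):

```latex
P \propto f^{3}
\;\Rightarrow\;
\underbrace{\frac{P(2f)}{P(f)} = 2^{3} = 8}_{\text{one core, doubled clock}}
\quad\text{versus}\quad
\underbrace{\frac{2\,P(f)}{P(f)} = 2}_{\text{two cores, original clock}}
```

Both routes roughly double throughput, but the parallel one does it for a quarter of the power.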
This trend swept through the world of supercomputers. Those with single souped-up processors gave way to today's parallel machines that couple vast numbers of off-the-shelf commercial processors together. "This move to parallel computing was a huge, disruptive change," says Robert Lucas, an electrical engineer at the University of Southern California's Information Sciences Institute in Los Angeles. Hardware makers and software designers had to learn how to split problems apart, send individual pieces to different processors, synchronize the results, and synthesize the final ensemble. Today's top machine, Japan's K Computer, has 705,000 cores. If the trend continues, an exascale computer would have between 100 million and 1 billion processors.

[Figure: New king. Japan has the fastest machine (bar), although the United States still has the most petascale computers (number in parentheses). CREDIT: ADAPTED FROM JACK DONGARRA/TOP 500 LIST/UNIVERSITY OF TENNESSEE]

But simply scaling up today's models won't work. "Business as usual will not get us to the exascale," Simon says. These computers are becoming so complicated that "a number of issues have come up that were not there before," Rosner agrees.
The biggest issue relates to a supercomputer's overall power
use. The largest supercomputers today use about 10 megawatts
(MW) of power, enough to power 10,000 homes. If the current
trend of power use continues, an exascale supercomputer would
require 200 MW. "It would take a nuclear power reactor to run
it," Shank says.
Even if that much power were available, the cost would be
prohibitive. At $1 million per megawatt per year, the electricity
to run an exascale machine would cost $200 million annually.
"That's a non-starter," Shank says. So the current target is a
machine that draws 20 MW at most. Even that goal will require a
300-fold improvement in flops per watt over today's technology.
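
In plain numbers (taking the article's $1 million per megawatt-year electricity estimate at face value):

```latex
200\ \text{MW} \times \$1\text{M}/(\text{MW}\cdot\text{yr}) = \$200\text{M per year};
\qquad
\frac{10^{18}\ \text{flops}}{20\ \text{MW}} = \frac{10^{18}}{2\times 10^{7}\ \text{W}}
  = 5\times 10^{10}\ \text{flops/W}.
```

The 20-MW ceiling thus implies a machine that sustains roughly 50 gigaflops per watt.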

[Figure: On the rise. The gap in available supercomputing capacity between the United States and the rest of the world has narrowed, with China gaining the most ground. CREDIT: ADAPTED FROM JACK DONGARRA/TOP 500 LIST/UNIVERSITY OF TENNESSEE]

Ideas for getting to these low-power chips are already circulating. One would make use of different types of specialized cores. Today's top-of-the-line supercomputers already combine conventional processor chips, known as CPUs, with an alternative version called graphics processing units (GPUs),
which are very fast at certain types of calculations. Chip manufacturers are now looking at going from
multicore chips with four or eight cores to many-core chips, each containing potentially hundreds of
CPU and GPU cores, allowing them to assign different calculations to specialized processors. That
change is expected to make the overall chips more energy efficient. Intel, AMD, and other chip
manufacturers have already announced plans to make hybrid many-core chips.
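
As a rough illustration of that division of labor, a program might route large, regular arithmetic to a GPU and keep small or irregular work on the CPU. The article names no software; the CuPy library, the size threshold, and the helper below are assumptions for illustration only.

```python
# Illustrative sketch: dispatch each job to the best-suited processor.
# CuPy (a NumPy-compatible GPU array library) and the one-million-element
# threshold are assumptions, not anything named in the article.
import numpy as np

try:
    import cupy as cp          # GPU arrays; mirrors the NumPy API
    GPU_AVAILABLE = True
except ImportError:
    GPU_AVAILABLE = False      # fall back gracefully on CPU-only machines

def matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply matrices on the GPU when the job is big enough to pay off."""
    if GPU_AVAILABLE and a.size > 1_000_000:
        return cp.asnumpy(cp.asarray(a) @ cp.asarray(b))   # offload to GPU
    return a @ b                                           # small job: stay on CPU

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = matmul(a, b)               # 2000x2000 exceeds the threshold -> GPU path
```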
Another stumbling block is memory. As the number of processors in a supercomputer skyrockets, so,
too, does the need to add memory to feed bits of data to the processors. Yet, over the next few years,
memory manufacturers are not projected to increase the storage density of their chips fast enough to
keep up with the performance gains of processors. Supercomputer makers can get around this by
adding additional memory modules. "But that's threatening to drive costs too high," Simon says.
Even if researchers could afford to add more memory modules, that still wouldn't solve matters. Moving
ever-growing streams of data back and forth to processors is already creating a backup for processors
that can dramatically slow a computer's performance. Today's supercomputers use 70% of their power
to move bits of data around from one place to another.
One potential solution would stack memory chips on top of one another and run communication and
power lines vertically through the stack. This more-compact architecture would require fewer steps to
route data. Another approach would stack memory chips atop processors to minimize the distance bits
need to travel.
A third issue is errors. Modern processors compute with stunning accuracy, but they aren't perfect. The
average processor will produce one error per year, as a thermal fluctuation or a random electrical spike
flips a bit of data from one value to another.
Such errors are relatively easy to ferret out when the number of processors is low. But it gets much
harder when 100 million to 1 billion processors are involved. And increasing complexity produces
additional software errors as well. One possible solution is to have the supercomputer crunch different
problems multiple times and vote for the most common solution. But that creates a new problem.
"How can I do this without wasting double or triple the resources?" Lucas asks. Solving this problem
will probably require new circuit designs and algorithms.
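
The vote-among-replicas idea can be sketched in a few lines. Everything here is invented for illustration: the fault is a simulated bit flip, and a real machine would vote on results from independent hardware rather than reruns on one core.

```python
# Toy model of redundant execution with majority voting.
import random
from collections import Counter

def flaky_compute(x: int, flip_prob: float = 0.05) -> int:
    """The correct answer is x*x, but a random 'bit flip' sometimes corrupts it."""
    result = x * x
    if random.random() < flip_prob:
        result ^= 1 << random.randrange(16)   # flip one low-order bit
    return result

def voted_compute(x: int, replicas: int = 3) -> int:
    """Run the computation several times and keep the most common answer."""
    results = [flaky_compute(x) for _ in range(replicas)]
    return Counter(results).most_common(1)[0][0]

print(voted_compute(12345))   # almost always prints 152399025
```

The catch Lucas raises is visible in the code: three replicas mean three times the work.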
Finally, there is the challenge of redesigning the software applications themselves, such as a novel climate model or a simulation of a chemical reaction. "Even if we can produce a machine with 1 billion processors, it's not clear that we can write software to use it efficiently," Lucas says. Current parallel computing machines use a strategy, known as the Message Passing Interface (MPI), that divides computational problems and parcels out the pieces to individual processors, then collects the results. But coordinating all this traffic for millions of processors is becoming a programming nightmare. "There's a huge concern that the programming paradigm will have to change," Rosner says.
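
Here is a minimal sketch of that divide/compute/collect strategy, using the mpi4py Python binding (the article names only the MPI standard; the binding, the problem, and the sizes here are illustrative):

```python
# Minimal MPI sketch: divide a problem, compute in parallel, collect results.
# Run with, e.g.:  mpiexec -n 4 python sum_squares.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this copy's ID, 0..size-1
size = comm.Get_size()          # how many copies are running

if rank == 0:
    data = np.arange(1_000_000, dtype="f8")   # root holds the full problem
    pieces = np.array_split(data, size)       # divide it, one piece per rank
else:
    pieces = None

piece = comm.scatter(pieces, root=0)          # send each rank its piece
partial = float(np.square(piece).sum())       # every rank works independently
total = comm.reduce(partial, op=MPI.SUM, root=0)   # collect and combine

if rank == 0:
    print(f"sum of squares across {size} ranks: {total:.6e}")
```

The same program runs on every processor; the rank variable tells each copy which piece of the work is its own. The coordination cost Rosner worries about hides inside calls like scatter and reduce, which must touch every processor in the machine.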
DOE has already begun laying the groundwork to tackle these and other challenges. Last year it began
funding three co-design centers, multi-institution cooperatives led by researchers at Los Alamos,
Argonne, and Sandia national laboratories. The centers bring together scientific users who write the
software code and hardware makers to design complex software and computer architectures that work
in the fastest and most energy-efficient manner. It poses a potential clash between scientists who favor
openness and hardware companies that normally keep their activities secret for proprietary reasons.
"But it's a worthy goal," agrees Wilfred Pinfold, Intel's director of extreme-scale programming in
Hillsboro, Oregon.

Coming up with the cash


Solving these challenges will take money, and lots of it. Two years
ago, Simon says, DOE officials estimated that creating an exascale
computer would cost $3 billion to $4 billion over 10 years. That
amount would pay for one exascale computer for classified defense
work, one for nonclassified work, and two 100-petaflops machines
to work out some of the technology along the way.
Those projections assumed that Congress would deliver a promised
10-year doubling of the budget of DOE's Office of Science. But those
assumptions are out the window, Simon says, replaced by the
more likely scenario of budget cuts as Congress tries to reduce
overall federal spending.
Given that bleak fiscal picture, DOE officials must decide how aggressively they want to pursue an exascale computer. "What's the right balance of being aggressive to maintain a leadership position and having the plan sent back to the drawing board by [the Office of Management and Budget]?" Simon asks. "I'm curious to see." DOE's strategic plan, due out next month, should provide some answers.

[Figure: Not so fast. Researchers have some ideas on how to overcome barriers to building exascale machines.]


The rest of the world faces a similar juggling act. China, Japan, the European Union, Russia, and India all
have given indications that they hope to build an exascale computer within the next decade. Although
none has released detailed plans, each will need to find the necessary resources despite these tight
fiscal times.
The victor will reap more than scientific glory. Companies use 57% of the computing time on the
machines on the Top500 List, looking to speed product design and gain other competitive advantages,
Dongarra says. So government officials see exascale computing as giving their industries a leg up.
That's particularly true for chip companies that plan to use exascale designs to improve future
commodity electronics. "It will have dividends all the way down to the laptop," says Peter Beckman, who
directs the Exascale Technology and Computing Initiative at Argonne National Laboratory in Illinois.
The race to provide the hardware needed for exascale computing will be extremely competitive,
Beckman predicts, and developing software and networking technology will be equally important,
according to Dongarra. Even so, many observers think that the U.S. track record and the current
alignment of its political and scientific forces make it America's race to lose.
Whatever happens, U.S. scientists are unlikely to be blindsided. The task of building the world's first
exascale computer is so complex, Simon says, that it will be nearly impossible for a potential winner to
hide in the shadows and come out of nowhere to claim the prize.