
Science in the Clouds: History, Challenges, and Opportunities

Douglas Thain, University of Notre Dame
GeoClouds Workshop, 17 September 2009



http://www.cse.nd.edu/~ccl

The Cooperative Computing Lab


We collaborate with people who have large-scale computing problems.
We build new software and systems to help them achieve meaningful goals.
We run a production computing system used by people at ND and elsewhere.
We conduct computer science research, informed by real-world experience, with an impact upon problems that matter.

Clouds in the Hype Cycle

Gartner Hype Cycle Report, 2009

What is cloud computing?


A cloud provides rapid, metered access to a virtually unlimited set of resources. This has two significant impacts on users:
End users must have an economic model for the work that they want to accomplish.
Apps must be flexible enough to work with an arbitrary number and kind of resources.

Example: Amazon EC2 Sep 2009


(simplified slightly for discussion)

Small: 1 core, 1.7GB RAM, 160GB disk, at 10 cents/hour
Large: 2 cores, 7.5GB RAM, 850GB disk, at 40 cents/hour
Extra Large: 4 cores, 15GB RAM, 1690GB disk, at 80 cents/hour

And the Simple Storage Service (S3):
15 cents per GB-month stored
17 cents per GB transferred (outside of EC2)
1 cent per 1,000 write operations
1 cent per 10,000 read operations
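A worked example of the economic model these prices imply. The workload here (100 Small instances for one week, 1 TB in S3 for one month) is my own illustration, not from the talk; prices are kept in integer cents to avoid float rounding.

```python
# Estimate rental costs from the EC2/S3 prices listed above.
# The specific workload sizes are illustrative assumptions.
SMALL_CENTS_PER_HOUR = 10      # Small instance: 10 cents/hour
S3_CENTS_PER_GB_MONTH = 15     # S3 storage: 15 cents per GB-month

compute_dollars = 100 * 7 * 24 * SMALL_CENTS_PER_HOUR / 100   # 100 instances x 168 hours
storage_dollars = 1000 * S3_CENTS_PER_GB_MONTH / 100          # 1000 GB for one month

print(compute_dollars)  # 1680.0
print(storage_dollars)  # 150.0
```

This is the "economic model" burden the slide mentions: before running, the user can and must price the work.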

Is Cloud Computing New?


Not entirely; it is a combination of the old ideas of utility computing and distributed computing:
1965  MULTICS
1980  The Cambridge Ring
1987  Condor Distributed Batch System
1990s Clusters, Beowulf, MPI, NOW
1995  Globus, Grid Computing
1999  SETI@home
2001  TeraGrid
2004  Sun Rents CPUs at $1/hour
2006  Amazon EC2 and S3

Clouds Trade CapEx for OpEx

[Figure: cumulative cost over time for the capital expense of ownership, the OpEx of ownership, and the OpEx of cloud computing. The cloud line starts lower but rises continuously, eventually crossing the ownership lines; the figure marks a 2X gap.]

What about grid computing?


A vision much like clouds:
A worldwide framework that would make massive-scale computing as easy to use as an electrical socket.

The more modest realization:
A means for accessing remote computing facilities in their native form, usually for CPU-intensive tasks.

The social context:
Large collaborative efforts between computer scientists and computer-savvy fields, particularly physics and astronomy.

Clouds vs Grids
Grids provide a job execution interface:
Run program P on input A, return the output.
Allows the system to maximize utilization and hide failures, but provides few performance guarantees and inaccurate metering.

Clouds provide resource allocation:
Create a VM with 2GB of RAM for 7 days.
Gives predictable performance and accurate metering, but exposes problems to the user. Can be used to build interactive services.

So: how do I run 1M jobs on 100 servers?

[Figure: the two layers compose. The grid computing layer provides job execution: submit 1M jobs, dispatch jobs, manage load. The cloud computing layer underneath provides resource allocation: allocate 100 CPUs. For example: allocate 100 cores from the cloud, create a Condor pool with 100 nodes on them, then run 1M jobs through the pool.]

Clouds Solve Some Grid Problems


Application compatibility is simplified.
You provide a VM for Linux 2.3.4.1.2.

Performance is reasonably predictable.
10% variations rather than orders of magnitude.

Fewer administrative headaches for the lone user.
A credit card swipe instead of a certificate.

But, Problems New and Old:


How do I reliably execute 1M jobs?
Can I share resources and data with others in the cloud?
How do I authenticate others in the cloud?
Unfortunately, location still matters.
Can we make applications efficiently span multiple cloud providers?
Can we join existing centers with clouds?
(These are all problems contemplated by grid.)

More Open Questions


Can I afford to move my data into the cloud? Can I afford to get it out?
Do I trust the cloud to secure my data?
How do I go about constructing an economic model for my research?
Are there social/technical dangers in putting too many eggs in one basket?
Is pay-as-you-go the proper model for research?
Should universities get out of the data center business?

Clusters, clouds, and grids give us access to unlimited CPUs. How do we write programs that can run effectively in large systems?


MapReduce ( S, M, R )
[Figure: the map function M turns each element of set S into (key, value) pairs; the pairs are grouped by key (Key0 ... KeyN); and the reduce function R collapses each group of values into one output (O0 ... ON).]
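A minimal single-machine sketch of the MapReduce(S, M, R) pattern in the figure: M turns each element of S into (key, value) pairs, the pairs are grouped by key, and R reduces each group to one output. The word-count choice of M and R is illustrative, not from the talk; a real MapReduce distributes the map and reduce phases across a cluster.

```python
# In-process sketch of MapReduce(S, M, R); sequential, for semantics only.
from collections import defaultdict

def map_reduce(S, M, R):
    groups = defaultdict(list)
    for item in S:
        for key, value in M(item):          # map phase: item -> (key, value) pairs
            groups[key].append(value)
    return {key: R(key, values)             # reduce phase: one output per key
            for key, values in groups.items()}

if __name__ == "__main__":
    lines = ["the quick fox", "the lazy dog"]
    counts = map_reduce(lines,
                        lambda line: [(w, 1) for w in line.split()],
                        lambda key, values: sum(values))
    print(counts["the"])  # 2
```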

Of course, not all science fits into the Map-Reduce model!


Example: Biometrics Research


Goal: design a robust face comparison function.

[Figure: example image pairs scored by the comparison function, one at 0.97 and one at 0.05.]

Similarity Matrix Construction

[Figure: a similarity matrix being filled in cell by cell, with 1.0 along the diagonal.]

Challenge workload:
60,000 iris images, 1MB each
0.02s per F
833 CPU-days
600 TB of I/O
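The CPU-days figure above follows directly from the stated numbers: 60,000 images compared all-to-all at 0.02 s per invocation of F.

```python
# Check the challenge-workload arithmetic stated above.
images = 60_000
comparisons = images * images               # 3.6 billion invocations of F
cpu_days = comparisons * 0.02 / (24 * 3600) # seconds -> days
print(round(cpu_days))  # 833
```

(The 600 TB of I/O depends on how the data movement is organized, so it is not a one-line check like the CPU time.)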

I have 60,000 iris images acquired in my research lab. I want to reduce each one to a feature space, and then compare all of them to each other. I want to spend my time doing science, not struggling with computers.

I have a laptop. I own a few machines. I can buy time from Amazon or TeraGrid.

Now What?


Non-Expert User Using 500 CPUs


Try 1: Each F is a batch job. Failure: Dispatch latency >> F runtime. Try 2: Each row is a batch job. Failure: Too many small ops on FS.

CPU CPU CPU CPU CPU F F F F F

F F F F F F F F F F F CPU CPU CPU CPU F F F F CPU F F F F F

HN

HN

Try 3: Bundle all files into one package. Failure: Everyone loads 1GB at once.

Try 4: User gives up and attempts to solve an easier or smaller problem.

F F F F F F F F F F F CPU CPU CPU CPU F F F F CPU F F F F F

HN
25

Observation
In a given field of study, many people repeat the same pattern of work many times, making slight changes to the data and algorithms.

If the system knows the overall pattern in advance, then it can do a better job of executing it reliably and efficiently.

If the user knows in advance what patterns are allowed, then they have a better idea of how to construct their workloads.

Abstractions for Distributed Computing


Abstraction: a declarative specification of the computation and data of a workload. A restricted pattern, not meant to be a general purpose programming language. Uses data structures instead of files. Provide users with a bright path. Regular structure makes it tractable to model and predict performance.
27

Working with Abstractions


[Figure: the user states AllPairs( A, B, F ) over compact data structures (sets A1..An and B1..Bn, plus a function F); a custom workflow engine expands the specification and executes it on a cloud or grid.]

All-Pairs Abstraction
AllPairs( set A, set B, function F ) returns matrix M
where M[i][j] = F( A[i], B[j] ) for all i,j

Invoked from the command line as: allpairs A B F.exe

[Figure: sets A and B along the edges of the matrix; every cell is one invocation of F.]
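A sequential sketch of the All-Pairs semantics above: M[i][j] = F(A[i], B[j]) for all i, j. The production engine distributes these invocations across a cluster; this shows only what is computed. The toy set-overlap F is my own stand-in for an image comparison function.

```python
# All-Pairs semantics, evaluated sequentially.
def all_pairs(A, B, F):
    return [[F(a, b) for b in B] for a in A]

# Toy comparison function: number of characters two strings share.
overlap = lambda x, y: len(set(x) & set(y))
M = all_pairs(["ab", "cd"], ["ac", "bd", "cd"], overlap)
print(M)  # [[1, 1, 0], [1, 1, 2]]
```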

How Does the Abstraction Help?


The custom workflow engine:
Chooses the right data transfer strategy.
Chooses the right number of resources.
Chooses the blocking of functions into jobs.
Recovers from a large number of failures.
Predicts overall runtime accurately.

All of these tasks are nearly impossible for arbitrary workloads, but are tractable (not trivial) to solve for a specific abstraction.


Choose the Right # of CPUs


Resources Consumed


All-Pairs in Production
Our All-Pairs implementation has provided over 57 CPU-years of computation to the ND biometrics research group over the last year.

Largest run so far: 58,396 irises from the Face Recognition Grand Challenge, the largest experiment ever run on publicly available data. Competing biometric research relies on samples of 100-1000 images, which can miss important population effects.

Reduced computation time from 833 days to 10 days, making it feasible to repeat multiple times for a graduate thesis. (We can go faster yet.)

Are there other abstractions?


Wavefront( matrix M, function F(x,y,d) ) returns matrix M
such that M[i,j] = F( M[i-1,j], M[i,j-1], M[i-1,j-1] )

[Figure: the first row and column of M (M[0,*] and M[*,0]) are given; each interior cell applies F to its three already-computed neighbors (x, y, and the diagonal d), so the computation sweeps across the matrix as a diagonal wavefront from M[0,0] toward M[4,4].]
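A sequential sketch of the Wavefront recurrence above: M[i][j] = F(M[i-1][j], M[i][j-1], M[i-1][j-1]) given an initialized first row and column. A distributed implementation evaluates each anti-diagonal in parallel; this shows only the data dependency. The border of ones and the choice F = x + y + d are arbitrary toy inputs.

```python
# Wavefront semantics, evaluated sequentially.
def wavefront(M, F):
    n = len(M)
    for i in range(1, n):
        for j in range(1, n):
            # Each cell depends on its west (x), south (y),
            # and southwest-diagonal (d) neighbors.
            M[i][j] = F(M[i-1][j], M[i][j-1], M[i-1][j-1])
    return M

M = wavefront([[1] * 4 for _ in range(4)], lambda x, y, d: x + y + d)
print(M[3][3])  # 63
```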

Applications of Wavefront
Bioinformatics:
Compute the alignment of two large DNA strings in order to find similarities between species. Existing tools do not scale up to complete DNA strings.

Economics:
Simulate the interaction between two competing firms, each of which has an effect on resource consumption and market price. E.g., when will we run out of oil?

Applies to any kind of optimization problem solvable with dynamic programming.

Problem: Dispatch Latency


Even with an infinite number of CPUs, dispatch latency controls the total execution time: O(n) in the best case. However, job dispatch latency in an unloaded grid is about 30 seconds, which may outweigh the runtime of F. Things get worse when queues are long! Solution: Build a lightweight task dispatch system. (Idea from Falkon@UC)
39

[Figure: 1000s of workers are dispatched to the cloud. The wavefront engine feeds tasks into a work queue and collects completed tasks. Each worker executes the protocol: put F.exe, put in.txt, exec F.exe <in.txt >out.txt, get out.txt.]
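An in-process sketch of the put/exec/get worker protocol shown in the figure. Real workers speak this protocol over the network to the master; here a single "worker" services one task locally. The file names and the `tr` command are illustrative stand-ins, not from the talk.

```python
# Simulate one worker servicing one task: put inputs, exec, get outputs.
import os, subprocess, tempfile

def run_task(sandbox, inputs, command, outputs):
    """put the input files, exec the command, get the output files."""
    for name, data in inputs.items():                  # "put" steps
        with open(os.path.join(sandbox, name), "w") as f:
            f.write(data)
    subprocess.run(command, shell=True, cwd=sandbox, check=True)   # "exec" step
    return {name: open(os.path.join(sandbox, name)).read()         # "get" steps
            for name in outputs}

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as box:
        out = run_task(box, {"in.txt": "hello"},
                       "tr a-z A-Z <in.txt >out.txt", ["out.txt"])
        print(out["out.txt"])  # HELLO
```

Because the worker process persists between tasks, the per-task cost is a few file transfers rather than a full batch-system dispatch.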

Problem: Performance Variation


Tasks can be delayed for many reasons:
Heterogeneous hardware.
Interference with disk/network.
Policy-based suspension.

Any delayed task in Wavefront has a cascading effect on the rest of the workload.

Solution (Fast Abort): keep statistics on task runtimes, and abort those that lie significantly outside the mean. Prefer to assign jobs to machines with a fast history.
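A sketch of the Fast Abort heuristic above: keep statistics on completed task runtimes, and flag a running task for abort once it lies far outside the mean. The 3x multiplier and the 5-sample warm-up are my own assumptions; the talk does not give thresholds.

```python
# Track completed-task runtimes; abort stragglers relative to the mean.
import statistics

class FastAbort:
    def __init__(self, multiplier=3.0, min_samples=5):
        self.runtimes = []
        self.multiplier = multiplier
        self.min_samples = min_samples

    def record(self, seconds):
        """Record the runtime of a completed task."""
        self.runtimes.append(seconds)

    def should_abort(self, elapsed):
        """True once a running task exceeds multiplier * mean runtime."""
        if len(self.runtimes) < self.min_samples:
            return False                 # not enough history yet
        return elapsed > self.multiplier * statistics.mean(self.runtimes)
```

The engine polls `should_abort` for each running task and resubmits aborted tasks elsewhere, preferring machines with a fast history.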

500x500 Wavefront on ~200 CPUs


Wavefront on a 200-CPU Cluster


Wavefront on a 32-Core CPU


The Genome Assembly Problem


[Figure: chemical sequencing shatters a genome such as AGTCGATCGATCGATAATCGATCCTAGCTAGCTACGA into millions of overlapping reads, each 100s of bytes long (e.g. AGTCGATCGATCGAT, TCGATAATCGATCCTAGCTA, AGCTAGCTACGA); computational assembly aligns the overlapping reads to reconstruct the original sequence.]

Sample Genomes
Genome                 Reads   Data    Sequential Pairs   Time
A. gambiae scaffold    101K    80MB    738K               12 hours
A. gambiae complete    180K    1.4GB   12M                6 days
S. bicolor simulated   7.9M    5.7GB   84M                30 days

Some-Pairs Abstraction
SomePairs( set A, list L of (i,j), function F(x,y) )
returns list of F( A[i], A[j] ) for each (i,j) in L

[Figure: given A = {A1..An} and the pair list (1,2), (2,1), (2,3), (3,3), only those cells of the matrix are computed, rather than the full cross product.]
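A sequential sketch of the Some-Pairs semantics above: F is applied only to the listed (i, j) index pairs of A, not to the full cross product. The toy F and pair list are illustrative; in genome assembly the pair list comes from a candidate-overlap filter.

```python
# Some-Pairs semantics, evaluated sequentially.
def some_pairs(A, pairs, F):
    return [F(A[i], A[j]) for (i, j) in pairs]

results = some_pairs(["a", "bb", "ccc"], [(0, 1), (2, 2)],
                     lambda x, y: len(x) + len(y))
print(results)  # [3, 6]
```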

Distributed Genome Assembly


[Figure: the somepairs master feeds alignment tasks into a work queue; 100s of workers dispatched to Notre Dame, Purdue, and Wisconsin pull tasks. Detail of a single worker: put align.exe, put in.txt, exec F.exe <in.txt >out.txt, get out.txt.]

Small Genome (101K reads)


Medium Genome (180K reads)


Large Genome (7.9M reads)

What's the Upshot?

We can do full-scale assemblies as a routine matter on existing conventional machines.
Our solution is faster (in wall-clock time) than the next-fastest assembler run on a 1024-node BG/L.
You could almost certainly do better with a dedicated cluster and a fast interconnect, but such systems are not universally available.
Our solution opens up research in assembly to labs with NASCAR rather than Formula One hardware.

What if your application doesn't fit a regular pattern?

Makeflow
part1 part2 part3: input.data split.py
    ./split.py input.data

out1: part1 mysim.exe
    ./mysim.exe part1 >out1

out2: part2 mysim.exe
    ./mysim.exe part2 >out2

out3: part3 mysim.exe
    ./mysim.exe part3 >out3

result: out1 out2 out3 join.py
    ./join.py out1 out2 out3 > result
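A minimal sketch of Makeflow-style execution: run each rule only once all of its source files exist, so rules run in dependency order no matter how they are listed. A real makeflow parses rules from the file and dispatches them to remote workers; this runs them locally, and the toy head/tail/tr/cat commands are my own stand-ins for split/mysim/join.

```python
# Execute (targets, sources, command) rules in file-dependency order.
import os, subprocess, tempfile

def run(rules):
    """rules: list of (targets, sources, command) tuples."""
    pending = list(rules)
    while pending:
        # A rule is ready when every source file already exists.
        ready = [r for r in pending
                 if all(os.path.exists(s) for s in r[1])]
        if not ready:
            raise RuntimeError("missing input or dependency cycle")
        for rule in ready:
            subprocess.run(rule[2], shell=True, check=True)
            pending.remove(rule)

if __name__ == "__main__":
    os.chdir(tempfile.mkdtemp())
    with open("input.data", "w") as f:
        f.write("a\nb\n")
    run([
        (["result"], ["out1", "out2"], "cat out1 out2 > result"),
        (["out1"], ["part1"], "tr a-z A-Z <part1 >out1"),
        (["out2"], ["part2"], "tr a-z A-Z <part2 >out2"),
        (["part1", "part2"], ["input.data"],
         "head -1 input.data >part1; tail -1 input.data >part2"),
    ])
    print(open("result").read())
```

Note the "result" rule is listed first but runs last, because its inputs do not exist until the other rules produce them.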

Makeflow Implementation
bfile: afile prog
    prog afile >bfile

[Figure: the makeflow master turns each rule, like the one above, into tasks in a work queue; 100s of workers dispatched to the cloud pull them. Detail of a single worker: put prog, put afile, exec prog afile > bfile, get bfile.]

Two optimizations:
Cache inputs and outputs at the workers.
Dispatch tasks to nodes with the data.

Experience with Makeflow


Still in initial deployment, so no big results to show just yet. Easy to test and debug on a desktop machine or a multicore server. The workload says nothing about the distributed system. (This is good.) Graduate students in bioinformatics running codes at production speeds on hundreds of nodes in less than a week.
56

Abstractions as a Social Tool


Collaboration with outside groups is how we encounter the most interesting, challenging, and important problems, in computer science. However, often neither side understands which details are essential or non-essential:
Can you deal with files that have upper case letters? Oh, by the way, we have 10TB of input, is that ok? (A little bit of an exaggeration.)

An abstraction is an excellent chalkboard tool:


Accessible to anyone with a little bit of mathematics. Makes it easy to see what must be plugged in. Forces out essential details: data size, execution time.
57

Conclusion
Grids, clouds, and clusters provide enormous computing power, but are very challenging to use effectively.

An abstraction provides a robust, scalable solution to a narrow category of problems; each requires different kinds of optimizations.

Limiting expressive power results in systems that are usable, predictable, and reliable.

Is there a menu of abstractions that would satisfy many consumers of clouds?

Acknowledgments
Cooperative Computing Lab
http://www.cse.nd.edu/~ccl

Faculty:
Patrick Flynn, Nitesh Chawla, Kenneth Judd, Scott Emrich

Grad Students:
Chris Moretti, Hoang Bui, Li Yu, Mike Olson, Michael Albrecht

Undergrads:
Mike Kelly, Rory Carmichael, Mark Pasquier, Christopher Lyon, Jared Bulosan

NSF Grants CCF-0621434, CNS-0643229