
CriB 2010 Seminar Series

Scientific Computing on the Cloud:


Many Task Computing and other
opportunities

Constantinos Evangelinos, Chris Hill
MIT, Earth, Atmospheric and Planetary Sciences

Pierre F. J. Lermusiaux, Jinshan Xu, Patrick J. Haley Jr.
MIT, Mechanical Engineering

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Outline
● Many Task Computing
● ESSE as an MTC application
● ESSE on clusters, grids and Amazon EC2
● Amazon EC2 for HPC?
● Amazon EC2 for education
● Conclusions

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Motivation
● Could cloud computing be in our future for
climate (ocean and coupled climate) models?
– Can it be useful for more than EP or Map-Reduce
type of applications?
– Are the days of having to purchase, install and
maintain personal clusters coming to an end?
– Could grant money buy cloud cycles some day?
– Can it be used for HPC instruction?
– Can it be used for Geosciences education?
● What about HPC performance in a virtual
machine environment?
– Issues and middleware
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Many Task Computing
● Loose definition by Foster et al.: high-performance computations
comprising multiple distinct activities, coupled via (for example) file
system operations or message passing. Tasks may be small or large,
uniprocessor or multiprocessor, compute-intensive or data-intensive.
The set of tasks may be static or dynamic, homogeneous or
heterogeneous, loosely or tightly coupled. The aggregate number of
tasks, quantity of computing, and volumes of data may be extremely
large.
● What it is not:
– Plain MPMD (unless one speaks of dynamic/heterogeneous)
– Workflow (only part of the story)
– Capacity computing
– High Throughput computing
– Embarrassingly parallel computing
● Instead of a jobs-per-day metric, the metric is units of work completed per second or per hour.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
DA Motivation
● Improve the forecasting capabilities of ocean
data assimilation and related fields via
increased access to parallelism
● Move existing computational framework to a
more modern, non-site specific setup
● Test the opportunities for executing massive
task count workflows on distributed clusters,
Grid and Cloud platforms.
● Provide an external outlet to handle peak-
demand for compute resources during live
experiments in the field

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Ocean Data Assimilation
dx = M(x, t) dt + dη;  M the model operator
y_k^o = H(x_k, t_k) + ε_k;  H the measurement operator
min_x J(x_k, y_k^o; dη, ε_k, Q(t), R_k);  J the objective function


Model errors are assumed Brownian:
dη = N(0, Q(t)) with E{dη(t) dη(t)^T} = Q(t) dt
(in fact the models are forced by processes with noise correlated in space and time,
e.g. meteorological forcing)
Measurement errors are assumed white Gaussian:
ε_k = N(0, R_k)
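As a concrete illustration of these noise assumptions, here is a minimal numpy sketch; all sizes, covariances and states below are toy stand-ins, not the actual ESSE configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_obs, dt = 1000, 50, 3600.0     # toy sizes and a 1-hour step (illustrative only)

# Toy covariances: Q(t) for the model error, R_k for the measurement error (diagonal for simplicity).
Q = 1e-4 * np.eye(n_state)
R = 0.25 * np.eye(n_obs)

# Brownian model-error increment: dη ~ N(0, Q dt), so that E{dη dη^T} = Q dt.
d_eta = rng.multivariate_normal(np.zeros(n_state), Q * dt)

# White Gaussian measurement error: ε_k ~ N(0, R_k).
eps_k = rng.multivariate_normal(np.zeros(n_obs), R)

# An ensemble of perturbed initial conditions, in the spirit of the ESSE "pert" step.
n_members = 600
x0 = rng.standard_normal(n_state)         # stand-in for the central (nowcast) state
P0 = 0.01 * np.eye(n_state)               # stand-in initial error covariance
ic_ensemble = x0 + rng.multivariate_normal(np.zeros(n_state), P0, size=n_members)
# (the real ESSE pert step perturbs along the dominant error-subspace directions instead)
```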
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Ocean Acoustics
Estimate of the ocean temperature and salinity
fields (and uncertainties) necessary for calculating
acoustic fields and their uncertainties.
Sound-propagation studies often focus on vertical
sections. Time is fixed and an acoustic broadband
transmission loss (TL) field is computed for each
ocean realization.
A sound source of specific frequency, location
and depth is chosen. The coupled physical-
acoustical covariance P for the section is
computed and non-dimensionalized and used for
assimilation of hydrographic and TL data.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Acoustic climatology maps
[Maps over a 77 km x 65 km section: mean transmission loss (TL, ~55-65 dB), TL STD over depth
(~0.1-1.3 dB) and TL STD over bearing (~0.1-3 dB); annotations mark the effect of internal tides and
the effect of steep bathymetry.]
● Underwater acoustics transmission loss variability predictions in a 56 x 33 km area northeast of Taiwan.
● 2D propagation over a 15 km range at 31x31 = 961 grid points x 8 directions.
● Each job is a short (~3 minute) 2D acoustic ray propagation problem.
● Distributed over 100 dual-core compute nodes; speedup of more than 100x in a real-time experiment
(limited by the SGE overhead of scheduling short jobs) – see the back-of-the-envelope check below.
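A back-of-the-envelope check of the numbers above (a sketch, using only the figures quoted on this slide):

```python
# Rough check of the acoustics ensemble arithmetic quoted on this slide.
n_tasks = 31 * 31 * 8               # 961 grid points x 8 directions = 7688 short jobs
minutes_per_task = 3                # each ~3-minute 2D ray propagation run
serial_hours = n_tasks * minutes_per_task / 60.0    # ~384 h of serial work

slots = 100 * 2                     # 100 dual-core nodes
ideal_wall_hours = serial_hours / slots             # ~1.9 h if perfectly packed

print(f"serial work ~{serial_hours:.0f} h, ideal wall ~{ideal_wall_hours:.1f} h (speedup {slots}x)")
# The observed "more than 100x" speedup sits between these bounds; the gap is
# attributable to SGE scheduling overhead for such short jobs.
```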
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Canyon Nx2D acoustics modeling

OMAS – moving sound source

Bathymetry of Mien Hua Canyon

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
AOSN-II Monterey Bay

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Error Subspace Statistical Estimation

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
ESSE Surf. Temp. Error Standard Deviation Forecasts for AOSN-II
Leonard and Ramp, Lead PIs
[Surface temperature error standard deviation maps for Aug 12, 13 and 14 (start of upwelling, first
upwelling period) and for Aug 24, 27 and 28 (end of relaxation, second upwelling period).]


MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Serial and Parallel ESSE workflows

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
The ESSE workflow engine
● Is actually (for historical and practical reasons)
a heavily modified C-shell script (master)!
– Catches signals to kill all remaining jobs
● Grid Engine, Condor and PBS variants
– Submits and tracks singleton jobs
● Or uses job arrays for scalability
– Further variants depending on I/O strategy:
● Separate pert singletons?
● Input/output to shared or local disk (or mixed)?
● Shared directories store files with the execution
status of each of the singleton scripts
● Singletons need the perturbation number: tricks! (see the sketch below)
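The real engine is a heavily modified C-shell script with Grid Engine/Condor/PBS variants; the Python sketch below only illustrates the pattern (job-array submission, a shared status directory, the array task ID standing in for the perturbation number). All file names and paths are hypothetical.

```python
import glob
import os
import signal
import subprocess
import time

N_PERT = 600                                 # ensemble size (example)
STATUS_DIR = "/shared/esse_run/status"       # shared directory holding per-singleton status files
os.makedirs(STATUS_DIR, exist_ok=True)

# Submit all singletons as one SGE job array; each task reads $SGE_TASK_ID
# as its perturbation number (the "trick" mentioned above).
submit = subprocess.run(
    ["qsub", "-terse", "-t", f"1-{N_PERT}", "singleton.sh", STATUS_DIR],
    capture_output=True, text=True, check=True)
job_id = submit.stdout.split(".")[0].strip()

def kill_remaining(signum, frame):
    subprocess.run(["qdel", job_id])         # catch signals and kill all remaining jobs
    raise SystemExit(1)

signal.signal(signal.SIGINT, kill_remaining)
signal.signal(signal.SIGTERM, kill_remaining)

# Poll the shared status files until enough members are done (or the forecast deadline passes).
deadline = time.time() + 2 * 3600
while time.time() < deadline:
    done = len(glob.glob(os.path.join(STATUS_DIR, "done.*")))
    if done >= N_PERT:
        break
    time.sleep(30)
```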
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Multi-level parallelism in ESSE
● Nested ocean model
runs (HOPS) are run
in parallel
– Limited parallelism
– 2 or 3 levels
– bi-directional
● SVD calculation is
based on
parallelizable
LAPACK routines
● The convergence-check calculation is also parallelizable.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
ESSE and ocean acoustics
● As things stand ESSE is used to provide the
necessary temperature and salinity information
for sound propagation studies.
● The ESSE framework can also be extended to
acoustic data assimilation. With significantly
more compute power one can compute the
whole “acoustic climate” in a 3D region
– providing TL for any source and receiver
locations in the region as a function of time
and frequency,
– by running multiple independent tasks for
different sources/frequencies/slices at
different times.
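A sketch of how such an "acoustic climate" decomposes into independent MTC tasks; the parameter values and counts below are illustrative, not the production configuration.

```python
from itertools import product

# Illustrative task grid: one independent TL run per
# (source location, frequency, vertical slice/bearing, time).
sources     = [(25.5, 122.2, 100.0)]        # (lat, lon, source depth in m) – example only
frequencies = [200.0, 400.0, 800.0]         # Hz (example values)
bearings    = range(0, 360, 2)              # Nx2D vertical slices every 2 degrees
times       = range(0, 48, 6)               # forecast hours (example)

tasks = [
    {"source": src, "freq_hz": f, "bearing_deg": b, "t_hours": t}
    for src, f, b, t in product(sources, frequencies, bearings, times)
]
print(len(tasks), "independent TL tasks")   # each one a short acoustics run to be farmed out
```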
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Canyon Nx2D acoustics modeling

● Acoustics transmission loss difference over 6 hours (due to internal tides or other uncertainties).
● In the future this will be incorporated with ESSE for uncertainty estimation; the computational cost
will then be 1800 directions x 15 locations x hundreds of cases.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Ocean DA/ESSE/acoustics: MTC
● A minimum of hundreds to thousands (and with
increased fidelity tens of thousands) of ocean
model runs (tens of minutes or more) preceded
by an equal number of IC perturbations (secs)
● File I/O intensive, both for reading and writing
● Concurrent reads to forcing files etc.
● Thousands of short acoustics runs (mins)
● Future directions for ESSE will generate even
more tasks:
– dynamic path sampling for observing assets
– combined physical-acoustical ESSE
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
“Real-time” experiments

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Notable differences
From many parameter sweeps and other MTC apps:
● there is a hard deadline associated with the execution of the
ensemble workflow, as a forecast needs to be timely;
● the size of the ensemble is dynamically adjusted according to
the convergence of the ESSE workflow which is not a DAG;
● individual ensemble members are not significant (and their
results can be ignored if unavailable) - what is important is the
statistical coverage of the ensemble;
● the full resulting dataset of each ensemble member forecast is
required, not just a small set of numbers; the ICs are different for
each ensemble member;
● individual forecasts within an ensemble, especially in the case
of interdisciplinary interactions and nested meshes, can be
parallel programs themselves.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
And their implications
● Deadline: use any Advanced Reservation capabilities available
● Dynamic: means that the actual total compute and data
requirements for the forecast are not known beforehand and
change dynamically
● Dropped members: suggests that failures (due to software or
hardware problems) are not catastrophic and can be tolerated.
Moreover runs that have not finished (or even started) by the
forecast deadline can be safely ignored provided they do not
collectively represent a systematic hole in the statistical
coverage.
● I/O needs: mean that relatively high data storage and network
bandwidth constraints will be placed on the underlying
infrastructure
● Parallel ensemble members: mean that the compute
requirements will not be insignificant either.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Ocean DA on local clusters
● Local Opteron cluster
– Opteron 250 2.4GHz (4GB RAM) compute
nodes (single gigabit network connection)
– Opteron 2380 2.5GHz (24GB RAM) head node
– 18TB of shared disk (NFS) over 10Gbit Ethernet
– 200Gbit switch backplane
– Grid Engine and Condor co-existing
● Tried both GridEngine and Condor versions of
ESSE workflows. Test 600 member ensemble:
– I/O optimizations (all local dirs) reduced the runtime from 86 to 77 mins
– SGE 10-20% faster than Condor
● without heroic tuning of the latter
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Ocean DA on the Teragrid
● Extensive use of sshfs to share directories for
checking state of runs etc.
● Remote job submissions (over (gsi)ssh)
– part of driver and modified singletons
● Or Condor-C and Glide-in with care if root
● Condor-G will not scale
● Or Personal Condor & Mycluster
System   cores   pert (s)   pemodel (s)
ORNL       2      67.83      1823.99
Purdue     4       6.25      1107.40
local      2       6.21      1531.33
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Advantages of the Teragrid
● Enormous numbers of theoretically available
cores and very large sizes for storage
– Condor pool nominally 14-27k cores (~1800 in practice)
● Shared high-speed parallel filesystems
● High speed connections to the home cluster
● Suites of Grid software for remote file access
and job submission, control etc.
– Mixed blessing...
● Free, once a proposal has been written to convince the
Teragrid to allocate the SUs...

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Disadvantages of the Teragrid
● Very large heterogeneity in hardware, O/S and paths (to scratch disks etc.), requiring modifications
to the singleton code – and causing user confusion.
● Without advance reservations, reaching the desired number of processors within the deadline may
require using multiple Teragrid sites.
– Backfilling can help but per user job limits also limit
the usability of a single Teragrid site
– Schedulers favor large processor count runs
– Complicated tricks to submit many jobs as one
● Teragrid MPPs not always suitable for scripts
● Careful fetching of results back to home (congestion)
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Ocean DA on the Cloud
● We have been experimenting with the use of
Cloud computing for more traditional HPC
usage – including parallel runs of I/O intensive
data parallel ocean models such as MITgcm.
● Given the limitations seen in network
performance it was natural to try and
investigate the usability of Amazon EC2 for
MTC applications such as ESSE.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Cloud Modes of usage
● Stand-alone (batch) on-demand EC2 cluster
– Torque or SGE (all-in-the-cloud or remote submits)
● Augmented local cluster with EC2 nodes
– We have a Torque setup
– Used recipes for SGE setup.
– Condor use of EC2 too restrictive
– MyCluster dynamic SGE or Condor merged clusters
– Commercial (Univa Unicloud, Sun Cloud
Adapter in Hedeby/SDM) for fully dynamic
provisioning
● Experimentation with parallel filesystems:
PVFS2/GlusterFS/FhGFS
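As an illustration of the stand-alone on-demand mode, a minimal sketch using the boto EC2 API to start a batch of worker instances; the AMI ID, key pair and security group are placeholders, and the actual setups used Torque/SGE provisioning scripts rather than this exact code.

```python
import time
import boto.ec2

# Connect to EC2 (credentials come from the environment or ~/.boto).
conn = boto.ec2.connect_to_region("us-east-1")

# Start an on-demand "virtual cluster" of worker nodes from a pre-built AMI
# that already contains the model binaries and the batch-system client.
reservation = conn.run_instances(
    "ami-12345678",              # placeholder AMI ID
    min_count=20, max_count=20,  # the default EC2 instance limit at the time
    instance_type="c1.xlarge",
    key_name="esse-key",         # placeholder key pair
    security_groups=["esse-cluster"])

# Wait until the nodes are running, then hand their hostnames to the scheduler.
nodes = reservation.instances
while any(i.update() != "running" for i in nodes):
    time.sleep(15)
print([i.public_dns_name for i in nodes])
```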
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Serial pert/pemodel performance
System       cores   pert (s)   pemodel (s)
m1.small      0.5     13.53      2850.14
m1.large      2        9.33      1817.13
m1.xlarge     4        9.14      1860.81
c1.medium     2        9.80      1008.11
c1.xlarge     8        6.67      1030.42
m2.2xlarge    4        3.39       779.77
m2.4xlarge    8        3.35       790.86
● m1.xxxx AMIs use Opteron processors.
● A binary optimized with the Pathscale compilers was used.
● All cores were loaded.
● I/O is to local disk (EBS is slower, and so is the NFS share used for the centrally coordinating
directory of the run).
● Total runtime is reported.
● Better than 2.5x speedup from m1.small to c1.medium.
● Nehalems (m2.xxxxx) are not the best option for price/performance.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Advantages of the Cloud
● For all intents and purposes the response is immediate.
Currently a request for a virtual EC2 cluster gets satisfied on-
demand, without having to worry about queue times and backfill
slots.
● The use of virtual machines allows for deploying the same
environment as the home cluster. This provides for a very clean
integration of the two clusters.
● Having the same software environment also means there is no need to rebuild (and in most cases
revalidate) executables. This means that last-minute changes (because of model build-time parameter
tuning) can be used ASAP instead of having to go through a build-test-deploy cycle on each remote
platform.
● EC2 allows our virtual clusters to scale at will (default limit of 20 instances).
● Since the remote machines are under our complete control,
scheduling software and policies etc. are tuned to our needs.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Cost analysis
● Cost-wise, for example, an ESSE calculation
with 1.5GB of input data and 960 ensemble members
each sending back 11MB (for a total of 10.56GB)
would cost:
– 1.5 (GB) x $0.10 + 10.56 (GB) x $0.17 for the data
– 2 (hr) x 20 (instances) x $0.68 for the computation
– for a total of $29.15 (see the sketch below)
● Use of reserved instances would drop pricing
for the cpu usage by more than a factor of 3.
● Compare that to the cost of overprovisioning
your local cluster resources to handle the peak
load required a few times a year.
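The same arithmetic, spelled out as a small sketch (prices as quoted on this slide):

```python
# ESSE-on-EC2 cost estimate using the 2009/2010 on-demand prices quoted on this slide.
data_in_gb  = 1.5
data_out_gb = 960 * 11 / 1000.0          # 960 members x 11 MB returned ≈ 10.56 GB

price_in, price_out = 0.10, 0.17         # $/GB transferred into / out of EC2
price_cpu_hour      = 0.68               # $/instance-hour for the 8-core instance type quoted in this deck
instances, hours    = 20, 2              # default limit of 20 instances for ~2 hours

data_cost    = data_in_gb * price_in + data_out_gb * price_out
compute_cost = hours * instances * price_cpu_hour
print(f"data ${data_cost:.2f} + compute ${compute_cost:.2f} = ${data_cost + compute_cost:.2f}")
# -> data $1.95 + compute $27.20 = $29.15
```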
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Disadvantages of the Cloud
● Inhomogeneity needs to be kept in mind or it will bite you
● Any extra security issues need to be worked out.
● EC2 usage needs to be paid directly to Amazon. Amazon charges by the (started) hour – like a cell
phone plan, 1 hour and 1 second counts as 2 hours. There are also charges for data movement into
and out of EC2.
● The performance of virtual machines is less than that of “bare
metal”, the difference more pronounced when it comes to I/O.
● No persistent large parallel filesystem. One can be constructed
on demand (just like the virtual clusters) but the Gigabit
Ethernet connectivity used throughout Amazon EC2 alongside
the randomization of instance placement mean that parallel
performance of the filesystem is not up to par. Horror stories...
● Unlike national and state supercomputing facilities, Amazon’s
connections to the home cluster are bound to be slower and
result in file transfer delays.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Future work directions
● Reimplement the workflow engine.
– Considering Swift – other options? Nimrod?
● Generalize the ESSE workflow engine away from a single model:
– Use with other ocean models (MITgcm, ROMS)
● Expand production use of ESSE:
– Heterogeneous sites on the Teragrid
– Open Science Grid
– MPPs with sufficient support: Blue Gene/P?
● Expand uses for ESSE (and number of tasks):
– ESSE for Acoustics
– ESSE for adaptive sampling
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Which sampling on Aug 26 optimally reduces uncertainties on Aug 27?
4 candidate tracks, overlaid on surface T fct for Aug 26

• Based on nonlinear error covariance evolution
• For every choice of adaptive strategy, an ensemble is computed
• Best predicted relative error reduction: track 1
[Schematic: from the IC (nowcast) on Aug 24, a 2-day ESSE forecast is run to Aug 26; the data from
each candidate track (DA 1-4) are then assimilated and an ESSE forecast for each track is run out to
Aug 27.]

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Memory Bandwidth
[Bar chart: memory bandwidth with 1 thread and with N threads (per-thread figures) for m1.small,
c1.medium and a 1.4 GHz Opteron reference system.]

● The small instance memory bandwidth appears to be equal to the full memory bandwidth expected
from such a platform despite the 50% CPU time throttler – not entirely unexpected for memory bandwidth.
● The faster CPU in the c1.medium instance does considerably worse. In fact an original 1st-generation
1.4 GHz Opteron system also does worse (the DDR2 memory in the m1.small instance should help).
● This suggests that for memory-bandwidth-limited applications the small instance may be the most
efficient.
● The increase of memory bandwidth with the c1.medium instance suggests that the 2 cores are not on
the same die; this would be an Amazon placement policy.
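For reference, a very rough way to reproduce this kind of number: the sketch below is a single-threaded, numpy-based stand-in for a STREAM-style copy measurement, not the benchmark actually used for the figures above.

```python
import time
import numpy as np

# Very rough single-threaded stand-in for a STREAM "copy" bandwidth measurement.
n = 20_000_000                   # ~160 MB per array, far larger than any cache
a = np.zeros(n)
b = np.random.rand(n)

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    np.copyto(a, b)              # copy kernel: a[:] = b[:]
elapsed = time.perf_counter() - t0

bytes_moved = reps * 2 * 8 * n   # STREAM convention: one read + one write per element
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s")
```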
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Serial Performance
System      Class A   Class W   EP (A)   EP (W)
m1.small      132       149      6.66     6.73
c1.medium     312       357     15.59    15.04
ratio        2.36      2.40      2.34     2.23
NAS NPB serial results (geometric mean of all tests except EP) in Mop/s; EP reported separately.
● Compiled with the system gcc (generic flags).
● A single instance running in the c1.medium case (no memory bandwidth contention).
● The theoretical 1:2.5 ratio becomes 1:2.3.
● Still, the price ratio is 1:2.
● When loading both cores of the c1.medium, the resulting ratio depends on the memory vs. CPU
utilization characteristics of the individual benchmark.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
I/O performance
[Bar chart: serial IOR read and write bandwidth (128KB requests, 1MB blocksize, 100 segments, with
fsync), in MB/s, for /tmp and NFS on m1.small, c1.medium and c1.medium with both cores loaded.]
[Bar chart: serial NAS NPB 3.3 BT-IO (Fortran I/O), in MB/s, for Class A and Class W on /tmp and NFS,
on m1.small, c1.medium and c1.medium with both cores loaded.]

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
MPI performance
[Bar charts: MPI unidirectional and bidirectional bandwidth (MB/s) and latency (us) for LAM, GridMPI,
MPICH2 nemesis, MPICH2 sock, OpenMPI and LAM/ACES.]

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
MPI performance cont.

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
Coupled climate simulation
● The MITgcm (MIT General Circulation Model) is
a numerical model designed for study of the
atmosphere, ocean, and climate.
– MPI (+OpenMP) code base, Fortran 77(/90) with
some C bits – very portable
– Memory bandwidth intensive but not entirely
memory bound – also I/O intensive for climate
applications.
● Coupled ocean-coupler-(atmosphere-land-ice)
model on a ~2.8° cubed sphere (6 32x32 faces)
– MPMD mode, 3 binaries, up to 6+6+1 processes in
a standard configuration.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
ECCO-GODAE
● 1 degree MITgcm ocean simulation (including sea-ice)
that computes costs with respect to misfits to
observational data. Automatic differentiation.
– Followed by an optimization step that generally will not fit on EC2 nodes (large memory)
– The procedure iterates (loops) between the model runs and the optimization step – so a lot of data
transfer is involved.
● 32, 60 or 64 processor runs usually.
● Very I/O intensive (60-120 or more GB input data, 25-
200 or more GB output data that need to be kept,
more in terms of intermediate files).
● Per process I/O useful but bothersome.
● Ensembles of forward runs less I/O demanding (MTC
at large scale?)
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Modes of usage
● Stand-alone (interactive) on-demand EC2
cluster
● Stand-alone (batch) on-demand EC2 cluster
– Torque or SGE
● Augmented local cluster with EC2 nodes
– We have a Torque setup
– Used recipes for SGE setup.
● Project Hedeby
● Parallel filesystems: PVFS2/GlusterFS/FhGFS
● Inhomogeneity needs to be kept in mind
● Security issues need to be worked out.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Optimizing compiler issues
[Bar chart: NAS NPB serial performance (Class W and Class A) for gcc 4.1, Open64 4.1, Pathscale 2.5,
PGI 6.1, Absoft 10, Intel 9.1 and Sun Studio 12.]

● Two high-performance compilers can be deployed without licensing issues for academics and may
perform better: Open64 and Sun Studio 12.
– The latter provides an 11.5% performance boost for the geometric mean
of the tests (up to 25% for MG).
– The MPI runtime may need to be rebuilt for the new compiler every time
● To use the Intel, Absoft, PGI compilers one can employ a local virtual
machine with a valid software license using the same OS and middleware as
the virtual cluster and then run the executables on the EC2 cluster.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
The economics of Clouds
● So can we move to an all-cloud option for our HPC needs?
– The enticement: no more worrying about hardware maintenance, upgrades, network administration,
and possibly system administration (using pre-configured clusters), leading to lower costs.
– At the same time, "virtual" clusters retain part of the "cluster"-hugging mentality of some users.
– And at the institute level:
– No need to worry about building/renovating/retrofitting datacenters
– And, most importantly in days of increasing energy costs, you don't see electricity bills anymore
– The carbon impact becomes someone else's problem.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
An exercise
● Part of an effort at MIT to investigate future needs:
– $0.68/hour for a 2-CPU, 8-core Xeon instance on Amazon EC2 (the cheapest option offering the
fullest flexibility currently available)
– The cost of the equivalent of 158 low-density 2U racks with 21 nodes per rack is
158 x 21 x $0.68 = $2,256.24 per hour.
– Using reserved instances it is 158 x 21 x $0.24 = $796.3 per hour.
– Assuming 85% utilization, that amounts to 2256.24 x 24 x 365 x 0.85 = $16.8 million per year,
~7 times our expected electricity bill for a highly efficient datacenter.
– With reserved instances: 2800/3 x 158 x 21 + 2654.4 x 24 x 365 x 0.24 ≈ $8.7 million per annum.
– With the cost of building a datacenter included, the cloud costs more after 4 (on-demand) or
9 (reserved) years – fewer for more racks.
– But sporadic use is very well suited economically to the use of clouds.
– Gigabit Ethernet limitations for large instance counts.
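The same arithmetic as a sketch; the interpretation of the 2800/3 term as an amortized 3-year reservation fee is a reading of the formula, not stated explicitly on the slide, while the rest uses only the figures quoted above.

```python
# Cloud-vs-datacenter arithmetic from this slide (all prices as quoted above).
instances = 158 * 21                      # 158 racks x 21 nodes per rack = 3318 node-equivalents
on_demand, reserved_rate = 0.68, 0.24     # $/instance-hour
hours_per_year = 24 * 365

hourly_on_demand = instances * on_demand          # $2,256.24 per hour
hourly_reserved  = instances * reserved_rate      # $796.32 per hour

annual_on_demand = hourly_on_demand * hours_per_year * 0.85      # 85% utilization -> ~$16.8M/yr

# Reserved-instance estimate as written on the slide; 2800/3 is presumably a 3-year up-front
# reservation fee amortized per year, and 2654.4 = 3318 x 0.8 corresponds to 80% utilization.
annual_reserved = 2800 / 3 * instances + 2654.4 * hours_per_year * reserved_rate   # ~$8.7M/yr

print(f"on-demand: ${hourly_on_demand:,.2f}/h, ~${annual_on_demand / 1e6:.1f}M/yr")
print(f"reserved:  ${hourly_reserved:,.2f}/h, ~${annual_reserved / 1e6:.1f}M/yr")
```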
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Education
● Using cloud computing for geosciences
education.
– Multi-tiered approach (client-server emphasis)
– Up to one Amazon EC2 instance per student
(full cpu power for each student if needed)
– VNC or other remote visualization approach
– Menu/forms driven models
– Web interface integrating course material with
demonstrations
– Simulations mimicking experiments run in class
● MPI/OpenMP class taught at MIT (IAP 2008-10)
– EC2 and/or VMware image
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Educational uses
● The opportunity to host all of ESSE's computational needs on EC2 allows for a vision of ocean DA
for education.
● CITE (Cloud-computing Infrastructure and Technology for Education) – NSF STCI project.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
Virtual teaching environment

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
LCML/LEGEND
● LCML (Legacy Computing Markup Language)
is an XML Schema based framework for
encapsulating the process of configuring the
build-time and run-time configuration of legacy
binaries alongside constraints.
● It was implemented for ocean/climate models but designed for general applications that use
Makefiles, imake, cmake, autoconf etc. to set up their build-time configuration (not ant).
● LEGEND is a Java-based validating GUI
generator that parses LCML files describing an
application and produces a GUI for the user to
build and run the model.
MIT/EAPS & Mech.Eng.
C. Evangelinos (ce107@computer.org)
LEGEND in action

MIT/EAPS & Mech.Eng.


C. Evangelinos (ce107@computer.org)
