An Inquiry Report On



GRID COMPUTING

Submitted By

Jeevan Kumar Vishwakarman

vishnudhoodhan@gmail.com | +91 9020 321 091

© Jeevan Kumar Vishwakarman. All rights reserved. This report is a collection of information drawn from various sources, including the internet, textbooks, and newspaper publications. The compilation is the sole work of Mr. Jeevan Kumar Vishwakarman; reuse of this work in any form is prohibited, all rights to this collection are reserved to him, and violation of this prohibition is punishable under the applicable laws.

1st M.C.A |M9 MCA AA 0010
Sree Saraswathy Thyagaraja College, Thippampatti, Pollachi.

For personal contact: Karampotta, Kozhinjampara, Palakkad, 678555

vishnudhoodhan@gmail.com | vishnudhoodhan@yahoo.

CONTENTS
1. A Gentle Introduction To Grid Computing And Technologies
2. What Is Grid Computing?
3. Grid Computing's Ancestors
4. The Architecture
5. Five Big Ideas
6. Grids Versus Conventional Supercomputers
7. Virtual Organizations
   From passenger jets to chemical spills
8. The Hardware
9. Design Considerations And Variations
10. CPU Scavenging
11. History
12. Father of the Grid
13. Current Projects And Applications
    Fastest virtual supercomputers
14. Definitions
    But what does "high performance" mean?
15. The Death Of Distance
    Faster! Faster!
16. Secure Access
    Security and trust
17. Resource Use
    Middleware to the rescue
18. Resource Sharing
19. But Would You Trust Your Computer To A Complete Stranger?
20. Open Standards
21. Who Is In Charge Of Grid Standards?
22. The Middleware
    Agents, brokers and striking deals
    Delving inside middleware
23. Globus Toolkit
    Globus includes programs such as:
24. National Grids
25. International Grids
26. High-Throughput Problems
27. High-Performance Problems
28. Grid Computing In 30 Seconds
29. The Dream
30. "Gridifying" Your Application
31. Computational Problems
    Parallel calculations
    Embarrassingly parallel calculations
    Coarse-grained calculations
    Fine-grained calculations
    High-performance vs. high-throughput
    And grid computing..?
32. Breaking Moore's Law?
    Nice Idea, But...
33. More On Moore's Law
34. Works Cited
Index


A Gentle Introduction To Grid Computing And Technologies

Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries. This makes Grid application management and deployment a complex undertaking. Grid middlewares provide users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed, most of which are the results of academic research projects. This paper presents an introduction to Grid computing and discusses two complementary Grid technologies: Globus, developed by researchers from Argonne National Laboratory and the University of Southern California, USA, and Gridbus, developed by researchers from the University of Melbourne, Australia. Globus primarily focuses on providing core Grid services, whereas Gridbus focuses on providing user-level Grid services in addition to a utility computing model for the management of grid resources.

What Is Grid Computing?

"Grid computing allows the virtualization of distributed computing and data resources such as processing, network bandwidth and storage capacity to provide a unique system image, granting users and applications access to vast IT capabilities."

Imagine several million computers from all over the world, owned by thousands of different people. Imagine they include desktops, laptops, supercomputers, data vaults, and instruments like mobile phones, meteorological sensors and telescopes. Now imagine that all of these computers can be connected to form a single, huge and super-powerful computer! This huge, sprawling, global computer is what many people dream "the grid" will be. Although "the grid" is still just a dream, grid computing is already a reality. "The grid" takes its name from an analogy with the electrical "power grid": the idea was that accessing computer power from a computer grid would be as simple as accessing electrical power from an electrical grid.

Grid Computing's Ancestors

Grid computing didn't just come out of nowhere. It grew from previous efforts and ideas, such as those listed below:

 Grid computing's immediate ancestor is "metacomputing", which dates back to around 1990. Metacomputing was used to describe efforts to connect US supercomputing centers. Larry Smarr, a former director of the National Center for Supercomputing Applications in the US, is generally credited with popularizing the term.
 Fafner and I-WAY were cutting-edge metacomputing projects in the US, both conceived in 1995. Each influenced the evolution of key grid technologies.
 Fafner (Factoring via Network-Enabled Recursion) aimed to factorize very large numbers, a challenge very relevant to digital security. Since this challenge could be broken into small parts, even fairly modest computers could contribute useful power. Many Fafner techniques for dividing and distributing computational problems were forerunners of technology used for SETI@home and other "cycle scavenging" software.
 I-WAY (Information Wide Area Year) aimed to link supercomputers using existing networks. One of I-WAY's innovations was a computational resource broker, conceptually similar to those being developed for grid computing today. I-WAY strongly influenced the development of the Globus project, which is at the core of many grid activities, as well as the Legion project, an alternative approach to distributed supercomputing.
 Grid computing was born at a workshop called "Building a Computational Grid", held at Argonne National Laboratory in September 1997. Following this, in 1998, Ian Foster of Argonne National Laboratory and Carl Kesselman of the University of Southern California published "The Grid: Blueprint for a New Computing Infrastructure", often called "the grid bible". Ian Foster had previously been involved in the I-WAY project, and the Foster-Kesselman duo had published a paper in 1997, called "Globus: A Metacomputing Infrastructure Toolkit", clearly linking the Globus Toolkit with its predecessor.

Grid computing is a phrase in distributed computing which can have several meanings:

 A local computer cluster which is like a "grid" because it is composed of multiple nodes.
 Offering online computation or storage as a metered commercial service, known as utility computing, "computing on demand", or "cloud computing".
 The creation of a "virtual supercomputer" by using spare computing resources within an organization.
 The creation of a "virtual supercomputer" by using a network of geographically dispersed computers. Volunteer computing, which generally focuses on scientific, mathematical, and academic problems, is the most common application of this technology.

These varying definitions cover the spectrum of "distributed computing", and sometimes the two terms are used as synonyms. This article focuses on distributed computing technologies which are not in the traditional dedicated clusters; otherwise, see computer cluster.

The Architecture

Grid architecture is the way in which a grid has been designed. Functionally, one can also speak of several types of grids:

 Computational grids (including CPU scavenging grids), which focus primarily on computationally-intensive operations.
 Data grids, or the controlled sharing and management of large amounts of distributed data.
 Equipment grids, which have a primary piece of equipment, e.g. a telescope, and where the surrounding grid is used to control the equipment remotely and to analyze the data produced.

A grid's architecture is often described in terms of "layers", where each layer has a specific function. The higher layers are generally user-centric, whereas lower layers are more hardware-centric, focused on computers and networks.

 The lowest layer is the network, which connects grid resources.
 Above the network layer lies the resource layer: actual grid resources, such as computers, storage systems, electronic data catalogues, sensors and telescopes that are connected to the network.
 The middleware layer provides the tools that enable the various elements (servers, storage, networks, etc.) to participate in a grid. The middleware layer is sometimes the "brains" behind a computing grid!
 The highest layer of the structure is the application layer, which includes applications in science, engineering, business, finance and more, as well as portals and development toolkits to support the applications. This is the layer that grid users "see" and interact with. The application layer often includes the so-called serviceware, which performs general management functions like tracking who is providing grid resources and who is using them.

Five Big Ideas

Grid computing is driven by five big areas:

1. Resource sharing: global sharing is the very essence of grid computing.
2. Secure access: trust between resource providers and users is essential, especially when they don't know each other. Sharing resources conflicts with security policies in many individual computer centers, and on individual PCs, so getting grid security right is crucial.
3. Resource use: efficient, balanced use of computing resources is essential.
4. The death of distance: distance should make no difference; you should be able to access computer resources from wherever you are.
5. Open standards: interoperability between different grids is a big goal, and is driven forward by the adoption of open standards for grid development, making it possible for everyone to contribute constructively to grid development. Standardization also encourages industry to invest in developing commercial grid services and infrastructure.

Grids Versus Conventional Supercomputers

"Distributed" or "grid" computing in general is a special type of parallel computing which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected by a conventional network interface, such as Ethernet or the internet. This is in contrast to the traditional notion of a supercomputer, which has many CPUs connected by a local high-speed computer bus.

The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a many-CPU supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various CPUs and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications where multiple parallel computations can take place independently, without the need to communicate intermediate results between CPUs. The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public internet. Conventional supercomputers also create physical challenges in supplying sufficient electricity and cooling capacity in a single location.

There are also differences in programming and deployment, however. It can be costly and difficult to write programs so that they can be run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug programs on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.

Both supercomputers and grids can be used to run multiple parallel computations at the same time, which might be different simulations for the same project, or computations for completely different applications. The infrastructure and programming considerations needed to do this on each type of platform are different.
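The kind of problem that suits a grid best is one whose work units are completely independent. The sketch below (Python; the simulate function and its parameters are invented for the illustration) splits such a workload across a pool of local processes, which here stand in for grid nodes; no intermediate results ever need to pass between them:

    from multiprocessing import Pool

    def simulate(parameters):
        # One self-contained "work unit": it needs no data from any other unit,
        # so it could equally well run on a different machine on a grid.
        x, y = parameters
        return sum((x * i + y) % 7919 for i in range(100_000))

    if __name__ == "__main__":
        work_units = [(x, y) for x in range(10) for y in range(10)]   # 100 independent units
        with Pool(processes=4) as pool:        # 4 local CPUs standing in for grid nodes
            results = pool.map(simulate, work_units)
        print("Finished", len(results), "independent work units")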

Virtual Organizations

Virtual organizations (VOs) are groups of people who share a data-intensive goal. To achieve their mutual goal, people within a VO choose to share their resources, creating a computer grid. This grid can give VO members direct access to each other's computers, programs, files, data, sensors and networks. This sharing must be controlled, secure, flexible, and usually time-limited. The needs of each VO are different. VOs exist for astronomy research, alternative energy research, biology research and more.

From Passenger Jets To Chemical Spills

Many scientists form VOs to pursue their research. For example, a VO formed to develop a next-generation passenger jet will need to run complex computer simulations, testing various combinations of components from different manufacturers, while keeping the proprietary know-how associated with each component hidden from the other consortium members. Another example is an environmental science VO, tasked with managing a chemical spill. This VO will need to analyze local weather and soil models to estimate the spread of the spill and determine its impact. They will need to create a short-term mitigation plan and help emergency response personnel to plan and coordinate the evacuation.

The Hardware

Grids must be built "on top of" hardware, which forms the physical infrastructure of a grid - things like computers and networks. This infrastructure is often called the grid "fabric".

Networks are an essential piece of the grid "fabric". Networks link the different computers that form part of a grid, allowing them to be handled as one huge computer. This idea allows us to access globally distributed resources in an integrated and data-intensive way. Networks are characterized by their size (local, national and international) and throughput (the amount of data transferred in a specific time). Throughput is measured in kbps (kilobits per second, where kilo means a thousand), Mbps (M for mega, a million) or Gbps (G for giga, a billion).

One of the big ideas of grid computing is to take advantage of ultra-fast networks. Grids are built "on top of" high-performance networks, such as the intra-European GEANT network, which has 10 Gbps performance on the network "backbone". This backbone links the major "nodes" on the grid (like national computing centres). One level down from the "backbone" are the network links, which join individual institutions to nodes on the backbone; performance of these is typically 1 Gbps. A further level down are the 10 to 100 Mbps desktop-to-institution network links. Ultra-fast networks also help to minimize latency: the delays that build up as data are transmitted over the internet.

Design Considerations And Variations

One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks. One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
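This redundancy check can be illustrated with a toy sketch (plain Python; the node names and the square-the-number "work" are invented for the example). Each work unit is sent to two randomly chosen nodes, and any unit whose two answers disagree is flagged for investigation:

    import random
    from collections import defaultdict

    nodes = ["node-a", "node-b", "node-c", "node-d"]

    def run_on_node(node, work_unit):
        # Stand-in for remote execution; "node-d" is faulty or malicious
        # and sometimes returns a wrong answer.
        correct = work_unit * work_unit
        if node == "node-d" and random.random() < 0.5:
            return correct + 1
        return correct

    results = defaultdict(dict)
    for unit in range(10):
        for node in random.sample(nodes, 2):      # same unit goes to two different owners
            results[unit][node] = run_on_node(node, unit)

    for unit, answers in results.items():
        if len(set(answers.values())) > 1:        # discrepancy: at least one node is suspect
            print("Work unit", unit, "gave conflicting results:", answers)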

Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.

In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust "client" nodes must place in the central system. For example, Parabon Computation produces grid computing software that operates in a Java sandbox.

Public systems, or those crossing administrative domains (including different departments in the same organization), often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a tradeoff between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.

Various middleware projects have created generic infrastructure to allow various scientific and commercial projects to harness a particular associated grid, or for the purpose of setting up new grids. BOINC is a common one for academic projects seeking public volunteers; more are listed at the end of the article.

CPU Scavenging

CPU-scavenging, cycle-scavenging, cycle stealing, or shared computing creates a "grid" from the unused resources in a network of participants (whether worldwide or internal to an organization). Usually this technique is used to make use of instruction cycles on desktop computers that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.
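The scavenging idea itself is simple enough to sketch. The fragment below (Python, using the third-party psutil package; fetch_work_unit and submit_result are hypothetical placeholders, not part of any real middleware) only asks a project server for work while the local CPU is nearly idle, which is essentially what a screen-saver style client does:

    import time
    import psutil  # third-party package for measuring CPU load

    IDLE_THRESHOLD = 10.0   # percent CPU use below which the machine counts as "idle"

    def fetch_work_unit():
        # Placeholder: ask the project server for the next work unit.
        return {"id": 42, "data": list(range(1000))}

    def submit_result(unit_id, result):
        # Placeholder: report the finished result back to the server.
        print("work unit", unit_id, "->", result)

    while True:  # runs indefinitely, like a background screen-saver client
        if psutil.cpu_percent(interval=5) < IDLE_THRESHOLD:   # owner not using the machine
            unit = fetch_work_unit()
            submit_result(unit["id"], sum(unit["data"]))
        else:
            time.sleep(30)   # back off while the owner is busy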

In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power. Volunteer computing projects use the CPU scavenging model almost exclusively, in order to solve CPU-intensive research problems. Nodes in this model are also more vulnerable to going "offline" in one way or another from time to time, as their owners use their resources for their primary purpose.

History

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid, in Ian Foster and Carl Kesselman's seminal work "The Grid: Blueprint for a New Computing Infrastructure". CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide.

The ideas of the grid (including those from distributed computing, object-oriented programming, cluster computing, web services and others) were brought together by Ian Foster, Carl Kesselman and Steve Tuecke, widely regarded as the "fathers of the grid." They led the effort to create the Globus Toolkit, incorporating not just CPU management (examples: cluster management and cycle scavenging) but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise grid.

Father Of The Grid

An exact extraction of an article by Amy M. Braverman in the University of Chicago Magazine (April 2004, Volume 96, Number 4).

"Computer scientist Ian Foster has developed the software to take shared computing to a global level."

"I might not be too articulate today," the Arthur Holly Compton distinguished service professor of computer science warns. "I'm on two hours' sleep," he says in the quick cadence of a native New Zealander. The previous night a west coast student's paper was due at midnight, Pacific Time; awake anyway, he worked online with some European colleagues. And because the "father of grid computing" is also—with wife Angela Smyth, MD'00, a hospital psychiatry fellow—the father of a five- and a six-year-old, he rarely gets to sleep in.

In a bare Research Institutes building room with white, cinder-block walls, blinds shut to block the window's glare, Ian Foster sits at a red table holding his laptop, eyes glazed behind wire-rimmed glasses. So when asked to predict how grid computing will change everyday life in five, ten, 15 years, he thinks for a moment but comes up short. "I'm not feeling very creative right now." But Foster, 45, who heads the distributed systems lab at Argonne National Laboratory, clearly has had more inspired moments, persuading the federal government to invest in several multimillion-dollar grid-technology projects and convincing companies such as IBM, Hewlett-Packard, Oracle, and Sun Microsystems that grids are the answer to complex computational problems—the next major evolution of the internet.

Just as the internet is a tool for mass communication, grids are a tool for amplifying computer power and storage space. By linking far-flung supercomputers, servers, storage systems, and databases across existing internet lines, grids allow more numbers to be crunched faster than ever before. Eventually, Foster says, a huge global grid—"the grid," akin to "the internet"—will perform complex tasks such as designing semiconductors or screening thousands of potential pharmaceutical drugs in an hour rather than a year.

Several grid projects exist today. Though corporations recently have begun to show interest in grids, research institutions have long been a ripe testing ground, in the same way that the internet sprouted in academia before blossoming in the commercial world. Large projects are already using the technology. The Sloan Digital Sky Survey—an effort at Chicago, Fermilab, and 11 other institutions to map a quarter of the night sky, determining more than 100 million celestial objects' positions and absolute brightness—harnesses computer power from labs nationwide to perform in minutes scans that previously took a week. The National Digital Mammography Archive (NDMA) in the United States and eDiamond in the United Kingdom are creating digital-image libraries to hold their respective countries' scans.

With an expected 35 million U.S. mammograms a year, "at 160 megabytes per exam," the NDMA web site explains, "the annual volume could exceed 5.6 petabytes [a petabyte is 1 million gigabytes] a year, and the minimal daily traffic a day is expected to be 28 terabytes [a terabyte is 1,024 gigabytes]"—traffic that wouldn't be possible without a grid. By combining computer power and storage space from multiple locations, doctors can view a patient's progress over time, compare her with other patient populations, or access diagnostic tools. A similar venture, the Biomedical Informatics Research Network, compiles brain images from different databases so researchers can compare the brains of Alzheimer's patients to those of healthy people.

Still another project is a grid for the Network for Earthquake Engineering Simulation (NEES). An $82 million program funded by the National Science Foundation, to be completed in October, NEES seeks to advance earthquake-engineering research and reduce the physical havoc earthquakes create. The grid links civil engineers around the country with 15 sites containing equipment such as 4-meter-by-4-meter shake tables or tsunami simulators. Through the grid, engineers building San Francisco's new Bay Bridge tested their design structures remotely to make sure they met the latest earthquake-resistance standards. At Argonne, a NEESgrid partner, an 18-square-inch mini shake table sits in material scientist Nestor Zaluzec's office. A researcher in, say, San Diego can activate the mini shake table, moving it quickly back and forth to agitate the 2-foot-tall plastic model sitting on it. Likewise, from his desktop Zaluzec can maneuver video cameras in places like Boulder, Colorado, or Champaign, Illinois, to watch or participate in experiments.

With the access grid, scientists nationwide convene in a virtual conference room. By now the access grid, developed by Argonne's Futures Lab for remote group collaboration and first used in 1999, has more than 250 research "nodes"—rooms equipped to connect—on five continents. A major automobile company and some oil and gas companies have developed their own access grids. At Argonne even some meetings about grids are held using grids, from large groups such as a 2002 National Science Foundation meeting, where 28 sites popped in, to smaller Thursday test cruises held to keep the system bug-free, notes Futures Lab research manager and computer-science doctoral student Mike Papka, SM'02. At these sessions access grid programmers Susanne Lefvert and Eric Olson sit at personal computers, used for software development and demonstration, talking with wall-projected images of scientists from other Energy Department labs, including the Princeton Plasma Physics Lab and Lawrence Berkeley National Lab, projected Star Trek–like on a white Argonne wall.

Chicago researchers also are experimenting with the technology.

Last fall Jonathan Silverstein, assistant professor of surgery and senior fellow in the joint Argonne/Chicago Computation Institute, along with Chicago anesthesiologist Stephen Small and Argonne/Chicago computer scientist Rick Stevens, won a National Institutes of Health contract to install access grid nodes at the U of C hospitals. Connecting operating rooms, ambulances, the emergency room, radiology, and residents' hand-held tablet PCs, the three-year prototype project could change the way hospitals process information. "We are in all these complex environments," Silverstein says. The grid allows medical workers literally to "share environments, eliminate hand-offs, avoid phone tag"—instead of passing messages between multiple physicians or waiting before taking the next step, "we could all meet for one moment" and relay necessary information. Students will watch not only real-time operating-room video feeds but also feeds from laparoscopic devices and robotic surgeons. Radiologists will beam three-dimensional x-ray scans to surgeons—minus middlemen and waiting time.

Then there's the TeraGrid. Launched in 2001 by the National Science Foundation with $53 million, the TeraGrid aims to be "the world's largest, most comprehensive, distributed infrastructure for open scientific research," its web site declares. Beginning with five sites—Argonne, the California Institute of Technology, the University of California, San Diego, the University of Illinois–Urbana-Champaign, and the Pittsburgh Supercomputing Center—the project has since picked up four more partners. To be finished by late September, it will have 20 teraflops (a teraflop equals a trillion operations per second) of computing power and a petabyte of storage space, and it is already used by some projects. Many of its sites already boast "a cross-country network backbone four times faster than the fastest research networks currently in existence," the web page says.

The TeraGrid aims to revolutionize the speed at which science operates, TeraGrid executive director Charlie Catlett says. The multi-institutional MIMD Lattice Computation collaboration, which tests quantum chromodynamic theory and helps interpret high-energy accelerator experiments, uses more than 2 million processor hours of computer time per year—and needs more. NAMD, for instance, a parallel molecular dynamics code designed to simulate large biomolecular systems, has maxed out the fastest system available. On the TeraGrid, such research can move forward.

Sharing resources—a practice known as "distributed computing"—goes back to computers' early days. In the late 1950s and early 1960s researchers realized that the machines, then costing tens or even hundreds of thousands of dollars, needed to be more efficient. Because they spent much time idly waiting for human input, the researchers reasoned, multiple users could share them by doling out that unemployed power.

Today computers are cheaper, but they're still underutilized—"five percent usage is normal," Foster says—which is one reason many companies connect their computers to form unified networks. In a sense grids are simply another variety of distributed computing, similar to clusters. Cluster computing links multiple PCs to replace unwieldy mainframes or supercomputers. In peer-to-peer computing, such as Napster, users who have downloaded specific software can connect to each other and share files. And there's internet computing, now used in many forms, most notably SETI@home, a virtual supercomputer based at the University of California, Berkeley, that analyzes data from Puerto Rico's Arecibo radio telescope to find signs of extraterrestrial intelligence. PC users download SETI@home's screen-saver program, and when their computers are otherwise idle they retrieve data from the internet and send the results to a central processing system.

But a lot had to happen between the grid's earliest inklings and its current test beds. Foster, who switched from studying math and chemistry to computer science at New Zealand's University of Canterbury before earning a doctorate in the field at London's Imperial College, came to Argonne in 1989. Programming specialized languages for computing chemistry codes, for example, he used parallel networks. In 1994 Foster refocused his research to distributed computing. Indeed, research was occurring more and more on an international scale, with scientists from different institutions trying to share data that was growing exponentially. "High-speed networks were starting to appear," he writes in the April 2003 Scientific American, "and it became clear that if we could integrate digital resources and activities across networks, it could transform the process of scientific work."

At a 1995 supercomputing conference Rick Stevens, who also directs Argonne's math and computer-science division, and Thomas A. DeFanti, director of the University of Illinois–Chicago's Electronic Visualization Lab, headed a prototype project, called I-WAY (Information Wide Area Year), that linked 17 high-speed research networks for two weeks. Foster's team developed the software that "knitted" the sites "into a single virtual system," he writes in Scientific American. The concept was quickly put to use.

With Steven Tuecke, today the lead software architect in Argonne's distributed systems laboratory, and Carl Kesselman, now director of the Center for Grid Technologies at the University of Southern California's Information Sciences Institute, he began the Globus project, a software system for international scientific collaboration, creating a common language and tools. In the same way that internet protocols became standard for the web, they envisioned Globus software that would link sites into a "virtual organization," with standardized methods to authenticate identities, authorize specific activities, and control data movement, so users could "log on once, locate suitable computers, reserve time, load application codes, and then monitor their execution." Scientists performed computationally complicated simulations such as colliding neutron stars and moving cloud patterns around the planet.

"It was the Woodstock of the grid," Larry Smarr, the conference's program chair and now director of the California Institute for Telecommunications and Information Technology, told the New York Times last July: "everyone not sleeping for three days, running around and engaged in a kind of scientific performance art." The experience inspired much enthusiasm—and funding. The U.S. Defense Advanced Research Projects Agency gave the Globus project $800,000 a year for three years. In 1997 Foster's team unveiled the first version of the Globus Toolkit, the software that does the knitting. The National Science Foundation, NASA, and the Energy Department began grid projects, with Globus underlying them all.

In 1998 he and his colleagues also began the Global Grid Forum, a group that meets three times a year to adopt basic language and infrastructure standards. Such standards, Foster writes in "What is the Grid?" (July 2002), allow users to collaborate "with any interested party and thus to create something more than a plethora of balkanized, incompatible, noninteroperable distributed systems."

And while Foster and his crew have used an open-source approach to develop the technology, making the software freely available and its code open for outside programmers to read and modify, "success of the grid depends on everyone adopting it," he says, "so it's counterproductive to work in private." The open-source model, much like that used to develop the internet, has proved useful in ferreting out bugs and making improvements. When physicists overloaded one grid system by submitting tens of thousands of tasks at once, for example, the University of Wisconsin helped design applications to manage a grid's many users. Without the open-source approach, the software might not have become the de facto standard for most grid projects, Foster says, and IBM, the Globus Toolkit's sole corporate funder for the past three years, wouldn't have taken such an active role.

The Globus Toolkit, named the "most promising new technology" by R&D Magazine in 2002, a top-ten "emerging technology" by Technology Review in 2003, and given a Chicago Innovation Award last year by the Sun-Times, still needs work to perfect security and other measures. As the technology moves from research institutions, whose data is stored mostly in electronic files, to corporations, which favor databases, the UK's e-science program is developing ways to handle the different systems. Brokerage firm Charles Schwab uses a grid developed by IBM to give its clients real-time investment advice. The computer company also has projects under way with Morgan Stanley and Hewitt Associates. For Foster, the British Computer Society's 2002 Lovelace Medal winner and a 2003 American Association for the Advancement of Science fellow, such corporate ventures are a critical step in making grids, already a powerful scientific tool, important in everyday life.

But when grids will become so ubiquitous remains a big question. In the 1960s MIT's Fernando Corbato, whom Foster calls "the father of time-sharing operating systems," described shared computing as a "utility," meaning computer access would operate like water, gas, and electricity, where a client would connect and pay by usage amount. Today the grid is envisioned similarly. Even on a full night's sleep Foster, today's father figure, hesitates to guess when the grid will be as common as the internet—and as seamless—beyond "that's some way out," happily encouraging his virtual child but not wanting to impose unrealistic expectations. "We haven't nailed down all the standards," he says. "It's a process." There's more to be done, and Foster skipped the March Global Grid Forum meeting in Berlin to talk up grids in his homeland New Zealand. It's a global, multi-industry path he's forging, and if he can't predict where the next generation will head, he's prepared the grid to lead the way.

Current Projects And Applications

Grids offer a way to solve grand challenge problems like protein folding, financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water ("utility computing" is used synonymously).

Grid computing is presently being applied successfully by the National Science Foundation's National Technology Grid, NASA's Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb Co., and American Express.

One of the most famous cycle-scavenging networks is SETI@home, which was using more than 3 million computers to achieve 23.37 sustained teraflops (979 lifetime teraflops) as of September 2001. As of May 2005, Folding@home had achieved peaks of 186 teraflops on over 160,000 machines; as of August 2009 it achieves more than 4 petaflops on over 350,000 machines.

The European Union has been a major proponent of grid computing. Many projects have been funded through the Framework Programme of the European Commission. Many of the projects are highlighted below, but two deserve special mention: BEinGRID and Enabling Grids for E-sciencE.

BEinGRID (Business Experiments in Grid) is a research project partly funded by the European Commission as an integrated project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project will run 42 months, until November 2009. The project is coordinated by Atos Origin. According to the project fact sheet, their mission is "to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using grid technologies". The project is significant not only for its long duration, but also for its budget, which at 24.8 million euros is the largest of any FP6 integrated project. Of this, 15.7 million is provided by the European Commission and the remainder by its 98 contributing partner companies. To extract best practice and common themes from the experimental implementations, two groups of consultants are analyzing a series of pilots, one technical, one business. The results of these cross-analyses are provided by the website it-tude.com.

The Enabling Grids for E-sciencE project, which is based in the European Union and includes sites in Asia and the United States, is a follow-up project to the European DataGrid (EDG) and is arguably the largest computing grid on the planet. This, along with the LHC Computing Grid (LCG), has been developed to support the experiments using the CERN Large Hadron Collider. The LCG project is driven by CERN's need to handle huge amounts of data, where storage rates of several gigabytes per second (10 petabytes per year) are required. A list of active sites participating within LCG can be found online, as can real-time monitoring of the EGEE infrastructure. The relevant software and documentation is also publicly accessible.

Another well-known project is distributed.net, which was started in 1997 and has run a number of successful projects in its history. Until April 27, 2007, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the internet; as of June 2005, the Grid MP ran on about 3,100,000 machines. The NASA Advanced Supercomputing facility (NAS) has run genetic algorithms using the Condor cycle scavenger running on about 350 Sun and SGI workstations.

Fastest Virtual Supercomputers

 BOINC – 525 teraflops (as of 4 June 2007)

Definitions

Today there are many definitions of grid computing:

 The definitive definition of a grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist". The three points of this checklist are:
   Computing resources are not administered centrally.
   Open standards are used.
   Non-trivial quality of service is achieved.
 Plaszczak/Wellner define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
 IBM defines grid computing as "the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across 'multiple' administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements."
 An earlier example of the notion of computing as utility was in 1965 by MIT's Fernando Corbató. Fernando and the other designers of the Multics operating system envisioned a computer facility operating "like a power company or water company".
 Buyya (Dr. Rajkumar Buyya is a Senior Lecturer and the StorageTek Fellow of Grid Computing in the Department of Computer Science and Software Engineering at the University of Melbourne, Australia) defines a grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
 CERN, one of the largest users of grid technology, talks of the grid as "a service for sharing computer power and data storage capacity over the internet."
 ServePath.com defines grid computing as "the simultaneous application of multiple computers to a problem that typically requires access to significant amounts of data or a large number of computer processing cycles. Grid computing is quickly gaining popularity due to its ability to maximize the efficiency of computing sources as well as its ability to solve large problems with considerably less computing power".
 Pragmatically, grid computing is attractive to geographically-distributed non-profit collaborative research efforts like the NCSA bioinformatics grids such as BIRN: external grids.
 Grid computing is also attractive to large commercial enterprises with complex computation problems who aim to fully exploit their internal computing power: internal grids.

Grids can be categorized with a three-stage model of departmental grids, enterprise grids and global grids. These correspond to a firm initially utilising resources within a single group, i.e. an engineering department connecting desktop machines, clusters and equipment. This progresses to enterprise grids, where non-technical staff's computing resources can be used for cycle-stealing and storage. A global grid is a connection of enterprise and departmental grids that can be used in a commercial or collaborative manner.

But What Does "High Performance" Mean?

Performance is measured in flops. A flop is a basic computational operation, like adding two numbers together. A gigaflop is a billion flops, or a billion operations per second.

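To give a feel for these units (the machine speeds below are invented round numbers, not measurements of any real system), a short back-of-the-envelope calculation shows how long a fixed amount of work takes at different sustained speeds:

    # How long does a job of 10^15 basic operations take at different sustained speeds?
    operations = 1e15
    for name, ops_per_second in [("1 gigaflop machine", 1e9),
                                 ("1 teraflop cluster", 1e12),
                                 ("20 teraflop grid", 20e12)]:
        seconds = operations / ops_per_second
        print(f"{name}: {seconds:,.0f} seconds")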
The Death Of Distance

Computing grids use international networks to link computing resources from all over the world. This means you can sit in France and use computers in the U.S., or work from Australia using computers from Taiwan. Ten years ago, it would have been stupid to send large amounts of data across the globe for processing on other computer resources, because of the time taken to transfer the data. Today, all this is possible and more! Such international grids are possible because of the impressive development of networking technology. Pushed by the internet economy and the widespread penetration of optical fibers in telecommunications systems, the performance of wide area networks has been doubling every nine months or so over the last few years. That translates to a 3000x improvement in 15 years. Imagine if cars had made the same improvements in speed since 1985… you could easily go into orbit by pressing down hard on the accelerator!

Faster! Faster!

Some researchers have computing needs that make even the fastest connections seem slow: some scientists need even higher-speed connectivity, up to tens of gigabits per second (Gbps). Others need ultra-low "latency", which means there is minimal delay when sending data to remote colleagues in "real time". Other researchers want "just-in-time" delivery of data across a grid, so that complicated calculations requiring constant communication between processors can be performed. To meet such critical requirements, several high-performance networking issues have to be solved, including the optimization of transport protocols and the development of technical solutions such as high-performance Ethernet switching. To avoid communication bottlenecks, grid developers also have to determine ways to compensate for failures, like transmission errors or PC crashes.

Secure Access

Secure access to shared resources is one of the most challenging areas of grid development. To ensure secure access, grid developers and users need to manage three important things:

 Access policy - what is shared? Who is allowed to share? When can sharing occur?
 Authentication - how do you identify a user or resource?
 Authorization - how do you determine whether a certain operation is consistent with the rules?

Grids need to efficiently keep track of all this information, which may change from day to day. This means that grids need to be extremely flexible, and have a reliable accounting mechanism. Ultimately, such accounting will be used to decide pricing policies for using a grid. These accounting challenges are not new - the same questions arise whenever you use your credit card in a café. Imagine if the owner of a café were to lend some tables to another café: how would you securely track customers, orders and payments?
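The three questions above can be made concrete with a toy example (the policy table, user names and resource names are all invented; real grid security relies on certificates and much richer policy languages). A site first authenticates the identity, then checks the requested operation against its access policy:

    # Hypothetical access policy: who may do what on which resource.
    POLICY = {
        "cluster-01": {"alice": {"submit-job", "read-data"},
                       "bob":   {"read-data"}},
    }
    AUTHENTICATED = {"alice", "bob"}   # identities already verified, e.g. by certificate

    def is_allowed(user, resource, operation):
        if user not in AUTHENTICATED:                          # authentication
            return False
        allowed = POLICY.get(resource, {}).get(user, set())    # access policy lookup
        return operation in allowed                            # authorization

    print(is_allowed("alice", "cluster-01", "submit-job"))     # True
    print(is_allowed("bob", "cluster-01", "submit-job"))       # False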

Security And Trust

The issue of security is linked to trust: you may trust the other users, but do you trust that your data and applications are securely protected on their shared machines? Without adequate security, someone could read or modify your data - hence the warnings about security when you use your credit card on the internet. The issue of security concerns all information technologies and is taken very seriously. New security solutions are constantly being developed, including sophisticated data encryption techniques. But it is a never-ending race to stay ahead of malicious hackers. Grid users, moreover, must share resources, and so grids require new solutions.

Resource Use

Grids allow you to efficiently and automatically spread your work across many computer resources. Imagine if you had to do 1000 difficult maths questions. You could do them yourself, or you could use a computing grid. If you used a grid of 100 computers, you would give one question or "job" to each computer. When a computer finished one "job", it would automatically ask for another. In this way, your 1000 questions could be finished in a flash, with all 100 computers working to full efficiency. The result? Your jobs are finished much faster. But grids are shared resources, right? So what happens when there is a queue of people waiting to use a computing grid? How do you decide whose "job" is next in line?

Middleware To The Rescue

Computing grids rely on middleware - special grid computing software - to allocate jobs efficiently. Middleware uses information about the different "jobs" submitted to each queue to calculate the optimal allocation of resources. To do this, we ideally need to know how many jobs are in each queue, and how long each job will take. This doesn't work perfectly yet, but then, neither did the web in its early days (remember when they called it the World Wide Wait?!).
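A minimal sketch of this job-farming idea (plain Python; the worker threads below stand in for the 100 computers, and squaring a number stands in for the real calculation) shows how each machine simply asks for another job as soon as it finishes one:

    from queue import Queue, Empty
    from threading import Thread

    jobs = Queue()
    for question in range(1000):           # the 1000 difficult maths questions
        jobs.put(question)

    def node(name):
        # Each "node" pulls the next waiting job as soon as it finishes the previous one.
        while True:
            try:
                q = jobs.get_nowait()
            except Empty:
                return                      # queue drained: nothing left to do
            _answer = q * q                 # stand-in for the real calculation

    workers = [Thread(target=node, args=(f"node-{i}",)) for i in range(100)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("All 1000 jobs finished")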

Resource Sharing

Resource sharing is the crux of grid philosophy - but grid computing is not about getting something for nothing. Grid computing aims to involve everyone in the advantages of resource sharing and the benefits of increased efficiency.

 Grids give you shared access to extra computing power.
 A grid can also give you direct access to remote software, computers and data.
 A grid can even give you access and control of remote sensors, telescopes and other devices that do not belong to you.

But Would You Trust Your Computer To A Complete Stranger?

What about your car? A computing grid is a bit like a car pool - sometimes you share your car with other people, other times they share their car with you. These people could be strangers, but if they are part of the same car pool organization as you, you will generally trust each other at some level. So there is trust, and there are mechanisms to deal with breach of trust: if you are always late, the others will complain and eventually kick you out of the car pool. Grids are kind of the same. For example, when someone decides to share their computing resources on a grid, they will normally put conditions on the use of those resources, specifying limits on which resources can be used when, and what can be done with them. Grid resources are owned by many different people who run different software, exist in different administrative domains, and use different systems for security and access. This presents a major challenge.

Open Standards

By standardizing the way we create computing grids, we're one step closer to making sure all the smaller grids can connect together to form larger, more powerful grid computing resources. Adopting open, common standards for grid computing might sound obvious. But "standard" can often be equated with "average" or "boring": how can you innovate or invent when you're bound by standards and regulations? How can you push the boundaries when you're stuck inside a box? Yet how can you create something on a grand scale—something that can slot together with other grand things—unless you create something interoperable, something standard? And when was the last time you needed a ¼ inch screw and only had metric screws available? Have you ever blown up a 120V machine by accidentally sticking it into 240V mains? So much for "universal" standards! The sticky question is, which standards should be used for grid computing? There are hundreds of software developers working to create dozens of different grids, and each of these developers has their own views on what makes a good standard. While they work, technology continues to evolve and provides new tools that need to be integrated within the existing grid machinery, which may require revising the standards.

Who Is In Charge Of Grid Standards?

The Open Grid Forum is a standards body for the grid community. With more than 5000 volunteer members, this body is a significant force for setting standards and community developments.

The Middleware

"Middleware" is the software that organizes and integrates the resources in a grid. Middleware is made up of many software programs, containing hundreds of thousands of lines of computer code. Together, this code automates all the "machine to machine" (M2M) interactions that create a single, seamless computational grid.

Delving Inside Middleware

There are many layers within the middleware layer. For example, middleware includes a layer of "resource and connectivity protocols" and a higher layer of "collective services".

Agents, Brokers And Striking Deals

Within middleware, some programs act as "agents" and others as "brokers". Agent programs present "metadata" (data about data) that describes users, data and resources. Broker programs undertake the M2M negotiations required for user authentication and authorization, and then strike the "deals" for access to, and payment for, specific data and resources. Once a deal is set, the broker schedules the necessary computational activities and oversees the data transfers. At the same time, special "housekeeping" agents optimize network routings and monitor quality of service. And all this occurs automatically, in a fraction of the time that it would take humans at their computers to do manually.
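A toy sketch can make the agent/broker split clearer. Everything below is invented for illustration - the resource names, metadata fields and matching rule are not the interface of any real middleware - but it shows the kind of metadata an agent might advertise and the kind of match a broker might strike.

    # Metadata an "agent" might publish about two resources (invented values).
    catalogue = [
        {"name": "cluster-a", "cpus": 64,  "site": "uni-x", "allows": {"physics"}},
        {"name": "cluster-b", "cpus": 256, "site": "lab-y", "allows": {"physics", "biology"}},
    ]

    def broker(request, catalogue):
        # Pick the first resource whose metadata satisfies the request.
        # A real broker would also authenticate the user, negotiate payment
        # and schedule the work; here we only do the matching step.
        for resource in catalogue:
            if request["vo"] in resource["allows"] and resource["cpus"] >= request["min_cpus"]:
                return resource["name"]
        return None

    print(broker({"vo": "biology", "min_cpus": 128}, catalogue))   # -> cluster-b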

Resource and connectivity protocols handle all grid-specific network transactions between different computers and grid resources, enabling exchange of data. This is done with communication protocols, which allow the resources to communicate with each other, and authentication protocols, which provide secure mechanisms for verifying the identity of both users and resources. For example, computers contributing to a particular grid must recognize grid-relevant messages and ignore the rest.

The collective services are also based on protocols: information protocols, which obtain information about the structure and state of the resources on a grid, and management protocols, which negotiate uniform access to the resources. Collective services include:

- Updating directories of available resources
- Brokering resources (which, like stock broking, is about negotiating between those who want to "buy" resources and those who want to "sell")
- Monitoring and diagnosing problems
- Replicating data so that multiple copies are available at different locations for ease of use
- Providing membership/policy services for tracking who is allowed to do what and when
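As a rough illustration of the "information protocol" idea - keeping a directory of available resources up to date - here is a small invented sketch. No real grid protocol is used; the poll is faked locally, but the shape of the bookkeeping is the same: ask each resource about its state, record the answer, and let other services query the directory.

    import time

    def poll_resource(name):
        # Stand-in for an information-protocol query to one resource;
        # a real grid would make a network call and get real figures back.
        return {"name": name, "free_cpus": sum(map(ord, name)) % 32, "seen": time.time()}

    def update_directory(directory, resource_names):
        # Collective service: refresh the directory of available resources.
        for name in resource_names:
            directory[name] = poll_resource(name)
        return directory

    directory = update_directory({}, ["cluster-a", "cluster-b", "cluster-c"])
    print([r["name"] for r in directory.values() if r["free_cpus"] > 0])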

Globus Toolkit

The Globus Toolkit is a popular example of grid middleware. It is a set of tools for constructing a grid, covering security measures, resource location, resource management, communications and so on. It is being developed by the Globus Alliance, a team primarily involving Ian Foster's group at Argonne National Laboratory and Carl Kesselman's group at the University of Southern California in Los Angeles. Many of the protocols and functions defined by the Globus Toolkit are similar to those in networking and storage today, but have been optimized for grid-specific deployments. Many major grid projects use the Globus Toolkit.

Globus includes programs such as:

- GRAM (Globus Resource Allocation Manager): figures out how to convert a request for resources into commands that local computers can understand
- GSI (Grid Security Infrastructure): authenticates users and determines their access rights
- MDS (Monitoring and Discovery Service): collects information about resources such as processing capacity, bandwidth capacity, type of storage, and so on
- GRIS (Grid Resource Information Service): queries resources for their current configuration, capabilities, and status
- GIIS (Grid Index Information Service): coordinates arbitrary GRIS services
- GridFTP (Grid File Transfer Protocol): provides a high-performance, secure, and robust data transfer mechanism
- Replica Catalog: provides the location of replicas of a given dataset on a grid
- Replica Management system: manages the Replica Catalog and GridFTP, allowing applications to create and manage replicas of large datasets

There are two main reasons for the strength and popularity of the Globus Toolkit:

1. The Globus Toolkit is available under an "open-source" licensing agreement, which means anyone is free to use or improve the software. This is similar to the World Wide Web and the Linux operating system.
2. Grids need to support a wide variety of applications created according to different programming paradigms. Rather than providing a uniform programming model for grid applications, the Globus Toolkit has an "object-oriented approach", providing a bag of services so that developers can choose the services that best meet their needs. The tools can also be introduced one at a time. For example, an application can use GRAM or GRIS without having to necessarily use the Globus security or replica management systems.
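The replica idea is easy to picture with a toy example. The sketch below is not the Globus API - the catalogue contents, file names and helper function are invented - but it shows what a replica catalogue conceptually does: map a logical dataset name to its physical copies, so an application can fetch whichever copy is most convenient.

    # Hypothetical replica catalogue (names and URLs invented for illustration).
    replica_catalogue = {
        "dataset/collisions-2008.dat": [
            {"site": "site-a", "url": "gsiftp://storage.site-a.example/collisions-2008.dat"},
            {"site": "site-b", "url": "gsiftp://storage.site-b.example/collisions-2008.dat"},
        ],
    }

    def pick_replica(logical_name, preferred_site):
        # Return the replica at the preferred site if there is one, else any replica.
        replicas = replica_catalogue.get(logical_name, [])
        for r in replicas:
            if r["site"] == preferred_site:
                return r["url"]
        return replicas[0]["url"] if replicas else None

    print(pick_replica("dataset/collisions-2008.dat", "site-b"))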

National Grids

National grids like those listed below combine national computing resources to create powerful grid computing resources: D-Grid, DutchGrid, EneaGrid, the Fermilab Computing Division, Grid-Ireland, HunGrid, the National Grid Service, NERSC, NorGrid, SweGrid, TeraGrid, the Thai National Grid and TWGrid.

D-Grid (Germany): The first D-Grid projects started in September 2005 with the goal of developing a distributed, integrated resource platform for high-performance computing and related services to enable the processing of large amounts of scientific data and information. Interoperability with other grid infrastructures is currently in operation.

DutchGrid (The Netherlands): DutchGrid is the platform for grid computing and technology in the Netherlands. Open to all institutions for research and test-bed activities, DutchGrid aims to coordinate various grid deployment efforts and to offer a forum for the exchange of experiences on grid technologies.

EneaGrid (Italy): EneaGrid makes use of grid technologies to provide an integrated production environment including all the high performance and high throughput computational resources available in ENEA, the Italian national agency for new technologies, energy and the environment.

Fermilab Computing Division (Fermilab, U.S.): FermiGrid united Fermilab's computing resources into a single grid infrastructure, changing the way that computing was done at the lab by improving efficiency and making better use of resources. The division is now involved in developing and supporting innovative computing solutions and services for Fermilab.

Grid-Ireland (Ireland): Grid-Ireland fosters and promotes grid activities in Ireland.

HunGrid (Hungary): HunGrid is the first official Hungarian virtual organization of EGEE. Its goal is to allow grid users of Hungarian academic and educational institutes to perform the computing activities relevant for their research; the VO thus functions as a catch-all VO for all the Hungarian participants that do not (yet) have an established VO in their respective field of research. It is also an EGEE testing environment for Hungarian research communities that show interest in starting their own virtual organizations.

National Grid Service (UK): The NGS aims to provide coherent electronic access for UK researchers to all computational and data based resources and facilities, involving partners across the country. Join their mailing list for up-to-the-minute NGS action.

NERSC (National Energy Research Scientific Computing Center, U.S.): Users can access several NERSC resources via Globus grid interfaces using X.509 grid certificates. NERSC is part of the Open Science Grid (OSG), and is available to select OSG virtual organizations for compute and storage resources.

NorGrid (Norway): NorGrid aims to establish and maintain a national grid infrastructure in Norway. NorGrid is the Norwegian component in the third phase of the EGEE project.

SweGrid (Sweden): SweGrid is a Swedish national computational resource, consisting of 600 computers in six clusters at six different sites across Sweden. The sites are connected through the high-performance GigaSunet network.

TeraGrid (U.S. supercomputing grid): TeraGrid aims to build and deploy the world's largest, fastest, most comprehensive, distributed infrastructure for open scientific research. It involves partners across the U.S.

Thai National Grid Project (Thailand): The Thai National Grid Project is a national initiative on grid computing funded by the Royal Thai Government through the Software Industry Promotion Agency of the Ministry of Information and Communication Technology.

TWGrid (Taiwan): TWGrid is the Taiwanese grid and a member of global grid projects including EGEE and WLCG. Coordinated by Academia Sinica Grid Computing, TWGrid provides the grid-related technology and infrastructure support for the LHC experiments in Taiwan, as well as working to produce new grid-powered science applications to further international e-science advances.

International Grids

International grids cross national boundaries, spanning cultures, languages, technologies and more to create international resources and power global science using global computing.

These include AP Grid, D4Science, DEISA, EELA-2, EGEE, EGI_DS, EuAsiaGrid, EU-IndiaGrid, GridPP, LCG, NextGRID, NorduGrid, the Open Grid Forum, OGF-Europe, the Open Science Grid, PRAGMA and WINDS.

AP Grid (Asia-Pacific Grid): AP Grid is a partnership for grid computing in the Asia-Pacific region, aiming to share technologies, resources and knowledge in order to build, nurture and promote grid technologies and applications. Partners come from 15 countries in the Asia-Pacific and beyond.

DEISA (Distributed European Infrastructure for Supercomputing Applications): DEISA combines the power of supercomputing centres across Europe to accelerate scientific research.

D4Science (Distributed Collaboratories Infrastructure on Grid Enabled Technology 4 Science): D4Science aims to create grid-based and data-centric e-infrastructures to support scientific research, aiming to ensure the long-term sustainability of the e-infrastructure. It is co-funded by the European Commission until 2010 and involves partners across Europe.

EELA (E-science for Europe and Latin America): EELA aims to provide grid facilities to promote scientific collaboration between Europe and Latin America.

EGEE (Enabling Grids for E-science): EGEE is the largest multi-disciplinary grid infrastructure in the world, bringing together more than 120 organisations to provide scientific computing resources to the European and global research community. EGEE comprises 250 sites in 48 countries and more than 68,000 CPUs available to some 8,000 users, 24 hours a day, 7 days a week.

EGI_DS (European Grid Initiative Design Study): The European Grid Initiative Design Study aims to establish a sustainable grid infrastructure in Europe. Driven by the needs and requirements of the research community, it is expected to enable the next leap in research infrastructures, thereby supporting collaborative scientific discoveries in the European Research Area. EGI_DS includes partners across Europe.

EuAsiaGrid (collaboration between Europe and Asia): EuAsiaGrid aims to pave the way towards an Asian e-science grid infrastructure, in synergy with the other European grid initiatives in Europe and Asia.

EU-IndiaGrid (collaboration between Europe and India): EU-IndiaGrid will bring together over 500 multidisciplinary organisations to build a grid-enabled e-science community aiming to boost R&D innovation across Europe and India.

GÉANT (pan-European gigabit research network): GÉANT provides networking infrastructure to support researchers, as well as an infrastructure for network research. It comprises 27 European national research and education networks. GÉANT aims for high speed connectivity, geographical expansion, global connectivity and guaranteed quality of service.

GridPP (Grid for UK Particle Physics): GridPP is a collaboration of particle physicists and computing scientists from the UK and CERN, who are building a grid for particle physics. The main objective is to develop and deploy a large-scale science grid in the UK for use by the worldwide particle physics community.

LCG (Worldwide LHC Computing Grid): The mission of the LHC Computing Project (LCG) is to build and maintain a data storage and analysis infrastructure for the entire high energy physics community that will use the Large Hadron Collider.

NextGRID (supporting mainstream use of grids): NextGRID aims to enable the widespread use of grids by research, industry and the ordinary citizen, thus creating a dynamic marketplace for new services and products. It is an EU-funded project with multiple partners.

NorduGrid (grids in the Nordic region): NorduGrid is a grid research and development collaboration aiming at the development, maintenance and support of a free grid middleware known as the Advanced Resource Connector (ARC). The collaboration was established by five Nordic academic institutes and is based upon a memorandum of understanding.

Open Grid Forum (international grid standards): The Open Grid Forum is a community-initiated forum of 5000+ people interested in distributed computing and grid technologies. OGF aims to promote and support grid technologies via the creation and documentation of "best practices" - technical specifications, user experiences, and implementation guidelines. It involves more than 400 organizations from 50 countries.

OGF-Europe (European and international grid standards): OGF-Europe works closely with the Open Grid Forum and plays a key role in influencing the drive towards global standardisation efforts and in bringing best practices into the European computing environment.

Open Science Grid (open grid infrastructure for collaborative science): The Open Science Grid Consortium provides an open grid infrastructure for science in the U.S. and beyond. OSG combines resources at many U.S. labs and universities and provides access to shared resources for the benefit of scientific applications.

PRAGMA (Pacific Rim Applications and Grid Middleware Assembly): PRAGMA is an open organization in which Pacific Rim institutions collaborate to develop grid-enabled applications and to deploy the infrastructure throughout the Pacific region. PRAGMA aims to enhance current collaborations and connections, build new collaborations, and formalize resource-sharing agreements.

WINDS (collaboration in Europe, Latin America and the Caribbean): WINDS aims to further develop and support ICT research and development collaboration between Europe, Latin America and the Caribbean by identifying common needs, research issues and opportunities for cooperation, promoting excellent research from the regions in Europe, and proposing a long-term cooperation strategy in the field of ICT research. The www.winds-lac.eu platform is maintained by the WINDS-LA and WINDS-Caribe projects.

High-Throughput Problems

High-throughput applications are problems that can be divided into many independent tasks. Computing grids can be used to schedule these tasks, dealing them out to the different computer processors in the grid. As soon as a processor finishes one task, the next task arrives. After a "time-out" period, unfinished tasks are simply sent elsewhere to be processed. In this way, hundreds of tasks can be performed in a very short time.
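As a rough sketch of that deal-out-and-time-out cycle, here is a toy simulation. The numbers (1000 tasks, 100 processors, a five-step time-out) and the random task lengths are invented, and plain Python stands in for real grid nodes - it only illustrates the bookkeeping, not any real scheduler.

    import random

    def simulate_task_farm(n_tasks=1000, n_processors=100, timeout=5):
        todo = list(range(n_tasks))
        busy = {}                          # processor -> [task, work_left, time_on_task]
        finished = set()
        clock = 0
        while len(finished) < n_tasks:
            for p in range(n_processors):  # every idle processor asks for the next task
                if p not in busy and todo:
                    busy[p] = [todo.pop(0), random.expovariate(0.5), 0]
            clock += 1
            for p in list(busy):
                busy[p][1] -= 1            # one time step of work
                busy[p][2] += 1
                task, work_left, time_on_task = busy[p]
                if work_left <= 0:
                    finished.add(task); del busy[p]   # done - the processor asks for another
                elif time_on_task >= timeout:
                    todo.append(task); del busy[p]    # timed out - send it elsewhere
        return clock

    print("time steps needed:", simulate_task_farm())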

Examples of high-throughput applications include:

- The analysis of thousands of particle collisions in a bid to understand more about our universe, as in the Large Hadron Collider computing grid
- The analysis of thousands of molecules in a bid to discover a drug candidate against a specific malaria protein, as part of the grid-enabled WISDOM project
- The analysis of thousands of protein folding configurations in a bid to discover more efficient ways of packaging drug proteins, using Rosetta software in the Open Science Grid
- The use of volunteer computing to power applications including SETI@home, which aids in the search for extraterrestrial intelligence; FightAIDS@home, which models the evolution of drug resistance and helps to design new anti-HIV drugs; and BRaTS@Home, which works on gravitational ray tracing

These "@home" tasks are totally independent, so it doesn't matter whether some tasks take a long time.

High-Performance Problems

When people talk about "high performance computing" or HPC, they're generally talking about supercomputing. Supercomputers generally deal with computer-centric problems: the secret to solving these problems is "teraflops", as many as possible. Supercomputers are different to computing grids: where grids link computers that are distributed around an institution, country or the world, supercomputers are one giant computer in a single room. Grid computing allows large computational resources to be combined, helping scientists to tackle problems that cannot be solved on a single system, or to solve problems much more quickly.

Typical HPC grid applications include:

- Astrophysics (e.g. simulations of a supernova explosion or black hole collision)
- Automotive/aerospace industry (e.g. simulations of a car crash or a new airplane design)
- Climate modeling (e.g. simulations of a tornado or climate prediction)
- Economics (e.g. modeling the world economy)

Examples of these supercomputing grids are DEISA in Europe and TeraGrid in the U.S.

Grid Computing In 30 Seconds

Grid computing is a service for sharing computer power and data storage capacity over the internet. Grid computing is making big contributions to scientific research, helping scientists around the world to analyze and store massive amounts of data. How is grid computing different from the World Wide Web? Simple: while the web uses the internet to help us share information, grid computing uses the internet to help us share computer power.

The Dream

The grid computing dream began with talk of creating an all-powerful "grid": one grid comprised of many smaller grids joined together, forming a global network of computers that can operate as one vast computational resource. In grid computing reality, there are already hundreds of grids around the world, each one created to help a specific group of researchers or a particular group of users. And across the world, researchers and software engineers are working to bring "the grid" closer to achieving the dream.

"gridification" means adapting applications to include new layers of grid-enabled software. Once gridified. grid users need to "gridify" their applications to run on a grid. so that many subcalculations can be worked on "in parallel". This means that each sub-calculation can be worked on by a different processor. This allows you to speed up your computation. thousands of people will be able to use the same application and run it trouble-free on interoperable grids (like most software. Jeevan Kumar Vishwakarman 35 . asking to extract data. Here are a few that are important to grid technology: Parallel Calculations: Parallel calculations can be split into many smaller sub-calculations. notifying the user when analysis is complete.An Inquiry Report On Grid Computing – a Report Work ''Gridifying'' Your Application An application that ordinarily runs on a stand-alone pc must be "gridified" before it can run on a grid. Just like "webifying" applications to run on a web browser. initiate computations. a gridified data analysis application will be able to:     Obtain the necessary authentication credentials to open the files it needs Query a catalogue to determine where the files are and which grid resources are able to do the analysis Submit requests to the grid. Computational Problems There are many different ways to describe computational problems. and detecting and responding to failures (collective services). and provide results Monitor progress of the various computations and data transfers. For example. there will always be a few bugs here and there).

Embarrassingly Parallel Calculations: A calculation is embarrassingly parallel when each sub-calculation is independent of all the other calculations. For example, analyzing a large databank of medical images is embarrassingly parallel, since each image is independent of the others. Embarrassingly parallel calculations are ideal for high-throughput computing: more loosely coupled networks of computers, where delays in getting results from one processor will not affect the work of the others.

Fine-Grained Calculations: In a fine-grained calculation, each sub-calculation is dependent on the result of another sub-calculation. For example, when calculating the weather, each calculation in one volume of atmosphere is affected by the surrounding volumes. Fine-grained parallel calculations require very clever programming to make the most of their parallelism, so that the right information is available to processors at the right time. Fine-grained calculations are better suited to high-performance computing, which usually involves a big, monolithic supercomputer, or very tightly coupled computer clusters with lots of identical processors and an extremely fast, reliable network between the processors.

Coarse-Grained Calculations: Coarse-grained calculations are often embarrassingly parallel. "Monte Carlo simulations", where you vary the parameters in a model and then study the results, are also coarse-grained calculations.

High-Performance Vs. High-Throughput And Grid Computing

Many interesting problems in science require a combination of fine- and coarse-grained calculations, and this is where grids can be particularly powerful. For example, in the case of complex climate modeling, researchers launch many similar calculations to see how different parameters affect their models. Each calculation is a fine-grained parallel calculation that needs to run on a single cluster or supercomputer. Using a grid, these many independent calculations can be distributed over many different grid clusters, thus adding coarse-grained parallelism and saving a lot of time.
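To see how the two kinds of parallelism combine, here is a small invented sketch: each model run is a fine-grained calculation (every value depends on its neighbours from the previous step, so it stays in one tightly coupled computation), while the parameter sweep around it is coarse-grained and embarrassingly parallel - each run is independent, so on a grid the runs could go to different clusters. Here they simply go to different local processes.

    from multiprocessing import Pool

    def model_run(diffusion):
        # Fine-grained part: each new value depends on its neighbours,
        # so this loop must run as one tightly coupled calculation.
        values = [0.0] * 5 + [100.0] + [0.0] * 5
        for _ in range(100):
            values = [values[0]] + [
                values[i] + diffusion * (values[i - 1] - 2 * values[i] + values[i + 1])
                for i in range(1, len(values) - 1)
            ] + [values[-1]]
        return diffusion, max(values)

    if __name__ == "__main__":
        # Coarse-grained part: many independent runs with different parameters.
        with Pool() as pool:
            for diffusion, peak in pool.map(model_run, [0.05, 0.1, 0.2, 0.3, 0.4]):
                print("diffusion %.2f -> peak %.1f" % (diffusion, peak))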

. or "data storage density is doubling every 12 months". Moore noted that the number of transistors that could be squeezed on to a silicon chip was doubling every year. Today. Jeevan Kumar Vishwakarman 37 . computer processors are not keeping up with data storage and network capacity. it might be said that these trends are "outperforming" Moore's law.. this has been revised to doubling every 18 months. Further. It is best to see Moore's law as simply a metaphor for exponential growth in the performance of IT hardware. comparing different growth rates using Moore's law is often misleading. Using a grid. Each calculation is a fine-grained parallel calculation that needs to run on a single cluster or supercomputer. these many independent calculations can be distributed over many different grid clusters. comparisons are made between different quantities that have nothing to do with Moore's law. researchers launch many similar calculations to see how different parameters affect their models. Moore's law is one of the most misused concepts in computing. Even though Moore's statement was limited to a very specific quantity . This means that processor power grows faster than Moore's law. This ignores a number of trends which Moore's law does not take into account. This could mean that.it is now used for just about everything else in computing.An Inquiry Report On Grid Computing – a Report Work For example. thus adding coarse-grained parallelism and saving a lot of time. improvements in chip architecture and operating systems also make processors more powerful than the mere sum of their transistors. one of the founders of Intel. "Computing power doubles every 18 months" is one common misuse of Moore's observation. Nice Idea. Over time. But. In short. somehow.the number of transistors on a chip . in the case of complex climate modeling. For example. For example. the clock cycle of processors increases along with the increase in the number of transistors per chip. Breaking Moore’s Law? Moore's law was a statement made in 1965 by Gordon Moore. if "network performance is doubling every nine months".

More On Moore's Law

As a result of this exponential growth, the grid concept becomes more feasible with every year that passes: networks become faster and distributed processors can be more tightly integrated. Individual computers also become more powerful, which means that computer grids are able to solve increasingly complex problems. All this computing power helps scientists find solutions to the big questions, like climate change and sustainable power.

Works Cited

Braverman, A. M. (2004). Father of Grid Computing. Retrieved from University of Chicago Magazine: http://uchicago.edu
Buyya, R., & Venugopal, S. (2005). A Gentle Introduction to Grid Computing and Technologies.
Educause Learning Initiative. (2006). 7 things you should know about Grid Computing. Retrieved from educause.edu: www.educause.edu/eli
Grid computing. (2010). Retrieved from GridCafe.org: http://www.gridcafe.org
Grid computing. (2010). Retrieved from Wikipedia: http://www.wikipedia.org
Jacob, B., Brown, M., Fukui, K., & Trivedi, N. (2005, December). Introduction to Grid Computing. Retrieved from IBM RedBooks: http://www.ibm.com/redbooks
Berstis, V. (2005). Fundamentals of Grid Computing. Retrieved from IBM RedBooks: ibm.com/redbooks
