
A Seminar Report On

COEVAL AND DISSEMINATE COMPUTING


In partial fulfillment of requirements for the degree of Bachelor of Technology In Information Technology

SUBMITTED TO Ms. Neha Gupta Seminar Coordinator (Head of Department)

SUBMITTED BY Monika Shekhawat Roll No 08EJCIT042 VIII Semester

DEPARTMENT OF INFORMATION TECHNOLOGY JAIPUR ENGINEERING COLLEGE AND RESEARCH CENTRE JAIPUR, RAJASTHAN 302022 2011-2012

Candidate's Declaration

I hereby declare that the work presented in this seminar, entitled COEVAL AND DISSEMINATE COMPUTING, in partial fulfillment for the award of the Degree of Bachelor of Technology in the Department of Information Technology, submitted to the Department of Information Technology, JAIPUR ENGINEERING COLLEGE AND RESEARCH CENTRE, is a record of my own investigations carried out under the guidance of Mr. Sunil Jangir, Department of Information Technology. I have not submitted the matter presented in this seminar anywhere for the award of any other degree.

Monika Shekhawat B.Tech.( Information Technology) Enrolment No.:

Countersigned by Mr. Sunil Jangir Seminar Guide

Countersigned by Ms. Neha Gupta Head of Department Seminar Coordinator

ACKNOWLEDGEMENT
I would like to place on record my deep sense of gratitude to Ms. Neha Gupta, Head of Department (H.O.D.), Department of Information Technology, Jaipur Engineering College and Research Centre, Jaipur, Rajasthan, India, for her generous guidance, help, and useful suggestions. I would also like to thank Prof. K.K. Agarwal and Asst. Prof. Shyam Sunder Manaktala for their kind support and helpful guidance throughout my seminar. I express my sincere gratitude to Mr. Sunil Jangir, Department of Information Technology, Jaipur Engineering College and Research Centre, Jaipur, India, for his stimulating guidance, continuous encouragement, and supervision throughout the course of the present work.

Monika Shekhawat Roll No. 42 VIII Semester Information Technology

TABLE OF CONTENTS
HEADING
Acknowledgement
Preface
List of Abbreviations (if any)
A-1. .dbm
List of Figures
F-1. RED
F-2. WHITE
F-3. GREEN
F-4. BLUE
F-5. ORANGE
List of Tables
T-1. Student
T-2. Faculty
T-3. Attendance

Chapter Title
1. Introduction
1.1. What coeval & disseminate computing means
1.2. Short Description
1.3. Advantages/Merits
1.4. Disadvantages/Demerits
1.5. Applications
2. Theory/Research in Past
2.1. What is the meaning?
2.2. Why was it introduced?
2.3. Why is there a new need?
3. Implementation
3.1. How to implement?
3.2. Why was the need?
3.3. Different ways
3.4. Other options available
4. Result/Research in Present
4.1. What new feature was added?
4.2. Why was this necessary?
4.3. Other new features?
5. Conclusion/Research in Future
5.1. What more and new?
5.2. Beneficial or not?
5.3. New emerging applications
6. Bibliography
6.1. Research Papers
6.2. PDF files
6.3. Books Referred
6.4. Other sources

1. Introduction
1.1. What coeval & disseminate computing means: The last two decades spawned a revolution in the world of computing: a move away from central mainframe-based computing to network-based computing. Today, servers are fast achieving the levels of CPU performance, memory capacity, and I/O bandwidth once available only in mainframes, at a cost orders of magnitude below that of a mainframe. Servers are being used to solve computationally intensive problems in science and engineering that once belonged exclusively to the domain of supercomputers.

A distributed computing system is a system architecture that makes a collection of heterogeneous computers, workstations, or servers act and behave as a single computing system. In such a computing environment, users can uniformly access and name local or remote resources, and run processes from anywhere in the system, without being aware of which computers their processes are running on. Distributed computing systems have been studied extensively by researchers, and a great many claims and benefits have been made for using such systems. In fact, it is hard to name a desirable feature of a computing system that has not been claimed to be offered by a distributed system.

Distributed computing relies to a large extent on the processing power of the individual nodes of the network. Microprocessor performance has been growing at a rate of 35 to 70 percent per year during the last decade, and this trend shows no indication of slowing down in the current decade. The enormous power of future generations of microprocessors, however, cannot be utilized without corresponding improvements in memory and I/O systems. Research in main-memory technologies, high-performance disk arrays, and high-speed I/O channels is, therefore, critical to efficiently utilizing the advances in processing technology and developing cost-effective, high-performance distributed computing.

1.2. Short Description Parallelism has been successfully used in many domains such as high-performance computing (HPC), servers, graphics accelerators, and many embedded systems. The multicore inflection point, however, affects the entire market, particularly the client space, where parallelism has not previously been widespread. Programs with millions of lines of code must be converted or rewritten to take advantage of parallelism; yet, as practiced today, parallel programming for the client is a difficult task performed by few programmers. Commonly used programming models are prone to subtle, hard-to-reproduce bugs, and parallel programs are notoriously hard to test due to data races, non-deterministic interleavings, and complex memory models. Mapping a parallel application to parallel hardware is also difficult given the large number of degrees of freedom (how many cores to use, whether to use special instructions or accelerators, etc.), and traditional parallel environments have done a poor job of virtualizing the hardware for the programmer. As a result, only the most skilled and performance-driven programmers have been exposed to parallel computing, resulting in little investment in development environments and a lack of trained manpower. There is a risk that while hardware races ahead to ever-larger numbers of cores, software will lag behind and few applications will leverage the potential hardware performance.

The word disseminate in terms such as "disseminate computing", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used:

There are several autonomous computational entities, each of which has its own local memory. The entities communicate with each other by message passing.
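The two defining properties above can be sketched in code. This is a minimal, single-machine stand-in in which thread-safe queues play the role of network links (an illustrative assumption; real distributed entities would run on separate machines):

```python
import threading
import queue

# Two autonomous entities: the main thread and a worker thread. Each
# keeps its own local memory and they interact ONLY by message passing
# through queues (stand-ins for a network channel).

def worker(inbox, outbox):
    total = 0                    # local memory, never shared directly
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel message: shut down
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for value in (1, 2, 3):
    inbox.put(value)             # send messages to the other entity
inbox.put(None)                  # tell it to finish
t.join()
result = outbox.get()            # receive the reply message
print(result)                    # -> 6
```

Because no state is shared, there are no data races here by construction; all coordination happens through the messages themselves.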

1.3. Advantages/Merits

Increased performance. The existence of multiple computers in a distributed system allows applications to be processed in parallel and thus improves application and system performance. For example, the performance of a file system can be improved by replicating its functions over several computers; the file replication allows several applications to access that file system in parallel. Furthermore, file replication distributes the network traffic associated with file access across the various sites and thus reduces network contention and queuing delays.

Sharing of resources. Distributed systems are cost-effective and enable efficient access to all system resources. Users can share special-purpose and sometimes expensive hardware and software resources such as database servers, compute servers, virtual reality servers, multimedia information servers, and printer servers, to name just a few.

Increased extendibility. Distributed systems can be designed to be modular and adaptive, so that for certain computations the system will configure itself to include a large number of computers and resources, while in other instances it will consist of just a few resources. Furthermore, limitations in file system capacity and computing power can be overcome by adding more computers and file servers to the system incrementally.

Increased reliability, availability, and fault tolerance. The existence of multiple computing and storage resources in a system makes it attractive and cost-effective to introduce fault tolerance. The system can tolerate the failure of one computer by allocating its tasks to another available computer. Furthermore, by replicating system functions and/or resources, the system can tolerate one or more component failures.

Cost-effectiveness. The performance of computers has been approximately doubling every two years, while their cost has decreased by half every year during the last decade. Furthermore, emerging high-speed network technology [e.g., wave-division multiplexing, asynchronous transfer mode (ATM)] will make the development of distributed systems attractive in terms of the price/performance ratio compared to that of parallel computers.
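The increased-performance claim can be sketched with a pool of workers processing independent requests in parallel. This is a single-machine thread-pool stand-in for work spread across several computers, and `simulate_request` is a hypothetical request handler, not part of any real system:

```python
import concurrent.futures

# Independent requests handled by a pool of workers in parallel,
# analogous to several replicas of a file system each serving a share
# of the traffic.

def simulate_request(x):
    # Stand-in for a request served by one replica.
    return x * x

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    # map() distributes the requests across workers and preserves order.
    results = list(pool.map(simulate_request, range(8)))

print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The same pattern, with machines in place of threads, is what lets replication spread both computation and network traffic across sites.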

These advantages cannot be achieved easily, because designing a general-purpose distributed computing system is several orders of magnitude more difficult than designing a centralized computing system. Designing a reliable general-purpose distributed system involves a large number of options and decisions, such as the physical system configuration, communication network and computing platform characteristics, task scheduling and resource allocation policies and mechanisms, consistency control, concurrency control, and security, to name just a few. The difficulties can be attributed to many factors related to the lack of maturity in the distributed computing field, the asynchronous and independent behavior of the systems, and the geographic dispersion of the system resources.

1.4. Disadvantages/Demerits

There is a lack of a proper understanding of distributed computing theory; the field is relatively new, and we need to design and experiment with a large number of general-purpose, reliable distributed systems with different architectures before we can master the theory of designing such computing systems. One interesting explanation for the lack of understanding of the design process of distributed systems was given by Mullender, who compared the design of a distributed system to the design of a reliable national railway system, which took a century and a half to be fully understood and mature. Similarly, distributed systems (which have been around for approximately two decades) need to evolve through several generations of different design architectures before their designs, structures, and programming techniques can be fully understood and mature.

The asynchronous and independent behavior of the system resources and/or (hardware and software) components complicates the control software that aims at making them operate as one centralized computing system. If the computers are structured in a master-slave relationship, the control software is easier to develop and system behavior is more predictable. However, this structure conflicts with the distributed system property that requires computers to operate independently and asynchronously.
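A master-slave control structure of the kind just described can be sketched as a master that hands tasks to workers and, because it holds the global view, reassigns a task when a worker fails. The worker functions and failure model here are illustrative assumptions, not a real protocol:

```python
# Master-slave sketch: the master dispatches each task to workers in
# order and reallocates it when a worker is down. Worker behavior is
# simulated with plain functions; a real system would use networked
# nodes.

class WorkerDown(Exception):
    pass

def broken_worker(task):
    raise WorkerDown("worker unreachable")

def healthy_worker(task):
    return f"done:{task}"

def master(tasks, workers):
    results = []
    for task in tasks:
        for worker in workers:       # try workers in order
            try:
                results.append(worker(task))
                break                # task completed, move on
            except WorkerDown:
                continue             # reallocate to the next worker
        else:
            raise RuntimeError(f"no worker could run {task!r}")
    return results

results = master(["t1", "t2"], [broken_worker, healthy_worker])
print(results)  # -> ['done:t1', 'done:t2']
```

The predictability comes from the master's single point of control, which is exactly what clashes with the requirement that nodes operate independently and asynchronously.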

The use of a communication network to interconnect the computers introduces another level of complexity. Distributed system designers not only have to master the design of the computing systems and system software and services, but also have to master the design of reliable communication networks, how to achieve synchronization and consistency, and how to handle faults in a system composed of geographically dispersed, heterogeneous computers. The number of resources involved in a system can vary from a few to hundreds, thousands, or even hundreds of thousands of computing and storage resources. Despite these difficulties, success has so far been limited to special-purpose distributed systems such as banking systems, online transaction systems, and point-of-sale systems. However, the design of a general-purpose, reliable distributed system that has the advantages of both centralized systems (accessibility, management, and coherence) and networked systems (sharing, growth, cost, and autonomy) is still a challenging task.

1.5. Applications

Telecommunication networks:
o Telephone networks and cellular networks.
o Computer networks such as the Internet.
o Wireless sensor networks.
o Routing algorithms.

Network applications:
o World Wide Web and peer-to-peer networks.
o Massively multiplayer online games and virtual reality communities.
o Distributed databases and distributed database management systems.
o Network file systems.
o Distributed information processing systems such as banking systems and airline reservation systems.

Real-time process control:
o Aircraft control systems.
o Industrial control systems.

Parallel computation:
o Scientific computing, including cluster computing, grid computing, and various volunteer computing projects; see the list of distributed computing projects.
o Distributed rendering in computer graphics.
o Weather forecasting.
o Grocery shops and malls.

2. Theory/ Research in Past


2.1. What is meant by Coeval & Disseminate computing? In simple terms, coeval computing deals with the development of programs where multiple concurrent processes cooperate in the fulfillment of a common task. Many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing further frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.

Disseminate computing deals with the development of applications that execute on different computers interconnected by networks. Disseminate computing in local networks is also called cluster computing, while in wide-area networks we nowadays talk about grid computing. In the World Wide Web, web services implement globally distributed applications.

If a single computer can change your life, why not connect several of them? Over the past two decades, networks of computers have further changed the way businesses operate and the way the government functions. From college computer LANs to the Internet that we take for granted, networks have been another great technological advancement. Disseminate computing is the next step in computer progress, where computers are not only networked, but also smartly distribute their workload across each computer so that they stay busy and don't squander the electrical energy they feed on. This setup rivals even the fastest commercial supercomputers built by companies like IBM or Cray. When we combine the concept of distributed computing with the tens of millions of computers connected to the Internet, we've got the fastest computer on Earth.
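The principle that large problems can be divided into smaller ones solved concurrently can be sketched as a parallel sum over chunks; threads on one machine stand in here for the separate processors or nodes a real system would use:

```python
import threading

# Divide-and-conquer parallelism: split one large problem (summing a
# range of numbers) into chunks, solve each chunk concurrently, then
# combine the partial results.

def partial_sum(chunk, out, idx):
    out[idx] = sum(chunk)        # each worker writes only its own slot

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]
partials = [0] * len(chunks)

threads = [threading.Thread(target=partial_sum, args=(c, partials, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for every sub-problem to finish

total = sum(partials)            # combine step
print(total)                     # -> 499500
```

The same split/solve/combine shape underlies cluster and grid computing, where each chunk would be shipped to a different machine over the network.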
