
Blood bank management system using amazon elastic cloud computing

CHAPTER 1 INTRODUCTION
The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from a web browser, while the software and data are stored on servers.

1.1 What is Cloud computing?


Cloud computing has become one of the most discussed IT paradigms of recent years. It builds on many of the advances in the IT industry over the past decade and presents significant opportunities for businesses to shorten time to market and reduce costs by consuming shared computing and storage resources rather than building, operating, and improving infrastructure on their own. The speed of change in markets creates significant pressure on enterprise IT infrastructure to adapt and deliver. As defined by Gartner, cloud computing is "a style of computing where scalable and elastic IT-enabled capabilities are delivered as a service to external customers using Internet technologies." Cloud platforms such as Amazon Web Services, Microsoft Windows Azure, IBM Blue Cloud, and others provide great computational power that can easily be accessed via the Internet at any time, without the overhead of managing large computational infrastructures.

Traditionally, HPC applications have been run on distributed parallel computers such as supercomputers and large cluster computers. Data-intensive HPC applications in particular, such as BLAST, require very large computational and storage resources. For example, the ATLAS experiment, which searches for new discoveries in the head-on collisions of protons of extraordinarily high energy, will typically generate more than one petabyte of raw data per year. In addition, replicas of these data and derived simulation results will increase the data footprint even more. It is difficult for a single domain or organization to manage and finance the resources needed to store and analyze such amounts of data.

Dept Of CSE/BTLIT/BATCH 5


While clouds are currently used mainly for personal and commercial purposes (e.g. for web applications), their large pool of storage and computational resources, high accessibility, reliability, and simple cost model make them very attractive for HPC applications as well.

1.2 What are Data Intensive HPC Applications?


High-Performance Computing (HPC) uses supercomputers and computer clusters to solve advanced computational problems. The term is most commonly associated with computing used for scientific research or computational science. "High-performance computing" arose after the term "supercomputing," and HPC is sometimes used as a synonym for supercomputing.

1.2.1 High-Performance Computing Using Amazon EC2
Organizations of all sizes, from large automotive and pharmaceutical firms to small financial and life-sciences firms, have problems to solve that require processing large amounts of information using applications running highly parallel processes on scalable computing infrastructure. Solving these problems can be constrained by the amount of infrastructure on hand or in budget, which is often insufficient for the capacity or timing needs of the project. Utilizing Amazon EC2 for these large computational problems can alleviate these challenges by providing access to elastic computing resources with the benefits of flexibility and cost efficiency. Amazon EC2 provides resizable compute capacity in the cloud with the flexibility to choose from a number of different instance types to meet your computing needs. Each instance provides a predictable amount of dedicated compute capacity and is charged per instance-hour consumed.


1.3 Objective
This thesis takes the idea a step further and proposes to use the concept of cloud computing (Amazon Web Services) to effectively distribute large amounts of data stored in the Amazon Simple Storage Service (S3) to all of the Elastic Compute Cloud (EC2) compute nodes before or during a run. In a cloud, these data are typically stored in a separate storage service. Distributing data from the storage service to all compute nodes is essentially a multicast operation. The simplest solution is to let all nodes download the same data directly from the storage service.

1.4 General Approach


The basic block diagram of the system is as shown

S3 → EC2 | EC2 | EC2

Figure 1.1: Basic block diagram showing the working of S3 and the EC2 nodes. The data stored in Amazon S3 is distributed to the Amazon EC2 instances; a multicast operation distributes the data from the storage service to all compute nodes.

The algorithm used here consists of two phases:

(1) the scatter phase and (2) the allgather phase. In the scatter phase, the root node (S3) divides the data to be multicast into blocks of equal size, depending on the number of EC2 nodes. These blocks are then sent to each corresponding node using a binomial tree. After all the nodes have received their blocks, they start the allgather phase, in which the missing blocks are exchanged and collected from adjacent EC2 nodes.
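The two phases can be illustrated with a small, self-contained simulation. This is only a sketch of the idea, not the project's networked implementation: the power-of-two node count and the ring-neighbor exchange pattern in the allgather phase are simplifying assumptions made here for illustration.

```java
import java.util.*;

public class ScatterAllgather {
    // Scatter: the root divides the data into one block per node and sends
    // the blocks down a binomial tree; after log2(n) steps, node i holds
    // exactly block i.  Allgather: in each round every node collects the
    // blocks held by an adjacent node on a ring, until all nodes hold all
    // blocks.  The node count n must be a power of two in this sketch.
    public static List<SortedSet<Integer>> run(int n) {
        List<SortedSet<Integer>> have = new ArrayList<>();
        for (int i = 0; i < n; i++) have.add(new TreeSet<>());
        for (int b = 0; b < n; b++) have.get(0).add(b); // root (fed from S3) holds everything

        // Scatter phase over a binomial tree.
        for (int d = n / 2; d >= 1; d /= 2) {
            for (int i = 0; i < n; i += 2 * d) {
                // Node i forwards the blocks destined for nodes i+d .. i+2d-1.
                Iterator<Integer> it = have.get(i).iterator();
                while (it.hasNext()) {
                    int b = it.next();
                    if (b >= i + d && b < i + 2 * d) {
                        have.get(i + d).add(b);
                        it.remove();
                    }
                }
            }
        }

        // Allgather phase: exchange missing blocks with a ring neighbour.
        for (int round = 0; round < n - 1; round++) {
            List<SortedSet<Integer>> next = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                SortedSet<Integer> merged = new TreeSet<>(have.get(i));
                merged.addAll(have.get((i + 1) % n)); // collect neighbour's blocks
                next.add(merged);
            }
            have = next;
        }
        return have;
    }

    public static void main(String[] args) {
        for (SortedSet<Integer> h : run(4)) System.out.println(h);
    }
}
```

Running the sketch with four nodes shows every node finishing with all four blocks, which is the invariant the real scatter/allgather algorithm guarantees.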

1.5 Scope of the Project


We present two efficient algorithms to distribute large amounts of data within clouds. The proposed algorithms combine optimization ideas from multicast algorithms used in parallel distributed systems and in P2P systems to achieve high performance and scalability with respect to both the number of nodes and the amount of data. Each algorithm first divides the data to be downloaded from the cloud storage service over all nodes, and then exchanges the data via a mesh overlay network. Furthermore, the second algorithm uses work stealing to automatically adapt the amount of data downloaded from the cloud storage service to the individual throughput of each node. We have implemented the two algorithms and evaluated their performance in a real cloud environment (Amazon EC2/S3), confirming that the proposed algorithms achieve high and stable performance.
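The effect of work stealing on the download split can be sketched as a tiny event-driven simulation. This is illustrative only: the per-block download times are made-up figures, and the real algorithm steals blocks over the network rather than claiming them from a local queue.

```java
import java.util.*;

public class StealSim {
    // Simulates the steal algorithm's download phase: the blocks to fetch
    // from the storage service form one shared pool, and each node claims
    // the next block as soon as it finishes its previous download.  Faster
    // nodes therefore automatically download more blocks.
    public static int[] claims(int numBlocks, double[] secondsPerBlock) {
        int n = secondsPerBlock.length;
        int[] count = new int[n];
        // Priority queue of (time a node becomes idle, node index).
        PriorityQueue<double[]> idle =
            new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
        for (int i = 0; i < n; i++) idle.add(new double[] {0.0, i});

        int remaining = numBlocks;
        while (remaining > 0) {
            double[] e = idle.poll();
            int node = (int) e[1];
            count[node]++;               // node claims the next block
            remaining--;
            idle.add(new double[] {e[0] + secondsPerBlock[node], node});
        }
        return count;
    }

    public static void main(String[] args) {
        // One fast node (1 s/block) and one slow node (3 s/block), 8 blocks.
        int[] c = claims(8, new double[] {1.0, 3.0});
        System.out.println(Arrays.toString(c));
    }
}
```

With these assumed speeds the fast node ends up downloading three times as many blocks as the slow one, which is exactly the throughput adaptation the second algorithm aims for.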

1.6 Motivation
Many optimization methods and algorithms for multicast operations have been developed for different environments, such as high-performance collective communication algorithms for parallel distributed systems (e.g. clusters and grids) and data streaming and file sharing on P2P systems.

Each of these approaches essentially performs a multicast operation, but has to take different assumptions and settings into consideration depending on the target environment. While cloud platforms are similar to parallel distributed systems, they also share some characteristics with P2P systems. First, cloud computing services generally provide virtualized computing environments that are shared with other cloud users; the available bandwidth within a cloud can therefore change dynamically. Moreover, the underlying physical network topology and the activity of other users are generally unknown.

1.7 Thesis Organization


The rest of the thesis proceeds as follows:
Chapter 2: Literature survey, which reviews existing algorithms and mechanisms and motivates a new approach that overcomes their drawbacks.
Chapter 3: Software requirements specification, giving the hardware and software requirements along with user and external characteristics.
Chapter 4: System design, giving an overall description of the project modules.
Chapter 5: Implementation.
Chapter 6: Testing of the product, along with test cases and results.
Chapter 7: Snapshots of the project.
Chapter 8: Conclusion and future work.

CHAPTER 2 LITERATURE SURVEY


2.1 Amazon Web Services

Amazon Web Services (AWS) is a collection of remote computing services (also called web services) that together make up a cloud computing platform, offered over the Internet by Amazon.com. The most central and well-known of these services are Amazon EC2 and Amazon S3. AWS offerings are accessed over HTTP, using Representational State Transfer (REST) and SOAP protocols, and all services are billed on usage. AWS is a comprehensive cloud computing platform and more than a collection of infrastructure services. With pay-as-you-go pricing, you can save time by incorporating compute, database, storage, messaging, payment, and other services that give you a head start on delivering for your business. All AWS services can be used independently or deployed together to create a complete computing platform in the cloud.

2.1.1 Elastic Compute Cloud (EC2)


Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for the capacity that you actually use. It also provides features such as Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch monitoring, giving developers the tools to build failure-resilient, elastic applications and to isolate themselves from common failure scenarios. Amazon EC2 provides virtualized computational resources that can range from one to hundreds of nodes as needed. The provided resources are virtual machines on top of Xen, called instances. A user first chooses an OS image (an Amazon Machine Image, or AMI) that is stored in S3, and selects the type of instance to run the image on. Various instance types exist, with varying CPU power, memory size, etc. Second, the user selects where to run the instances. All instances are then booted immediately and become active and usable within several tens of seconds.

2.1.2 Simple Storage Service (S3)


Amazon S3 is storage for the Internet. It provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize the benefits of scale and to pass those benefits on to developers. Files can be uploaded and downloaded via standard GET, PUT and DELETE commands over HTTP or HTTPS, sent through a REST or a SOAP API. S3 stores files as objects in a unique namespace called a bucket. Buckets have to be created before putting objects into S3; they have a location and an arbitrary but globally unique name. The size of an object in a bucket can currently range from 1 byte to 5 gigabytes. The S3 API allows users to access a whole object or a specific byte range of it. For example, one could access bytes 10 to 100 of an object with a size of 200 bytes. This API feature is important for our algorithms. The cloud computing business model is to provide as many resources as needed, when needed, to the user. A compute cloud gives the illusion of an infinite resource with relatively high SLA guarantees, as if the user owned the machine. As an overall shared resource infrastructure, it tries to be as efficient as possible, overlaying multiple users onto a single resource in order to control each user's resource consumption while allowing effectively infinite resource allocation. EC2 and S3 provide a simple pay-as-you-go cost model: a user pays for each EC2 instance used, for the total storage space occupied in S3, and for the network bandwidth used by the EC2 instances and S3 objects.
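Because S3 supports byte-range access, each node only needs to fetch its own share of an object from the storage service. The following helper sketches how the inclusive range values could be computed; the method name and the splitting policy (remainder spread over the first ranges) are our own choices, while the "bytes=first-last" form follows the HTTP Range header used by the S3 API.

```java
public class S3Ranges {
    // Computes the inclusive HTTP Range header values ("bytes=first-last")
    // that split an S3 object of the given size over the given number of
    // nodes; any remainder bytes are spread over the first few ranges.
    public static String[] ranges(long size, int nodes) {
        String[] out = new String[nodes];
        long base = size / nodes, rem = size % nodes, start = 0;
        for (int i = 0; i < nodes; i++) {
            long len = base + (i < rem ? 1 : 0);
            out[i] = "bytes=" + start + "-" + (start + len - 1);
            start += len;
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. a 200-byte object split over 4 EC2 nodes
        for (String r : ranges(200, 4)) System.out.println(r);
    }
}
```

Each node would send its own range value with its GET request, so together the nodes download the object exactly once from S3.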

2.1.3 EC2 Instance Types


Standard Instances
Instances of this family are well suited for most applications.
Small Instance (default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit platform.
Large Instance: 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform.
Extra Large Instance: 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform.

Micro Instances
Instances of this family provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited for lower-throughput applications and web sites that periodically consume significant compute cycles.
Micro Instance: 613 MB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform.

High-Memory Instances
Instances of this family offer large memory sizes for high-throughput applications, including database and memory-caching applications.
High-Memory Extra Large Instance: 17.1 GB of memory, 6.5 ECUs (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform.
High-Memory Double Extra Large Instance: 34.2 GB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform.
High-Memory Quadruple Extra Large Instance: 68.4 GB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform.

High-CPU Instances
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
High-CPU Medium Instance: 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of local instance storage, 32-bit platform.
High-CPU Extra Large Instance: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform.

2.2 Papers referred


1. M. Barnett, L. Shuler, S. Gupta, D. G. Payne, R. A. van de Geijn, and J. Watts, "Building a high-performance collective communication library," in Supercomputing, 1994, pp. 107-116.
2. A Toroidal LHC Apparatus Project (ATLAS), http://atlas.web.cern.ch/.

The first paper [1] and the web link [2] show that data-intensive HPC applications such as BLAST and the ATLAS experiment require very large computational and storage resources. The ATLAS experiment, which searches for new discoveries in the head-on collisions of protons of extraordinarily high energy, will typically generate more than one petabyte of raw data per year; in addition, replicas of these data and derived simulation results will increase the data footprint even more.

3. M. Matsuda, T. Kudoh, Y. Kodama, R. Takano, and Y. Ishikawa, "Efficient MPI collective operations for clusters in long-and-fast networks."


4. R. Thakur, R. Rabenseifner, and W. Gropp, "Optimization of collective communication operations in MPICH."

These papers [3][4] present optimization methods for collective operations. In particular, optimization of multicast communication has been researched in message-passing systems like MPI and their collective operation algorithms. Target applications in this environment are mainly HPC applications, so these optimization techniques focus on making a multicast operation as fast as possible.

5. B. Cohen, "Incentives build robustness in BitTorrent."
6. M. Castro, P. Druschel, A. Kermarrec, A. Nandi, A. Rowstron, and A. Singh, "SplitStream: High-bandwidth multicast in cooperative environments."

These papers [5][6] study optimization of multicast communication within the context of P2P overlay networks. Examples include file-sharing applications like BitTorrent and data-streaming protocols like SplitStream. The main focus of multicast communication protocols on P2P systems is robustness: they divide the data to multicast into small pieces that are exchanged with a few neighbor nodes, and all nodes tell their neighbors which pieces they have and request pieces they lack.

7. E. Deelman, G. Singh, M. Livny, B. Berriman, and J. Good, "The Cost of Doing Science on the Cloud: The Montage Example."

This paper shows that the cost of running an application on a cloud depends on the compute, storage, and communication resources it provisions and consumes. Different execution plans of the same application may result in significantly different costs; the paper evaluates the cost of running data-intensive applications on Amazon EC2/S3 under different execution and resource-provisioning plans.

2.3 An Overview of Different Techniques


This section characterizes various multicast methods for large amounts of data on parallel distributed systems and P2P networks, and discusses specific issues in optimizing multicast operations on clouds.

2.3.1 Multicast on Parallel Distributed Systems
For parallel distributed systems, such as clusters and grids, there are many types of optimization methods for collective operations. In particular, optimization of multicast communication has been researched in message-passing systems like MPI and their collective operation algorithms. Target applications in this environment are mainly HPC applications, so these optimization techniques focus on making a multicast operation as fast as possible. The optimization of multicast communication for parallel distributed systems makes several assumptions:
1) Network performance is high and stable.
2) Network topology does not change.
3) Available bandwidth between nodes is symmetric.
Based on these assumptions, optimized multicast algorithms generally construct one or more optimized spanning trees by using network topology information and other monitoring data. The data is then forwarded along these spanning trees from the root node to all others. These multicast techniques are therefore sender-driven (i.e. push-based). For large amounts of data, some optimization algorithms try to construct multiple spanning trees that maximize the available bandwidth of nodes. The data is then divided into small pieces that are transferred efficiently to each node by using pipelining. For example, Stable Broadcast uses depth-first search to find multiple spanning pipeline trees based on estimated network topology information of multiple clusters, and maximizes the available bandwidth of all nodes by reducing the effect of slow-bandwidth nodes.

2.3.2 Overlay Multicast on P2P Systems
Optimization of multicast communication is also studied within the context of P2P overlay networks. Examples include file-sharing applications like BitTorrent and data-streaming protocols like SplitStream. The target environment of these applications differs from parallel distributed systems:
1) Network performance is very dynamic.
2) Nodes can join and leave at will.
3) Available bandwidth between nodes can be asymmetric.
Consequently, the main focus of multicast communication protocols on P2P systems is robustness. They divide the data to multicast into small pieces that are exchanged with a few neighbor nodes. All nodes tell their neighbors which pieces they have and request pieces they lack. Multicast communication in P2P networks is therefore receiver-driven (i.e. pull-based). Receiver-driven multicast can also improve throughput. For example, MOB optimizes multicast communication between multiple homogeneous clusters connected via a WAN. The MOB protocol is based on BitTorrent, and takes node locality into account to reduce the number of wide-area transfers. Nodes in the same cluster are grouped into mobs. Each node in a mob steals an equal part of all data from peers in remote clusters, and distributes the stolen pieces locally. This way, each piece is transferred to each cluster only once, which greatly reduces the amount of wide-area traffic compared to BitTorrent. The incoming data is also automatically spread over all nodes in a mob, which works very well when the NICs of the nodes are the overall bandwidth bottleneck instead of the wide-area links.
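The bandwidth saving behind MOB can be made concrete with a back-of-the-envelope count of wide-area transfers. The cluster and piece counts below are illustrative assumptions, not measurements from the MOB work.

```java
public class MobTraffic {
    // Wide-area transfers when every node in each remote cluster pulls every
    // piece over the WAN itself (worst-case BitTorrent-style behaviour).
    public static int naive(int remoteClusters, int nodesPerCluster, int pieces) {
        return remoteClusters * nodesPerCluster * pieces;
    }

    // Wide-area transfers under MOB: the nodes of a cluster (a "mob") share
    // the stealing, so each piece crosses the WAN into each cluster only
    // once and is then distributed over the fast local network.
    public static int mob(int remoteClusters, int pieces) {
        return remoteClusters * pieces;
    }

    public static void main(String[] args) {
        // 2 remote clusters of 8 nodes each, 100 pieces to distribute.
        System.out.println("naive: " + naive(2, 8, 100));
        System.out.println("mob:   " + mob(2, 100));
    }
}
```

With these assumed numbers the wide-area traffic drops by a factor equal to the cluster size, which is why MOB's locality awareness matters when the wide-area links, not the NICs, are the bottleneck.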

2.3.3 Multicast on Clouds
Cloud services such as Amazon EC2/S3 provide virtualized application execution environments that are becoming increasingly popular as a platform for HPC applications. For example, Deelman et al. evaluated the cost of running data-intensive applications on Amazon EC2/S3 in terms of different execution and resource-provisioning plans. Their target application is Montage, a portable software toolkit for building science-grade mosaics of the sky by composing multiple astronomical images. In this case, if users continuously access the data sets stored on cloud storage many times, it is good to store the generated mosaic, because the storage costs are cheaper than the CPU and data-transfer costs. Some works have shown the computational performance of Amazon EC2 by using LINPACK and the NAS Parallel Benchmarks.

Cloud systems are based on large clusters in which nodes are densely connected, as opposed to P2P environments in which nodes are sparsely connected. Contrary to traditional clusters, the computational and storage resources provided by clouds are fully or partly virtualized. A multicast algorithm for clouds can therefore not assume anything about the exact physical infrastructure. Similar to P2P environments, the network performance within clouds is dynamic. The performance of the uplink and downlink of a virtual compute node can be affected by other virtual compute nodes running on the same physical host. Routing changes and load balancing will also affect network performance. Furthermore, the data to distribute is often located in the cloud's storage service, which introduces another potential shared bottleneck. Existing multicast solutions for parallel distributed systems, like the pipelined trees described in Section 2.3.1, rely on monitoring data, topology information, or a map of the available bandwidth to optimize data transfers for the current network conditions. However, obtaining such information is tedious and hard, and keeping it up to date is almost impossible. Our multicast algorithms therefore apply solutions commonly used in P2P environments to handle the dynamic network performance in clouds. These solutions are much easier to deploy and more reactive to changes in network performance.

2.3.4 The Cloud Usage Model of Data-Intensive Applications
Our multicast algorithms are designed for effective hosting of data-intensive applications on clouds. We assume that a typical data-intensive application is started on multiple compute resources in a cloud in parallel, in order to process a large data set. The data set is stored in the cloud's storage service for flexibility, cost, and durability; storing the data at the user's own site and uploading it to each instance in the cloud would be too costly and slow. Hence, the ideal solution is to let each compute resource initially transfer the data set from the cloud storage service to its local storage before the application starts. In this project, we adopt Amazon EC2/S3 for the aforementioned usage scenario,

although we believe our results would also apply to other clouds. Our objective is to distribute a large amount of data stored on S3 to all EC2 instances of an application, as fast as possible. To keep the usage scenario practical, we assume that the EC2 instances and S3 objects are located in the same region; with the current EC2/S3 infrastructure, transferring data from one region to another would be too costly and slow for processing large amounts of data.

CHAPTER 3 SOFTWARE REQUIREMENTS SPECIFICATION


3.1 Introduction
The complexity and size of software systems are continuously increasing. As the scale changes to more complex and larger software systems, new problems occur that did not exist in smaller systems (or were of minor significance), which leads to a redefinition of the priorities of the activities that go into developing software. As systems grew more complex, it became evident that the goals of the entire system could not be easily comprehended; hence the need for more rigorous requirements analysis arose. In systems engineering and software engineering, requirements analysis encompasses the tasks that go into determining the requirements of the various users. Requirements analysis is critical to the success of the project. Requirements must be measurable, testable, related to identified business needs, opportunities, and specific customers, and defined to a level of detail sufficient for system design. Since the software requirements specification contains both functional and non-functional requirements, the following document states both according to the IEEE standards.

3.2 Specific Requirements


To properly satisfy its basic goals, an SRS should state the different types of requirements; below are the important requirements for developing this system.

3.2.1 Functional Requirements
This section describes the functional requirements of the system, i.e., the services it provides to the user. The system consists of various requirements, which can be broadly classified by functionality as given below.
3.2.1.1 Creation of an Amazon Web Services (AWS) account.

An AWS account provides the cloud computing platform that is essential to use cloud services such as storage and computing resources.
3.2.1.2 Sign up for the Amazon S3 and Amazon EC2 services.
In AWS, we need to sign up for the Amazon S3 and Amazon EC2 services. AWS credentials are verified before the user can sign up for any service.
3.2.1.3 Create a bucket and specify its region and name.
Once the user has signed up for the S3 service, they can perform many operations on it, such as creating a bucket to store data by specifying the bucket name and choosing the region where the bucket and its object(s) reside, in order to optimize latency, minimize costs, or address regulatory requirements.
3.2.1.4 Upload objects into the specified buckets and perform operations.
In Amazon S3, the user can perform many operations, such as uploading objects into a bucket and performing write, read, and delete operations on the S3 objects and their contents.
3.2.1.5 Choose the appropriate AMI and launch EC2 instances.
Once the user has signed up for the EC2 service, they can create an Amazon Machine Image (AMI) containing their applications, libraries, data, and associated configuration settings, and configure security and network access for the Amazon EC2 instance. The user can then choose the instance type and OS for the required EC2 instance and launch the configured instances for the selected AMI.
3.2.1.6 Run the non-steal and steal algorithms on EC2 and get the objects from the S3 bucket.
Once communication is established between S3 and the EC2 instances, we can run both multicast algorithms and transfer all the contents of the objects from the bucket to the EC2 compute nodes.
3.2.2 Non-Functional Requirements


Non-functional (supplementary) requirements pertain to other information needed to produce a correct system and are detailed separately. These requirements are not functional in nature, i.e., they are constraints within which the system must work. The program must be self-contained so that it can easily be moved from one computer to another. It is assumed that the JDK (Java Development Kit) will be available on the computer on which the program resides.
Capacity, scalability and availability. The system shall achieve 100 per cent availability at all times and shall be scalable to support additional clients and volunteers.
Maintainability. The system should be optimized for supportability, or ease of maintenance, as far as possible. This may be achieved through the use and documentation of coding standards, naming conventions, class libraries and abstraction.
Randomness, verifiability and load balancing. The system should have randomness in checking the nodes and should be load balanced.

3.3 User Characteristics


Any layperson must be able to use this product; however, understanding the output in detail requires some technical knowledge of the system and of Amazon Web Services.
3.3.1 User Interface
The graphical user interface follows a common design used by many other OS applications to make it user friendly. It provides

a menu list with the items Create Bucket & Object, Upload, Download, Show, and Share.

The user enters data through the interface provided as the front end, which displays the Amazon S3 operations and serves as the point of interaction between the user and the system. The input and output requirements are given below.

3.3.1.1 Input Requirements
1. If a bucket is to be created, click on the menu item Create Bucket, enter the bucket name, and click on the Create button.
2. If a file is to be uploaded, click on the menu item File Upload, click the Browse button, select the file, and click the OK button.
3. If an object is to be downloaded, select the object and click on the Download button.
4. To get the object list of the neighboring nodes, click on the Show button.
5. To get the missing object contents from the neighboring nodes, click on the Share button.

3.3.1.2 Output Requirements
The EC2 nodes receive the object list from the S3 server, and the object contents should be distributed from the S3 server to the respective EC2 nodes (i.e., the objects or data stored in the S3 bucket should be downloaded to the respective nodes); the nodes then communicate with each other to exchange missing objects and data.


3.4 Design Constraints

This section indicates the design constraints on the system being built. A design constraint represents design decisions that have been mandated and must be adhered to while developing the product.

3.4.1 Hardware Requirements

Processor: Pentium IV or AMD 1.0 GHz or above.
Memory: 1 GB RAM (minimum).
Hard disk space: 3.5 GB (minimum).
Keyboard, mouse and other peripherals.

3.4.2 Software Requirements

Operating system: Windows XP / Vista.
Browser: Google Chrome / IE 7.
Language used: Java 1.5 or higher.
Front end: Java Swing.
Networking: socket programming.
Packages used: AWS Java SDK 1.1.7.1, EC2 API Tools.


CHAPTER 4 SYSTEM DESIGN


4.1 High Level Design
Software can be a complex entity. Its development usually follows what is known as the Software Development Life Cycle (SDLC). The second stage in the SDLC is the design stage, whose objective is to produce the overall design of the software. The design stage involves two sub-stages:


1. High-Level Design.
2. Detailed-Level Design.

In the High-Level Design, the technical architect of the project studies the proposed application's functional & non-functional (quantitative) requirements and designs the overall solution architecture of the application that can handle those needs. High-Level Design gives an overall view of how something should work and of the top-level components that will comprise the proposed solution. It should contain very little implementation detail, i.e. no explicit class definitions, and in some cases not even details such as the database type (relational or object), programming language and platform.

In this chapter we give an overview of the design of the system, how it is organized and the flow of data through the system. By reading this document the user should gain an overall understanding of the problem and its solution. We also discuss the problems encountered during the design of the system and justify the design choices. The Data Flow Diagrams (DFDs) are also given in this chapter.

4.2 Design Considerations


The most important design considerations are as follows:

Create an Amazon Web Services (AWS) account.
Verify & store the Access Key & Secret Key needed to access AWS.
Sign up for the Amazon S3 & Amazon EC2 services.
Create a bucket, specifying the region & bucket name.

Upload objects into the specified buckets & perform operations on them.
Choose the appropriate AMI & launch EC2 instances.
Access objects stored in S3 from the EC2 instances.
Perform the Non-Steal & Steal algorithms in EC2 to get the objects from the S3 bucket.
Finally, all the running EC2 instances will have the contents of the objects.

Amazon S3 was built to fulfill the following design requirements:

Scalable: Amazon S3 can scale in terms of storage, request rate, and users to support an unlimited number of web-scale applications. It uses scale as an advantage: Adding nodes to the system increases, not decreases, its availability, speed, throughput, capacity, and robustness.

Reliable: Store data with up to 99.999999999% durability, with 99.99% availability. There can be no single points of failure. All failures must be tolerated or repaired by the system without any downtime.

Fast: Amazon S3 must be fast enough to support high-performance applications. Server-side latency must be insignificant relative to Internet latency. Any performance bottlenecks can be fixed by simply adding nodes to the system.

Inexpensive: Amazon S3 is built from inexpensive commodity hardware components. All hardware will eventually fail and this must not affect the overall system. It must be hardware-agnostic, so that savings can be captured as Amazon continues to drive down infrastructure costs.

Simple: Building highly scalable, reliable, fast, and inexpensive storage is difficult. Doing so in a way that makes it easy to use for any application anywhere is more difficult. Amazon S3 must do both.

4.3 Data Flow Diagrams



Data flow diagrams are commonly used during problem analysis. They are quite general and are not limited to problem analysis alone during the software requirements specification. Data flow diagrams indicate the flow of data through the system: they view the system as a function that transforms the inputs into the desired outputs. Since a complex system will not perform this transformation in a single step, data must typically undergo a series of transformations before it becomes the output. The DFD aims to capture the transformations that take place within a system on the input data so that eventually the output data is produced.

It should be noted that a DFD is not a flowchart. A DFD represents the flow of data, while a flowchart represents the flow of control; a DFD does not represent procedural information. The agent that performs the transformation of data from one state to another is called a process (denoted by a bubble). The DFD thus shows the movement of data through the various transformations or processes in a system, which gives a clear understanding of how the data flows through the system and therefore helps in understanding its working.

4.3.1 DFD Level 0

The context-level DFD represents the fundamental system model: the entire software element is represented as a single bubble. The level-0 data flow diagram has entities such as the user and the output; valid input is processed, while invalid input is rejected.

Fig 4.1 Level-0 Data flow diagram (Input → AWS → Output).

In this diagram the system is shown to comprise simple entities: the user, the whole system (described as Amazon Web Services, AWS) and the output. Each of these entities has specific work assigned to it, which is performed by the respective modules. The user interacts with the system to provide the input, i.e. the operations to perform on Amazon S3 & EC2. The validity of the input is checked and matched against the AWS access credentials; valid input is accepted, and otherwise suitable action is taken. The AWS module consists of several sub-modules for registering & logging into Amazon S3 & Amazon EC2, each with its own set of responsibilities, and together these sub-modules make up the Amazon Web Services process. Output is shown to the user through the interface, implemented using Java Swing library functions. Thus the process can be completed using the modules described above.

4.3.2 DFD Level 1

The level-1 data flow diagram explains in greater detail the various entities and modules of the project and their interoperability, as well as the various inputs and expected outputs. As can be seen from figure 4.2, there are entities such as User, GUI, AWS, Check_AWS credentials & Output. The role of each module is explained below. The user interacts with the system to provide the input, i.e. the operations to perform on Amazon S3 & EC2. The validity of the input is checked and matched against the AWS access credentials (the access key & secret key stored in files); valid input is accepted, and otherwise suitable action is taken.

Fig 4.2 Level-1 Data flow diagram (User → GUI → AWS → Check_AWS credentials → Output).

The graphical user interface (GUI) is made using Java Swing, which is designed to be small and efficient, yet flexible enough to allow the programmer freedom in the interfaces created. Java Swing lets the programmer use a variety of standard user-interface widgets such as push, radio & check buttons, menus, lists & frames. Check_AWS checks the user's AWS credentials, verifies the access key and secret key, and allows the user to perform operations on Amazon S3 & Amazon EC2 by creating buckets in S3 and uploading files into them. Output is shown to the user by the interfaces, again implemented through the GUI made using Java Swing. Thus the process can be completed using this strategy, comprised of the modules described above.

4.3.3 DFD Level 2


Fig 4.3 Level-2 Data flow diagram (Check AWS → Sign up for Amazon S3 / Sign up for Amazon EC2 → access S3 / EC2 services & functionality).

The level-2 data flow diagram explains in greater detail the various entities and modules and their interoperability, as well as the various inputs and expected outputs. As can be seen in figure 4.3, the input to the diagram comes from the AWS Management Console; here we need to sign up for the Amazon S3 & Amazon EC2 services. AWS credentials are verified before the user can sign up for any service; once users have signed up for these services, they can access their functionality.

4.3.4 DFD Level 3


Fig 4.4 Level-3 Data flow diagram (S3 → create a bucket, selecting a bucket name & region → upload files / objects into the bucket → write, read and delete objects present in the bucket).


The level-3 data flow diagram explains in greater detail the various entities and modules and their interoperability, as well as the various inputs and expected outputs. As can be seen in figure 4.4, the input to the diagram comes from the Amazon S3 service. Once users have signed up for the S3 service, they are allowed to perform many operations on it, such as creating a bucket to store data by specifying the bucket name & choosing a region where the bucket and object(s) will reside to optimize latency, minimize costs, or address regulatory requirements. Users can then upload objects into the bucket and perform write, read & delete operations on the S3 objects & their contents. A few main features of Amazon S3 are:

Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.

Each object is stored in a bucket and retrieved via a unique, developer-assigned key. A bucket can be stored in one of several Regions. You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements. Amazon S3 is currently available in the US Standard, EU (Ireland), US West (Northern California), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions. The US Standard Region automatically routes requests to facilities in Northern Virginia or the Pacific Northwest using network maps.

Objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU.

Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.

Uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.


Built to be flexible so that protocol or functional layers can easily be added. The default download protocol is HTTP. A BitTorrent protocol interface is provided to lower costs for high-scale distribution.

4.3.5 DFD Level 4

Fig 4.5 Level-4 Data flow diagram (EC2 → create an Amazon Machine Image (AMI) and configure security & network access for EC2 → choose the instance type & OS → launch & start instances for the selected AMI).

The level-4 data flow diagram explains in greater detail the various entities and modules and their interoperability, as well as the various inputs and expected outputs. As can be seen in figure 4.5, the input to the diagram comes from the Amazon EC2 service. Once users have signed up for the EC2 service, they can create an Amazon Machine Image (AMI) containing their applications, libraries, data, and associated configuration settings, and configure security & network access for the Amazon EC2 instance. Users can then choose the instance type and OS for the required EC2 instance & launch the configured instances for the selected AMI. Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network's access permissions, and run your image using as many or as few systems as you desire. To use Amazon EC2, you simply:

Select a pre-configured, template image to get up and running immediately. Or create an Amazon Machine Image (AMI) containing your applications, libraries, data, and associated configuration settings.

Configure security and network access on your Amazon EC2 instance. Choose which instance type(s) and operating system you want, then start, terminate, and monitor as many instances of your AMI as needed, using the web service APIs or the variety of management tools provided.

Determine whether you want to run in multiple locations, utilize static IP endpoints, or attach persistent block storage to your instances. Pay only for the resources that you actually consume, like instance-hours or data transfer.

4.4 Structural Design


Structural design is a method that can be applied both to the preliminary and to the detailed design of the software. The objective of the structure chart in the preliminary design is to structure both the higher-ranking control sequences and the actual processing functions in the form of a module hierarchy. The structure chart is the graphical means of representation for structured design, and its basic elements are modules. In the case of software architecture, modules refer to individual subprograms. The representation differentiates between modules, predefined modules, data modules, macros, & multiple-entry-point modules.

By means of structure charts, the calling structure between modules (functions, subprograms) can be represented. These representations include sequence, selection and iteration in connection with module calls. With each call, data & control flows can be listed separately. If required, the call parameters may be further specified in table-like footnotes. Structure charts also permit comments on the modules. To improve the arrangement of large diagrams, relations can also be represented by means of connectors, which make it possible to represent relations beyond the page margins. Structure charts show module structure and calling relationships. In a multi-threaded system, each task (thread of execution) is represented as a structure chart. Large structure charts are leveled into a stack of connected diagrams.

Our project has three modules, namely Bucket Operations, the Non-Steal & Steal Algorithms & EC2 Configuration. The following figure shows the various modules of the project, their interaction with the other modules and their functionality.

Starting from Amazon Web Services, if the AWS credentials match, the user signs up for Amazon S3 (create a bucket by selecting a bucket name & region; upload files / objects into the bucket; write, read, and delete objects present in the bucket) and for Amazon EC2 (create an Amazon Machine Image (AMI) and configure security & network access; choose the instance type & OS; launch & start instances for the selected AMI). Finally, the Non-Steal & Steal algorithms are performed in EC2 to get the objects from the S3 bucket.

Fig: 4.6 Structure design for the project.

4.5 Detailed Design


Once the high-level design is completed, the next task is to perform the detailed design of the software. While the high-level design focuses on the tasks to be performed, the detailed design concentrates on how they can be performed. Detailed design starts after the module specifications are available as part of the system design. The goal of this activity is to develop the internal logic of the modules. Detailed design describes the modules in terms of the data structures used and the algorithms, which explain how the modules are implemented. A major task of detailed design is to spell out in detail the input, output and functionality of each module of the system. This forms the design document.

The design document is a developer's blueprint. It provides precise directions to software programmers about how basic control and data structures will be organized. Design documents are usually written before programming starts; they describe how the software will be structured and what functionality will be included, and form the basis for all future design and coding. Our goal is to develop a model that achieves the functional requirements while operating within key constraints, such as performance goals and hardware. The following sections describe the various modules implemented in the project and their major functionality, along with the expected inputs and outputs. Defining these parameters will help the user as well as the developer get a clear understanding of the project, and this will also help in integration testing.

This project contains the following modules:

Server Module for the Non-Steal & Steal Algorithms.
Client Module for the Non-Steal Algorithm.
Client Module for the Steal Algorithm.

4.6 Server Module for the Non-Steal & Steal Algorithms


This module acts as an interface to the user. Here the user sets up the AWS account, specifies input in the command window, performs operations on Amazon S3 and outputs several details of the S3 server. This form is the starting point of the project. The purpose of this module is to provide the user with a user-friendly mechanism for interacting with the Amazon S3 server.

Functionality: The server application displays the buckets present in the Amazon S3 account and lets the user create a new bucket by specifying the region and the bucket name; the user can upload objects into the selected bucket and perform several operations on that object and the bucket. This can be described using the following pseudocode:

Step 1: Trigger Amazon S3 to display the current bucket listing.
Step 2: Create a new bucket by specifying the region name & bucket name.
Step 3: Upload objects into the bucket & view them.
Step 4: Get the object list from the S3 server & divide it among the EC2 nodes.

Step 5: Run the algorithm to carry out the scatter & allgather phases on the objects.

Input: The input to this module is user commands to carry out Amazon S3 operations.

Output: The output is the results on the GUI showing the S3 tasks.
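Step 4 above, dividing the object list obtained from S3 among the EC2 nodes, can be sketched in plain Java. This is a minimal round-robin split; the class and method names are illustrative, not taken from the actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ObjectDivider {

    /** Splits the S3 object-key list into one sub-list per EC2 node (round-robin). */
    public static List<List<String>> divide(List<String> objectKeys, int nodeCount) {
        List<List<String>> perNode = new ArrayList<>();
        for (int n = 0; n < nodeCount; n++) {
            perNode.add(new ArrayList<String>());
        }
        for (int i = 0; i < objectKeys.size(); i++) {
            // object i goes to node i mod nodeCount
            perNode.get(i % nodeCount).add(objectKeys.get(i));
        }
        return perNode;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("obj0", "obj1", "obj2", "obj3", "obj4");
        List<List<String>> split = divide(keys, 3);
        for (int n = 0; n < split.size(); n++) {
            System.out.println("EC2 node " + n + " -> " + split.get(n));
        }
    }
}
```

In the real system the keys would come from the S3 bucket listing, and each sub-list would be handed to one EC2 instance before the scatter phase begins.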

The server flow is: start Amazon Web Services; display the buckets from the Amazon S3 account; create a new bucket & select a region; upload objects into the specified bucket; send the list of objects present in the selected bucket; divide & distribute the object content to the EC2 nodes using the Non-Steal & Steal algorithms (a scatter phase followed by an allgather phase across the EC2 instances); exchange the missing object content with neighboring EC2 nodes; and stop Amazon Web Services.

Fig 4.7 Flowchart on Server Module showing both Non-Steal & Steal Algorithm

4.7 Client Module for the Non-Steal Algorithm


This module acts as the client interface to the user. Here the user sets up clients that access the objects from the Amazon S3 server. Each client displays the content & size of its object; the other side of the GUI screen shows the content of the objects present at neighboring client nodes. This form displays the operations to be carried out at the clients of the project. The purpose of this module is to provide the user with a user-friendly mechanism for interacting with the Amazon S3 server, dividing the objects and exchanging the remaining objects among the neighboring clients.

Functionality: Clients display the content and size of their objects from the Amazon S3 bucket. The contents of the neighboring client nodes are displayed on the GUI. The contents of the objects stored in the specified bucket are exchanged by executing the non-steal algorithm.

Input: The input to this module is user commands to carry out Amazon S3 operations and check object contents.

Output: The output is the creation of text files that have all the contents of the objects. All the nodes will have the object contents after the non-steal algorithm is executed.

The client flow is: start Amazon Web Services; store the access key and secret key from Amazon Web Services; create a new bucket & select a region; upload objects into the specified bucket; retrieve the list of objects present in the selected bucket; select an object and download its content from the S3 server; exchange the missing object content with neighboring EC2 nodes; and stop Amazon Web Services.

Fig 4.8 Flowchart of client module showing Non-Steal Algorithm.



4.8 Client Module for Steal Algorithm


This module acts as the client interface to the user. Here the user sets up clients that access the objects from the Amazon S3 server. Each client displays the content & size of its object; the other side of the GUI screen shows the content of the objects present at neighboring client nodes. This form displays the operations to be carried out at the clients of the project. The purpose of this module is to provide the user with a user-friendly mechanism for interacting with the Amazon S3 server, dividing the objects and exchanging the remaining objects among the neighboring clients. The fastest neighboring client steals work from its peer EC2 nodes & dynamically increases the download throughput.

Functionality: Clients display the content and size of their objects from the Amazon S3 bucket. The contents of the neighboring client nodes are displayed on the GUI. Finally, the steal algorithm is run to exchange the contents of the objects in the specified bucket. The fastest neighboring client steals work from its peer EC2 nodes & dynamically increases the download throughput.

Input: The input to this module is user commands to carry out Amazon S3 operations and check object contents.

Output: The output is the creation of text files that have all the contents of the objects. All the nodes will have the object contents after the steal algorithm is executed. Here the fastest node steals some work from the neighboring nodes, downloads the contents of the objects on their behalf and then exchanges the contents.
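The steal step described above can be illustrated with a small Java sketch: when a node drains its own download queue, it steals half of the remaining pieces from the neighbor with the largest backlog and downloads them on that neighbor's behalf. The structure and names are hypothetical, not the project's actual code:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class StealSketch {

    /** Moves half of the busiest queue into the idle node's queue; returns the stolen count. */
    public static int steal(List<Deque<Integer>> queues, int idleNode) {
        int victim = -1, max = 0;
        for (int n = 0; n < queues.size(); n++) {
            if (n != idleNode && queues.get(n).size() > max) {
                max = queues.get(n).size();
                victim = n;
            }
        }
        if (victim < 0) return 0;            // nothing left to steal
        int take = max / 2;
        for (int k = 0; k < take; k++) {
            // steal from the tail so the victim keeps working from the head
            queues.get(idleNode).addLast(queues.get(victim).pollLast());
        }
        return take;
    }

    public static void main(String[] args) {
        List<Deque<Integer>> queues = new ArrayList<>();
        queues.add(new ArrayDeque<Integer>());                          // fast node: already done
        queues.add(new ArrayDeque<>(Arrays.asList(3, 4, 5, 6)));        // slow node A
        queues.add(new ArrayDeque<>(Arrays.asList(7, 8)));              // slow node B
        int stolen = steal(queues, 0);
        System.out.println("stole " + stolen + " pieces: " + queues.get(0));
    }
}
```

In the real system each queue would hold S3 piece indices still to be downloaded, and the exchange of the stolen pieces' contents would follow over the node-to-node connections.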


The client flow is: start Amazon Web Services; store the access key and secret key from Amazon Web Services; create a new bucket & select a region; upload objects into the specified bucket; retrieve the list of objects present in the selected bucket; select an object and download its content from the S3 server; steal work from neighboring EC2 nodes; exchange the missing object content with neighboring EC2 nodes; and stop Amazon Web Services.

Fig 4.9 Flowchart of client module showing Steal Algorithm

CHAPTER 5 IMPLEMENTATION
5.1 Programming Language Selection

The language used in this project is Java; compilation is done in Eclipse, and Java is also necessary because the front end is built using Swing. A major reason for using Java in this project was the ease and control the language gives over system calls and interface design. The entire project stands on the Java API provided by Amazon Web Services to access the functionality of EC2 & S3. Hence it was important to choose a language which supports the design of a very good interface for interacting with the users of the system. Java is also efficient, since its core language set is relatively small.

A third advantage of Java is that developing the front end for the user is easy and efficient.

Java is an object-oriented programming language intended to serve as a new way to manage software complexity. Java refers to a number of computer software products and specifications from Sun Microsystems that together provide a system for developing application software and deploying it in a cross-platform environment.

5.2 Details of the platform used


The platform used for implementing the project is mainly Windows XP or the Windows 2003 / 2008 R2 editions. The reasons for this choice are as follows. Amazon EC2 makes it easy to start and manage Windows-based instances, and provides several pre-configured AMIs that allow you to start running instances in minutes. Once you choose an AMI, you can use the AWS Management Console to configure, launch, terminate, and even bundle your Amazon EC2 Windows instances. Moreover, you can employ a graphical interface to utilize all of the features of Amazon EC2. The operating system comes with the Java runtime environment, which is needed to run Java programs, while the Java Development Kit is used to compile the Java source code. Windows comes with a better-developed GUI that is familiar to most people. It also comes with a library that provides an array of system calls for the programmer to interact with hardware through files; this library can help the programmer create hardware-interacting files for communication, and front-end creation functions are defined as well. One of the best parts of Windows is that, due to its widespread use and popularity, there is a wealth of support information available.


5.3 Details of the Algorithms used


The algorithms used for the multicast operation on clouds must define some general requirements & fulfill them. Below we summarize the features of our proposed algorithms & describe the multicast algorithms in detail.

5.3.1 Requirements

The multicast operation on Amazon clouds has the following requirements:

Maximized utilization of available aggregate download throughput from cloud storage: The available throughput should scale according to the number of nodes, so the multicast algorithm achieves maximum utilization of the available aggregate download throughput despite any rapid changes in the underlying throughputs of individual nodes.

Minimization of the multicast completion time of each node: Usually all the nodes not only need to start to calculate as early as possible, but they also need to be able to commence the calculation by and large simultaneously to avoid the long-tail effect resulting in resource underutilization. A multicast operation should be stable, i.e. the data transfer finishes without any outliers even if the user's allocation scales to tens of thousands of nodes.

No dependence on monitoring of network throughput or estimation of network topology: Various previous works usually tried to achieve the above policies by network monitoring and/or estimation of the physical network topology. Such an approach is undesirable in clouds, as cloud compute resources are dynamically provisioned by the underlying cloud system. Any measurement results would generate significant overhead due to extensive monitoring requirements, while they would likely not be applicable across runs.

Table 5.1 shows the different requirements and characteristics of multicast algorithms for clusters, P2P systems, and clouds.

Multicast algorithms for clusters can achieve high performance by using an expensive algorithm that requires monitoring data to construct a high-throughput spanning tree, but they cannot adapt well to dynamic and unstable changes in network throughput. Multicast algorithms for P2P systems are scalable and can adapt well to network performance changes; however, it is difficult to use P2P algorithms to achieve high performance because they over-assume that not only the network but also the availability of the data and the nodes is unstable, resulting in significant overhead. Our algorithms achieve both high performance and scalability by combining the advantages of multicast algorithms for clusters and for P2P systems. In particular, we do not perform network monitoring, and we balance the network load by using the work-stealing technique.

                             Cluster            P2P             Clouds
Multicast topology           Spanning tree(s)   Overlay         Tree + overlay
Communication type           Push               Pull            Pull
Network performance          High               Low             Middle
Node proximity               Dense              Sparse          Dense
Node-to-node performance     Homogeneous        Heterogeneous   Heterogeneous
Storage-to-node performance  Homogeneous        Heterogeneous   Heterogeneous
Underlying network topology  Stable             Unstable        (Un)stable
Adapts to dynamic change     Bad                Good            Good

Table 5.1 Features of general and proposed multicast algorithms on each environment.

5.3.2 A brief overview of the Non-Steal algorithm

The multicast algorithm proposed by van de Geijn et al. is a well-known algorithm for cluster and multi-cluster environments. It achieves high-performance multicast operations and is often used in efficient MPI collectives implementations.

The Non-Steal algorithm consists of two phases: the scatter phase and the allgather phase.

In the scatter phase, the root node divides the data to be multicast into blocks of equal size, depending on the number of nodes. These blocks are then sent to each corresponding node using a binomial tree. After all the nodes have received the divided blocks, they start the allgather phase, in which the missing blocks are exchanged and collected using the recursive doubling technique.

Our non-steal algorithm is inspired by this algorithm. It also consists of a scatter phase and an allgather phase. All nodes cooperate to download and forward data from S3 to each EC2 node. Initially, none of the nodes has any part of the data stored in S3, so S3 corresponds to the multicast root node. Figure 5.1 depicts the two phases of the algorithm:

Fig. 5.1 Two Phases of the Algorithm.


Phase 1 (Scatter): The file to distribute is logically divided into P fixed-size pieces, e.g. 32 KB each, numbered from 0 to P - 1. When the number of nodes is N, each node i (0 <= i < N) is assigned the range of pieces

    [ i * P / N , (i + 1) * P / N - 1 ]  (integer division) ------------------ (1)

Node i then downloads all the pieces in its range from S3. Once a node finishes downloading its assigned range of pieces, it waits until all the other nodes have finished too.

Phase 2 (Allgather): After constructing a full overlay network among all the nodes, each node continuously exchanges information with its neighbors in the mesh about which pieces they have already obtained, and fetches missing pieces from them until all pieces are downloaded.

5.3.2.1 Example of the Non-Steal Algorithm

Consider three nodes (A, B, and C) that download the same 300 MB file from S3. Node B has a fast connection to S3 (10 MB/sec), while A and C have slow connections to S3 (2 MB/sec). The file is first logically split into 9600 pieces of 32 KB each (9600 * 32 KB = 300 MB). Initially, each node requests its assigned 100 MB from S3 (i.e. nodes A, B, and C request pieces 0-3199, 3200-6399, and 6400-9599, respectively). After approximately 10 seconds, node B will finish downloading its range. Nodes A and C, on the other hand, achieve slower throughput and will finish after 50 seconds. Since all nodes wait until everybody has finished, the total completion time of phase 1 is 50 seconds. Once phase 1 is finished, all nodes start phase 2 and exchange the pieces using a BitTorrent-like protocol. Each node connects to some neighbor nodes and sends a possession list indicating which pieces it has. For example, node i sends its possession list to node j. If the list contains a piece p that

node j has not yet obtained, node j requests node i to send piece p, which node i then returns. After node j obtains piece p, it informs all its neighbors that it now has piece p. The informed nodes then update their record of node j's possession list. All nodes communicate this way until they have obtained all pieces.
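As a quick check of the phase-1 assignment, a minimal sketch that computes each node's piece range (the helper name and the use of integer division are our assumptions) reproduces the ranges used in this example:

```java
public class PieceRanges {

    // Returns {first, last}: the piece indices assigned to node i of n,
    // for a file split into p pieces (Equation 1, integer division).
    public static int[] range(int i, int n, int p) {
        int first = i * p / n;
        int last = (i + 1) * p / n - 1;
        return new int[] { first, last };
    }

    public static void main(String[] args) {
        // 300 MB = 9600 pieces of 32 KB, shared among 3 nodes:
        // nodes A, B, C get 0-3199, 3200-6399, 6400-9599 respectively.
        for (int i = 0; i < 3; i++) {
            int[] r = range(i, 3, 9600);
            System.out.println("Node " + (char) ('A' + i) + ": " + r[0] + "-" + r[1]);
        }
    }
}
```

Because the ranges partition 0..P-1 exactly, no piece is downloaded twice from S3 in phase 1.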

5.3.3 A brief overview of the Steal algorithm

In the non-steal algorithm, all nodes globally synchronize with each other once they have downloaded their assigned ranges of pieces from S3. As shown in Figure 5.1, however, the completion time may vary greatly among the nodes due to significant and unpredictable variance in the download throughput. The Steal algorithm resolves this issue by stealing: when a node has downloaded its assigned range of pieces, it actively asks other nodes whether they still have pieces to download. If the node asked is still downloading, it splits its own assignment and hands over some of its pieces. This approach is similar to work stealing, in that a fast node steals some download work from a slower node. To make the steal algorithm efficient, we adapt the amount of stolen work to the download bandwidth from S3 observed by both nodes. The steal algorithm consists of two phases:

1. The scatter phase
2. The allgather phase

Phase 1 (Scatter): Similar to the non-steal algorithm, the file to distribute is logically split into P equally sized pieces, and each node i is assigned a range of pieces as shown in Equation (1). When node i has finished downloading its pieces, it asks other nodes whether they have any work remaining and reports its own download throughput Bi for the download it has just completed. Now assume that node j has W remaining pieces and that its current download throughput is Bj. Node j then divides W into Wi and Wj such that:

    Wi = ceil( W * Bi / (Bi + Bj) )  ------------- (2)
    Wj = floor( W * Bj / (Bi + Bj) ) ------------- (3)

Node j then returns Wi pieces to node i. Nodes i and j can then concurrently download Wi and Wj pieces, respectively. Hence, the amount of work each node downloads is proportional to its download bandwidth.

Phase 2 (Allgather): Similar to the non-steal algorithm, each node exchanges pieces within EC2 using a BitTorrent-like protocol until all the pieces have been obtained.

Note that phase 1 appears to rely on the heuristic that the download throughput is stable over a short period of time, while the instability we demonstrated in our earlier experiment would seem to render such a heuristic ineffective. However, we claim that it is still an effective strategy, for the following reason: even if the new bandwidth Bi turns out to be lower than the previously reported Bi, the consequences are limited. If Wi is small, the download will quickly terminate in any case; if Wi is large, it can be further split by some other node that has finished its own work, i.e. node i will then effectively be in the position of node j.

5.3.3.1 Example of the Steal Algorithm

Let us now apply the steal algorithm to the same example scenario described in Section 5.3.2.1. Initially, nodes A, B, and C are assigned ranges 0-3199, 3200-6399, and 6400-9599. After approximately 10 seconds from the start of the algorithm, the fast node B will finish its work. Node B will then request more work (i.e. steal) from, for example, node A. By then, node A will have downloaded about 20 MB of its assigned 100 MB (10 seconds * 2 MB/sec = 20 MB). This is equivalent to 640 pieces, and leaves 2560 pieces remaining to download from S3. Node B will then steal work from node A according to Equations (2) and (3), i.e. ceil(2560 * 10 / (2 + 10)) = 2134 pieces (indices 1066-3199). These pieces are returned to node B as

new work. Node A then resumes downloading the remaining floor(2560 * 2 / (2 + 10)) = 426 pieces (indices 640-1065). This process is repeated until all pieces have been downloaded from S3. Finally, the nodes exchange all downloaded pieces with each other in phase 2 of the algorithm.

Our steal algorithm applies not only to Amazon EC2/S3 but also to other cloud systems (e.g. Windows Azure Storage). Moreover, it can be applied to some parallel distributed file systems (e.g. Lustre and GFS) to obtain higher throughput, because per-connection bandwidth fluctuation can also occur in these environments. For example, GFS divides a file into many fixed-size chunks and distributes them to multiple chunk servers, so a user can achieve high aggregate I/O throughput by combining the individual I/O connections to and from the chunk servers. When several nodes access the same chunk server simultaneously, however, I/O contention arises, and the bandwidth of some links has been confirmed to decrease.
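The bandwidth-proportional split of Equations (2) and (3) can be sketched as follows. The class name is ours, and the ceiling/floor rounding is chosen so that the two parts always sum to W, matching the 2134/426 split in the example above:

```java
public class StealSplit {

    // Splits w remaining pieces between the stealing node i and the slow
    // node j in proportion to their observed S3 throughputs bi and bj.
    // Returns {Wi, Wj}: pieces stolen by node i and pieces kept by node j.
    public static int[] split(int w, double bi, double bj) {
        int wi = (int) Math.ceil(w * bi / (bi + bj)); // Equation (2)
        int wj = w - wi; // equals floor(w * bj / (bi + bj)), Equation (3)
        return new int[] { wi, wj };
    }

    public static void main(String[] args) {
        // Example above: W = 2560 pieces, Bi = 10 MB/sec, Bj = 2 MB/sec.
        int[] s = split(2560, 10.0, 2.0);
        System.out.println("stolen=" + s[0] + " kept=" + s[1]); // stolen=2134 kept=426
    }
}
```

Because Wi + Wj = W by construction, no piece is lost or duplicated when the assignment is split, regardless of rounding.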

5.4 Coding Standard


The project is coded in Java, with the entire project divided into modules based on functionality. An object-oriented approach and the waterfall model were used in developing this project. The code is commented to make it more readable.

5.5 Challenges / Difficulties faced and the strategies used.


During implementation of the project, several difficulties and problems were faced. They are listed below, along with the strategies used to tackle them, so that these strategies can be reused by others facing similar challenges.

Some of the challenges faced:
1. The need for high-speed Internet connectivity.
2. Choosing the right AMI and EC2 instance.
3. Creating the correct S3 bucket.
4. Launching the EC2 instance and configuring it as per our requirements.
5. Importing/copying the Amazon AWS jar files and API.

Some of the strategies used to tackle these challenges:
1. Always run the project with high-speed Internet available, for proper execution and results.
2. Choose the right AMI and EC2 instance: we selected the Windows Server 2008 Express Edition AMI and a Micro EC2 instance, as they satisfy the requirements of the project.
3. An S3 bucket must have a unique name and region. While creating one, care should be taken to avoid capital letters in the bucket name; always use lowercase letters.
4. Select the correct security group and configure it with secure communication ports, which enables easy and secure communication among the EC2 nodes.
5. Importing the AWS Java API into the code eases our S3 operations, and copying the jar files into the Java SDK modules enables us to work with AWS.
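Strategy 3 can be made mechanical with a small local check before calling S3. The rules encoded below (3-63 characters; lowercase letters, digits, dots, and hyphens; beginning and ending with a letter or digit) are a simplified subset of the S3 naming restrictions, not an exhaustive validator:

```java
public class BucketNameCheck {

    // Rejects names that S3 would refuse, e.g. names with capital letters.
    public static boolean isValid(String name) {
        return name != null
                && name.matches("[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]");
    }

    public static void main(String[] args) {
        System.out.println(isValid("anicloud")); // true
        System.out.println(isValid("AniCloud")); // false: capital letters
    }
}
```

Validating locally avoids a rejected CreateBucket request and the confusing errors that follow from a bucket that silently failed to be created.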

5.6 Amazon EC2 / S3 Registration & its various operations


Fig. 5.1 Webpage of Amazon Web Services.

Fig. 5.2 Registering for Amazon Web Services


Fig. 5.3 Access Key ID & Secret Key generated as Access credentials.


Fig. 5.4 X.509 Certificate Created as Access credentials


Fig. 5.5 Signing Up for Amazon S3

Fig. 5.6 Signing Up for Amazon EC2



Fig. 5.7 Uploading a few files to the Anicloud bucket on S3

Fig. 5.8 S3 Showing all the uploaded files



Fig. 5.9 Amazon EC2 Console Dashboard

Fig. 5.10 Starting an Amazon EC2 Instance.



Fig. 5.11 Running EC2 Instance showing Public DNS.

Fig. 5.12 Security groups Configured for EC2 Instance.



Fig. 5.13 Key Pairs created for EC2 Instance.

5.7 Module implementation


5.7.1 Source code for insert module
import java.awt.BorderLayout;
import java.awt.Container;
import java.awt.FlowLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.nio.file.Files;
import javax.swing.*;
import javax.swing.border.BevelBorder;
import javax.swing.border.LineBorder;
import javax.swing.border.SoftBevelBorder;



import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

/**
 * This code was edited or generated using CloudGarden's Jigloo
 * SWT/Swing GUI Builder, which is free for non-commercial use.
 */
public class Insert extends javax.swing.JFrame implements ActionListener {

    {
        // Set Look & Feel
        try {
            javax.swing.UIManager.setLookAndFeel(
                    "com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private JMenuBar jMenuBar1;
    private JMenu jMenu5;
    private JMenuItem helpMenuItem;
    private JLabel jLabel4;
    private JLabel jLabel5;
    private JComboBox cboxGender;
    private JComboBox cboxCity;
    private JComboBox cboxBGrp;
    private JButton btnClear;
    private JButton btnSubmit;
    private JTextField txtContact;
    private JTextField txtAge;
    JTextField txtName;
    private JTextField txtLoc;
    private JLabel jLabel9;



    private JLabel errLab;
    private JLabel jLabel7;
    private JLabel jLabel6;
    private JLabel jLabel3;
    private JLabel jLabel2;
    private JLabel jLabel1;
    private JLabel jLabel8;
    private JLabel insert;
    private JMenuItem exitMenuItem;
    private JSeparator jSeparator2;
    private JMenuItem searchMenuItem;
    private JMenuItem insertMenuItem;
    private JMenu jMenu3;

    /**
     * Auto-generated main method to display this JFrame
     */
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                Insert inst = new Insert();
                inst.setLocationRelativeTo(null);
                inst.setVisible(true);
                inst.setBounds(0, 0, 700, 550);
            }
        });
    }

    public Insert() {
        super();
        initGUI();
    }

    private void initGUI() {
        try {
            {
                getContentPane().setLayout(null);
                getContentPane().setBackground(new java.awt.Color(82, 206, 228));
                this.setIconImage(new ImageIcon(getClass().getClassLoader()
                        .getResource("GiveBloodGiveLifeNew copy.jpg")).getImage());
                {
                    jLabel1 = new JLabel();
                    getContentPane().add(jLabel1, "2, 0 1 1");
                    jLabel1.setText("Blood Bank Management System Using Clouds");



                    jLabel1.setFont(new java.awt.Font("Broadway", 3, 20));
                    jLabel1.setBounds(60, 16, 556, 32);
                    jLabel1.setForeground(new java.awt.Color(255, 0, 0));
                    jLabel1.setBackground(new java.awt.Color(0, 128, 128));
                    jLabel1.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.RAISED, new java.awt.Color(255, 0, 0),
                            new java.awt.Color(128, 255, 0), null,
                            new java.awt.Color(0, 255, 255)));
                    jLabel1.setDisabledIcon(new ImageIcon(
                            getClass().getClassLoader().getResource("blood1.JPG")));
                }
                {
                    insert = new JLabel();
                    getContentPane().add(insert, "2,3,1,1");
                    insert.setText("Add Details");
                    insert.setFont(new java.awt.Font("Andalus", 3, 22));
                    insert.setBounds(60, 68, 153, 32);
                    insert.setForeground(new java.awt.Color(255, 0, 0));
                }
                {
                    jLabel2 = new JLabel();
                    getContentPane().add(jLabel2, "2, 4 1 1");
                    jLabel2.setText("Donor Name");
                    jLabel2.setFont(new java.awt.Font("Tahoma", 1, 11));
                    jLabel2.setBounds(49, 110, 108, 24);
                }
                {
                    jLabel3 = new JLabel();
                    getContentPane().add(jLabel3, "2, 6 1 1");
                    jLabel3.setText("Gender");
                    jLabel3.setFont(new java.awt.Font("Tahoma", 1, 11));
                    jLabel3.setBounds(47, 149, 106, 19);
                }
                {
                    jLabel4 = new JLabel();
                    getContentPane().add(jLabel4, "2, 6 1 1");
                    jLabel4.setText("Age");
                    jLabel4.setBounds(47, 182, 81, 23);
                    jLabel4.setFont(new java.awt.Font("Tahoma", 1, 11));
                }



                {
                    jLabel5 = new JLabel();
                    getContentPane().add(jLabel5);
                    jLabel5.setText("Blood Group");
                    jLabel5.setBounds(47, 226, 81, 14);
                    jLabel5.setFont(new java.awt.Font("Tahoma", 1, 11));
                }
                {
                    jLabel6 = new JLabel();
                    getContentPane().add(jLabel6);
                    jLabel6.setText("Location/City");
                    jLabel6.setBounds(47, 266, 81, 14);
                    jLabel6.setFont(new java.awt.Font("Tahoma", 1, 11));
                }
                {
                    jLabel7 = new JLabel();
                    getContentPane().add(jLabel7);
                    jLabel7.setText("Contact No.");
                    jLabel7.setBounds(47, 344, 88, 14);
                    jLabel7.setFont(new java.awt.Font("Tahoma", 1, 11));
                }
                {
                    txtName = new JTextField();
                    getContentPane().add(txtName);
                    txtName.setBounds(157, 112, 159, 20);
                    txtName.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    ComboBoxModel cboxGenderModel = new DefaultComboBoxModel(
                            new String[] { "<Select Gender>", "M", "F" });
                    cboxGender = new JComboBox();
                    getContentPane().add(cboxGender);
                    cboxGender.setModel(cboxGenderModel);
                    cboxGender.setBounds(157, 148, 159, 20);
                    cboxGender.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    txtAge = new JTextField();
                    getContentPane().add(txtAge);



                    txtAge.setBounds(157, 183, 159, 20);
                    txtAge.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    txtContact = new JTextField();
                    getContentPane().add(txtContact);
                    txtContact.setBounds(158, 338, 156, 20);
                    txtContact.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    btnSubmit = new JButton();
                    getContentPane().add(btnSubmit);
                    btnSubmit.setText("Submit");
                    btnSubmit.setBounds(158, 399, 77, 28);
                    btnSubmit.setFont(new java.awt.Font("Tahoma", 1, 11));
                    btnSubmit.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.RAISED, new java.awt.Color(255, 255, 0),
                            new java.awt.Color(255, 0, 0),
                            new java.awt.Color(255, 255, 0),
                            new java.awt.Color(255, 0, 0)));
                    btnSubmit.addActionListener(this);
                }
                {
                    btnClear = new JButton();
                    getContentPane().add(btnClear);
                    btnClear.setBounds(245, 399, 83, 27);
                    btnClear.setText("Clear");
                    btnClear.setFont(new java.awt.Font("Tahoma", 1, 11));
                    btnClear.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.RAISED, new java.awt.Color(255, 255, 0),
                            new java.awt.Color(255, 0, 0),
                            new java.awt.Color(255, 255, 128),
                            new java.awt.Color(255, 0, 0)));
                    btnClear.setForeground(new java.awt.Color(0, 0, 0));
                    btnClear.addActionListener(this);
                }
                {
                    ComboBoxModel cboxBGrpModel = new DefaultComboBoxModel(
                            new String[] { "<Select BloodGroup>", "Aplus", "Aminus",
                                    "Bplus", "Bminus", "Oplus", "Ominus" });
                    cboxBGrp = new JComboBox();



                    getContentPane().add(cboxBGrp);
                    cboxBGrp.setModel(cboxBGrpModel);
                    cboxBGrp.setBounds(158, 223, 158, 20);
                    cboxBGrp.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    ComboBoxModel cboxCityModel = new DefaultComboBoxModel(
                            new String[] { "<Select City>", "Bengaluru", "Mysore",
                                    "Hyderabad" });
                    cboxCity = new JComboBox();
                    getContentPane().add(cboxCity);
                    cboxCity.setModel(cboxCityModel);
                    cboxCity.setBounds(158, 260, 158, 20);
                    cboxCity.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.LOWERED, null,
                            new java.awt.Color(128, 128, 0), null, null));
                }
                {
                    jLabel8 = new JLabel(new ImageIcon("b.jpg"));
                    getContentPane().add(jLabel8);
                    jLabel8.setBounds(393, 154, 203, 180);
                    jLabel8.setBorder(BorderFactory.createBevelBorder(
                            BevelBorder.RAISED, new java.awt.Color(255, 0, 0),
                            new java.awt.Color(255, 0, 0),
                            new java.awt.Color(0, 255, 0),
                            new java.awt.Color(0, 255, 0)));
                    jLabel8.setIcon(new ImageIcon(
                            getClass().getClassLoader().getResource("1.jpg")));
                }
                {
                    errLab = new JLabel();
                    getContentPane().add(errLab);
                    errLab.setBounds(383, 93, 176, 29);
                }
                {
                    jLabel9 = new JLabel();
                    getContentPane().add(jLabel9);
                    jLabel9.setText("Location");
                    jLabel9.setBounds(49, 304, 88, 14);
                    jLabel9.setFont(new java.awt.Font("Tahoma", 1, 11));
                }
                {



                    txtLoc = new JTextField();
                    getContentPane().add(txtLoc);
                    txtLoc.setBounds(158, 301, 156, 20);
                    txtLoc.setBorder(BorderFactory.createBevelBorder(BevelBorder.LOWERED));
                }
            }
            this.setSize(700, 550);
            {
                jMenuBar1 = new JMenuBar();
                setJMenuBar(jMenuBar1);
                jMenuBar1.setBackground(new java.awt.Color(0, 255, 255));
                {
                    jMenu3 = new JMenu();
                    jMenuBar1.add(jMenu3);
                    jMenu3.setText("Home");
                    jMenu3.setBackground(new java.awt.Color(0, 128, 0));
                    {
                        insertMenuItem = new JMenuItem();
                        jMenu3.add(insertMenuItem);
                        insertMenuItem.setText("Insert");
                    }
                    {
                        searchMenuItem = new JMenuItem();
                        jMenu3.add(searchMenuItem);
                        searchMenuItem.setText("Search");
                        searchMenuItem.addActionListener(this);
                    }
                    {
                        jSeparator2 = new JSeparator();
                        jMenu3.add(jSeparator2);
                    }
                    {
                        exitMenuItem = new JMenuItem();
                        jMenu3.add(exitMenuItem);
                        exitMenuItem.setText("Exit");
                        exitMenuItem.addActionListener(this);
                    }
                }
                {
                    jMenu5 = new JMenu();
                    jMenuBar1.add(jMenu5);
                    jMenu5.setText("Help");
                    jMenu5.setBackground(new java.awt.Color(128, 128, 64));



                    {
                        helpMenuItem = new JMenuItem();
                        jMenu5.add(helpMenuItem);
                        helpMenuItem.setText("Help");
                        helpMenuItem.addActionListener(this);
                    }
                }
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == btnClear) {
            txtName.setText("");
            txtAge.setText("");
            txtContact.setText("");
            txtLoc.setText("");
        } else if (e.getSource() == exitMenuItem) {
            System.exit(0);
        } else if (e.getSource() == searchMenuItem) {
            Search s = new Search();
            s.setVisible(true);
            this.setVisible(false);
        } else if (e.getSource() == btnSubmit) {
            String city = (String) cboxCity.getSelectedItem();
            String bgrp = (String) cboxBGrp.getSelectedItem();
            String gen = (String) cboxGender.getSelectedItem();
            // The blood group names the bucket; the city names the object key.
            String bucketName = (String) cboxBGrp.getSelectedItem();
            String key = (String) cboxCity.getSelectedItem();

            String text = txtName.getText() + " " + gen + " " + txtAge.getText()
                    + " " + txtLoc.getText() + " " + txtContact.getText();



            try {
                AmazonS3 s3Client = new AmazonS3Client(new PropertiesCredentials(
                        S3Sample.class.getResourceAsStream("AwsCredentials.properties")));

                System.out.println("Downloading an object");

                GetObjectRequest rangeObjectRequest =
                        new GetObjectRequest(bucketName, key);
                S3Object objectPortion = s3Client.getObject(rangeObjectRequest);

                System.out.println("Printing bytes retrieved.");
                displayTextInputStream(objectPortion.getObjectContent(), key);
            } catch (IOException e1) {
            } catch (AmazonServiceException ase) {
                System.out.println("Caught an AmazonServiceException, which means"
                        + " your request made it to Amazon S3, but was rejected"
                        + " with an error response for some reason.");
                System.out.println("Error Message: " + ase.getMessage());
                System.out.println("HTTP Status Code: " + ase.getStatusCode());
                System.out.println("AWS Error Code: " + ase.getErrorCode());
                System.out.println("Error Type: " + ase.getErrorType());
                System.out.println("Request ID: " + ase.getRequestId());
            } catch (AmazonClientException ace) {
                System.out.println("Caught an AmazonClientException, which means"
                        + " the client encountered an internal error while trying"
                        + " to communicate with S3, such as not being able to"
                        + " access the network.");



                System.out.println("Error Message: " + ace.getMessage());
            }

            try {
                // Append the new donor record to the local copy of the object.
                FileWriter fstream = new FileWriter(key, true);
                BufferedWriter out = new BufferedWriter(fstream);
                out.write(text);
                out.newLine();

                AmazonS3 s3client = new AmazonS3Client(new PropertiesCredentials(
                        S3Sample.class.getResourceAsStream("AwsCredentials.properties")));
                try {
                    System.out.println("Uploading a new object to S3 from a file\n");

                    File file = new File(key);
                    s3client.putObject(bucketName, key, file);
                } catch (AmazonServiceException ase) {
                    System.out.println("Caught an AmazonServiceException, which means"
                            + " your request made it to Amazon S3, but was rejected"
                            + " with an error response for some reason.");
                    System.out.println("Error Message: " + ase.getMessage());
                    System.out.println("HTTP Status Code: " + ase.getStatusCode());
                    System.out.println("AWS Error Code: " + ase.getErrorCode());
                    System.out.println("Error Type: " + ase.getErrorType());
                    System.out.println("Request ID: " + ase.getRequestId());
                } catch (AmazonClientException ace) {
                    System.out.println("Caught an AmazonClientException, which means"
                            + " the client encountered an internal error while trying"
                            + " to communicate with S3,"



                            + " such as not being able to access the network.");
                    System.out.println("Error Message: " + ace.getMessage());
                }

                // Close the output stream
                out.close();
            } catch (Exception e1) { // Catch exception if any
                System.err.println("Error: " + e1.getMessage());
            }

            System.out.println("Your file has been written");
            jLabel8.setText("Inserted.......!");
        } else if (e.getSource() == helpMenuItem) {
            this.setVisible(false);
            Help h = new Help();
            h.setVisible(true);
        }
    }

    private static void displayTextInputStream(InputStream input, String key)
            throws IOException {
        // Append the downloaded object's lines to the local file named by key.
        PrintWriter pw = new PrintWriter(new FileWriter(key, true));
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(input));
            while (true) {
                String line = reader.readLine();
                if (line == null) break;
                pw.println(line);
                System.out.println("written successfully " + line);



            }
        } catch (Exception e) {
        } finally {
            if (pw != null) {
                pw.close();
            }
        }
    }
}

5.7.2 Source code for search module


import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;
import javax.swing.border.BevelBorder;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.UUID;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.S3Object;



import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

/**
 * This code was edited or generated using CloudGarden's Jigloo
 * SWT/Swing GUI Builder, which is free for non-commercial use.
 */
public class Search extends javax.swing.JFrame implements ActionListener {

    {
        // Set Look & Feel
        try {
            javax.swing.UIManager.setLookAndFeel(
                    "com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private JMenuBar jMenuBar1;
    private JMenu jMenu5;
    private JMenuItem helpMenuItem;
    private JLabel jLabel5;
    private static JLabel jLabel4;
    private JLabel jLabel3;
    private JComboBox cboxBGrp;
    private JComboBox cboxCity;
    private JButton btnClear;
    private JButton btnSubmit;
    private JLabel jLabel2;
    private JLabel jLabel1;
    private JMenuItem exitMenuItem;
    private JSeparator jSeparator2;
    private JMenuItem insertMenuItem;
    private JMenuItem searchMenuItem;
    private JMenu jMenu3;

    /**
     * Auto-generated main method to display this JFrame
     */
    public static void main(String[] args) {


        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                Search inst = new Search();
                inst.setLocationRelativeTo(null);
                inst.setVisible(true);
                inst.setBounds(0, 0, 700, 550);
            }
        });
    }

    public Search() {
        super();
        initGUI();
    }

    private void initGUI() {
        try {
            {
                getContentPane().setLayout(null);
                getContentPane().setBackground(new java.awt.Color(21, 193, 230));
                getContentPane().setForeground(new java.awt.Color(0, 255, 0));
                this.setFont(new java.awt.Font("Arial", 0, 10));
                {
                    jLabel1 = new JLabel();
                    getContentPane().add(jLabel1, "2, 0 1 1");
                    jLabel1.setText("Blood Bank Management System Using Clouds");
                    jLabel1.setFont(new java.awt.Font("Andalus", 3, 22));
                    jLabel1.setBounds(60, 37, 538, 32);
                    jLabel1.setForeground(new java.awt.Color(255, 0, 0));
                }
                {
                    jLabel2 = new JLabel();
                    getContentPane().add(jLabel2, "2, 4 1 1");
                    jLabel2.setText("Location");
                    jLabel2.setFont(new java.awt.Font("Tahoma", 1, 11));
                    jLabel2.setBounds(59, 150, 108, 24);
                }
                {
                    jLabel5 = new JLabel();
                    getContentPane().add(jLabel5);
                    jLabel5.setText("Blood Group");
                    jLabel5.setBounds(60, 200, 81, 14);
                    jLabel5.setFont(new java.awt.Font("Tahoma", 1, 11));



            }
            {
                btnSubmit = new JButton();
                getContentPane().add(btnSubmit);
                btnSubmit.setText("Download");
                btnSubmit.setBounds(134, 296, 100, 28);
                btnSubmit.setFont(new java.awt.Font("Tahoma", 1, 11));
                btnSubmit.setBorder(BorderFactory.createBevelBorder(
                    BevelBorder.RAISED, null, new java.awt.Color(255, 0, 128),
                    new java.awt.Color(0, 128, 0), null));
                btnSubmit.addActionListener(this);
            }
            {
                btnClear = new JButton();
                getContentPane().add(btnClear);
                btnClear.setBounds(233, 297, 83, 27);
                btnClear.setText("Clear");
                btnClear.setFont(new java.awt.Font("Tahoma", 1, 11));
                btnClear.setBorder(BorderFactory.createBevelBorder(
                    BevelBorder.RAISED, null, new java.awt.Color(255, 0, 128),
                    new java.awt.Color(0, 128, 0), null));
                btnClear.addActionListener(this);
            }
            {
                ComboBoxModel cboxCityModel = new DefaultComboBoxModel(
                    new String[] { "<Select City>", "Bengaluru", "Mysuru", "Hyderabad" });
                cboxCity = new JComboBox();
                getContentPane().add(cboxCity);
                cboxCity.setModel(cboxCityModel);
                cboxCity.setBounds(158, 152, 158, 20);
                cboxCity.setBorder(BorderFactory.createBevelBorder(
                    BevelBorder.LOWERED, null, null, null, new java.awt.Color(0, 128, 0)));
            }
            {
                ComboBoxModel cboxBGrpModel = new DefaultComboBoxModel(
                    new String[] { "<Select BloodGroup>", "Aplus", "Aminus",
                                   "Bplus", "Bminus", "Oplus", "Ominus" });
                cboxBGrp = new JComboBox();
                getContentPane().add(cboxBGrp);
                cboxBGrp.setModel(cboxBGrpModel);
                cboxBGrp.setBounds(158, 196, 158, 20);
                cboxBGrp.setBorder(BorderFactory.createBevelBorder(
                    BevelBorder.LOWERED, null, null, null, new java.awt.Color(0, 128, 0)));
            }
            {



                jLabel3 = new JLabel();
                getContentPane().add(jLabel3);
                jLabel3.setText("Searching:");
                jLabel3.setBounds(60, 115, 98, 20);
                jLabel3.setFont(new java.awt.Font("Tahoma", 3, 14));
                jLabel3.setForeground(new java.awt.Color(255, 0, 0));
            }
            {
                jLabel4 = new JLabel(new ImageIcon("1.jpg"));
                getContentPane().add(jLabel4);
                jLabel4.setIcon(new ImageIcon(
                    getClass().getClassLoader().getResource("1.jpg")));
                jLabel4.setBounds(377, 122, 203, 189);
                jLabel4.setBorder(BorderFactory.createBevelBorder(
                    BevelBorder.RAISED, new java.awt.Color(0, 255, 0),
                    new java.awt.Color(255, 0, 0), new java.awt.Color(255, 128, 0),
                    new java.awt.Color(255, 255, 0)));
            }
            {
                JTextArea resultTA = new JTextArea("This is a JScrollPane test", 10, 80);
                scrollingResult = new JScrollPane(resultTA);
                getContentPane().add(scrollingResult);
                resultTA.setBounds(500, 155, 100, 100);
            }
        }
        this.setSize(700, 550);
        {
            jMenuBar1 = new JMenuBar();
            setJMenuBar(jMenuBar1);
            jMenuBar1.setBackground(new java.awt.Color(0, 255, 255));
            {
                jMenu3 = new JMenu();
                jMenuBar1.add(jMenu3);
                jMenu3.setText("Home");
                jMenu3.setBackground(new java.awt.Color(64, 128, 128));
                {
                    insertMenuItem = new JMenuItem();
                    jMenu3.add(insertMenuItem);
                    insertMenuItem.setText("Insert");
                    insertMenuItem.addActionListener(this);
                }
                {
                    searchMenuItem = new JMenuItem();
                    jMenu3.add(searchMenuItem);
                    searchMenuItem.setText("Search");
                    searchMenuItem.addActionListener(this);



                }
                {
                    jSeparator2 = new JSeparator();
                    jMenu3.add(jSeparator2);
                }
                {
                    exitMenuItem = new JMenuItem();
                    jMenu3.add(exitMenuItem);
                    exitMenuItem.setText("Exit");
                    exitMenuItem.addActionListener(this);
                }
            }
            {
                jMenu5 = new JMenu();
                jMenuBar1.add(jMenu5);
                jMenu5.setText("Help");
                jMenu5.setBackground(new java.awt.Color(128, 128, 64));
                {
                    helpMenuItem = new JMenuItem();
                    jMenu5.add(helpMenuItem);
                    helpMenuItem.setText("Help");
                    helpMenuItem.addActionListener(this);
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

@Override
public void actionPerformed(ActionEvent e) {
    if (e.getSource() == exitMenuItem) {
        System.exit(0);
    } else if (e.getSource() == insertMenuItem) {
        this.setVisible(false);
        Insert ins = new Insert();
        ins.setVisible(true);
    } else if (e.getSource() == btnSubmit) {
        // The S3 bucket is named after the blood group, the key after the city.
        String bucketName = (String) cboxBGrp.getSelectedItem();
        String key = (String) cboxCity.getSelectedItem();
        try {
            AmazonS3 s3Client = new AmazonS3Client(new PropertiesCredentials(



                S3Sample.class.getResourceAsStream("AwsCredentials.properties")));
            System.out.println("Downloading an object");
            GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key);
            S3Object objectPortion = s3Client.getObject(rangeObjectRequest);
            System.out.println("Printing bytes retrieved.");
            displayTextInputStream(objectPortion.getObjectContent(), key);
        } catch (IOException e1) {
            e1.printStackTrace();
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which means your "
                + "request made it to Amazon S3, but was rejected with an error "
                + "response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means the "
                + "client encountered an internal error while trying to "
                + "communicate with S3, such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }

    else if (e.getSource() == helpMenuItem) {
        this.setVisible(false);
        Help h = new Help();



h.setVisible(true);

} }

private static void displayTextInputStream(InputStream input, String key)
        throws IOException {
    // Copy the S3 object's content, line by line, into a local file named after the key.
    PrintWriter pw = new PrintWriter(new FileWriter(key));
    try {
        BufferedReader reader = new BufferedReader(new InputStreamReader(input));
        String line;
        while ((line = reader.readLine()) != null) {
            pw.println(line);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        jLabel4.setText("Data Downloaded Successfully.......!");
        pw.close();
    }
}
} // end of class Search


CHAPTER 6 BLOOD BANK MANAGEMENT SYSTEM

6.1 Problem Definition


Every year our nation requires about 4 crore units of blood, of which only a meagre 5 lakh units are available. It is not that people do not want to donate blood; often they are unaware of the need, and they have no proper facility to enquire about it. As a result, needy people end up going through a lot of pain. India has many blood banks, all functioning in a decentralised fashion. In the current system, individual hospitals have their own blood banks and there is no interaction between blood banks. The management is ad hoc, with no semblance of organisation or standard operating procedures. Donors cannot access blood from blood banks other than the bank where they have donated blood.

6.2 Present System


All the blood banks are attached to hospitals; there is no stand-alone blood bank. As each hospital has its own systems and limitations, coordination between the blood banks is practically impossible. Because of the low number of donors and the large number of blood banks, the efficiency and quality of blood banks are low, resulting in wastage of blood and blood components. The challenges in the present system are:
1. Some hospitals have individual blood banks.
2. Some hospitals do not have blood banks.
3. Donors do not have any record of their donations or information related to their blood diseases.

6.3 Proposed System


An efficient blood bank management system should be developed with the aim of ensuring that every patient has access to an adequate quantity of safe blood in a centralised manner. The system should solve the issues of demand and wastage and lead to self-sufficiency in blood requirement. It should also encourage new donors and retain existing donors.


CHAPTER 7 RESULTS & CONCLUSION


7.1 RESULTS
The following screenshots show the successful execution of the programs, which perform various operations on the Amazon S3 cloud. As noted earlier, phase 1 (Scatter) is the same for both the non-steal and the steal algorithm, so the same bucket operations are performed for both algorithms. Various text files are created to show that the contents of the objects were divided and distributed successfully to all computational nodes.
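The Scatter phase referred to above divides an object's byte range evenly among the computational nodes before any downloading begins. The sketch below illustrates that partitioning step; the class name, method name, and even-split policy are assumptions for illustration, not taken from the project code:

```java
import java.util.ArrayList;
import java.util.List;

public class Scatter {
    /** Splits [0, size) into n contiguous ranges, each given as {start, endExclusive}. */
    public static List<long[]> partition(long size, int n) {
        List<long[]> ranges = new ArrayList<>();
        long base = size / n, rem = size % n, start = 0;
        for (int i = 0; i < n; i++) {
            long len = base + (i < rem ? 1 : 0); // spread the remainder over the first nodes
            ranges.add(new long[] { start, start + len });
            start += len;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // A 10-byte object scattered over 3 nodes: 0-4, 4-7, 7-10
        for (long[] r : partition(10, 3))
            System.out.println(r[0] + "-" + r[1]);
    }
}
```

Each node then downloads only its own range, which is why the Scatter phase is identical for the steal and non-steal variants.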


Figure 7.1 Amazon S3 Cloud Operations. This screenshot shows the starting point of the project's execution.


Figure 7.2 Various Options for Amazon S3 Cloud Operations. This screenshot displays the various operations that can be done on Amazon S3, such as bucket creation and deletion, object upload and download, and listing the objects in a specified bucket.


Figure 7.3 Bucket Names listed at Amazon S3. This screenshot displays the list of buckets already present in the user's Amazon S3 account. This is the result of clicking Refresh Bucket Listing in the menu list.


Figure 7.4 Creation of a New Bucket at Amazon S3. This screenshot displays the creation of a new bucket at S3. This is the result of clicking Create Bucket in the menu list.


Figure 7.5 Updated bucket list at Amazon S3. This screenshot displays the updated list, showing the new bucket bgs-sjbit. This is the result of clicking Refresh Bucket Listing in the menu list.


Figure 7.6 Deletion of a bucket from Amazon S3. This screenshot displays the deletion of an existing bucket at S3; the contents of the bucket must be deleted first, and only then can the bucket itself be deleted. This is the result of clicking Delete Bucket in the menu list.


Figure 7.7 Uploading an Object to a Bucket at Amazon S3. This screenshot displays the uploading of objects to a bucket at S3. This is the result of clicking Object Update in the menu list. The user can upload any files/objects to the bucket. Figures 7.8, 7.9 and 7.10 show the detailed process of object updating.


Figure 7.8 Choosing an object to upload to Amazon S3.


Figure 7.9 Object Updating in process.

Figure 7.10 Object Updating completed at Amazon S3 Bucket.



Figure 7.11 Listing Objects in an Amazon S3 bucket. This screenshot displays the object list of the highlighted bucket. This is the result of clicking Object Listing in the menu list.


Figure 7.12 Objects Listed in the Bucket named Animesh. This screenshot displays the object list of the Animesh bucket; currently this bucket has only one object, named n1bucket.java.


Figure 7.13 Downloading Objects from an Amazon S3 bucket. This screenshot displays the downloading of the selected objects from the bucket. This is the result of clicking Object Download in the main menu.


Figure 7.14 Number of nodes to receive the downloaded objects. This screenshot displays the downloading of the selected objects from the bucket to the computing nodes. Here the user can specify the number of nodes to which the contents of the objects are to be downloaded. After successful execution of the algorithms, all the nodes will have the complete object content.


There are three EC2 nodes executing the non-steal algorithm; the design of all three nodes is shown below in Figures 7.15 to 7.17. These nodes have several buttons to support the execution of the algorithm. The List button displays the object range for the specified node. The Download button is activated once the object range is selected; the contents in that range are then downloaded to files. The Show button displays the object ranges of the neighbouring nodes. The Share button is activated once the object range of the neighbouring nodes is selected; the contents are then shared among all other nodes.
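The Download step above retrieves only the bytes of a node's assigned range. The sketch below illustrates that idea with a local byte array standing in for the S3 object, so it is runnable offline; with the AWS SDK this would correspond to a GetObjectRequest carrying a byte range. The class and method names are illustrative, not from the project code:

```java
import java.util.Arrays;

public class RangeDownload {
    /**
     * Stand-in for a ranged GET: returns bytes [start, end) of the object.
     * A local array replaces the remote S3 object so the idea can run anywhere.
     */
    public static byte[] downloadRange(byte[] object, int start, int end) {
        return Arrays.copyOfRange(object, start, end);
    }

    public static void main(String[] args) {
        byte[] object = "ABCDEFGHIJ".getBytes();
        byte[] node2Part = downloadRange(object, 4, 7); // Node 2's assigned range
        System.out.println(new String(node2Part));      // EFG
    }
}
```

After each node downloads its own range, the Share step exchanges the pieces so every node ends up with the full object.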

Figure 7.15 EC2 Node 1 Executing Non Steal Algorithm.


Figure 7.16 EC2 Node 2 Executing Non Steal Algorithm.

Figure 7.17 EC2 Node 3 Executing Non Steal Algorithm.



Figure 7.18 EC2 Node 1 showing the object ranges.

Figure 7.19 EC2 Node 2 showing the object ranges.



Figure 7.20 EC2 Node 3 showing the object ranges.

There are three EC2 nodes executing the steal algorithm; the design of all three nodes is shown below in Figures 7.21 to 7.23. These nodes have several buttons to support the execution of the algorithm. The List button displays the object range for the specified node. The Download button is activated once the object range is selected; the contents in that range are then downloaded to files. The Show button displays the object ranges of the neighbouring nodes. The Share button is activated once the object range of the neighbouring nodes is selected; the contents are then shared among all other nodes.


The Recv button is present only at EC2 Node 1 and receives work requests to download object content on behalf of Node 2 and Node 3. The Req button is present at both Node 2 and Node 3 and is activated to request a download from Node 1: the range of the remaining objects is sent to Node 1, which downloads the contents and then sends them to the requesting node. The Receive button is present at both Node 2 and Node 3 and is activated to receive the downloaded contents from Node 1.
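The request/receive exchange described above amounts to handing the undownloaded remainder of a node's range to the helper node. The toy sketch below shows just that split; the names and the simple split rule are assumptions for illustration, and the real protocol also involves socket messages between the nodes:

```java
public class StealDemo {
    /**
     * A slow node keeps the part of its range it already downloaded;
     * the helper (Node 1) takes over the remainder {split, end}.
     */
    public static long[] steal(long[] range, long downloaded) {
        long split = range[0] + downloaded;
        return new long[] { split, range[1] };
    }

    public static void main(String[] args) {
        long[] node2Range = { 400, 1000 };      // bytes assigned to Node 2
        long[] stolen = steal(node2Range, 150); // Node 2 only managed 150 bytes so far
        // the remainder, 550-1000, is requested from Node 1
        System.out.println(stolen[0] + "-" + stolen[1]);
    }
}
```

Because the split point tracks how much the slow node actually downloaded, the steal algorithm adapts automatically when per-node throughput from S3 fluctuates.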

Figure 7.21 EC2 Node 1 Executing Steal Algorithm.


Figure 7.22 EC2 Node 2 Executing Steal Algorithm.

Figure 7.23 EC2 Node 3 Executing Steal Algorithm.


Figure 7.24 EC2 Node 2 requesting EC2 Node 1 to download the specified range of object contents.

Figure 7.25 EC2 Node 1 showing the object ranges.


Figure 7.26 EC2 Node 2 showing the object ranges.


Figure 7.27 EC2 Node 3 showing the object ranges.


Snapshot 7.28 Insert module


Snapshot 7.29 Search and retrieve module

7.2 CONCLUSION
We have addressed cloud characteristics and summarised different multicast algorithms for distributed parallel and P2P systems. We have focused on Amazon EC2/S3, the most commonly used cloud platform, and revealed some multicast performance problems there when using the simplest algorithm, in which all EC2 compute nodes download files from S3 directly. There are two main types of problems with optimising multicast performance on clouds:
1. The network performance within clouds is dynamic: it changes not only between compute nodes but also, especially, between the cloud storage and each compute node.


2. Keeping network monitoring data and topology information up to date is almost impossible: it is difficult to know the clouds' underlying physical infrastructure and to construct one or more available-bandwidth-maximised spanning trees, which is a well-known optimisation technique for cluster systems.

Based on these findings, we have presented two high-performance multicast algorithms. These algorithms make it possible to transfer large amounts of data stored in S3 to multiple EC2 nodes efficiently. The proposed algorithms combine optimisation ideas from multicast algorithms used in distributed parallel and P2P systems, and they have three salient features:
(a) They can construct an overlay network on clouds without network topology information,
(b) They can optimise the total throughput between compute nodes dynamically, and
(c) They can increase the download throughput from cloud storage by letting nodes cooperate with each other, resembling work-stealing techniques.

We have implemented the two proposed algorithms and evaluated their performance on Amazon EC2/S3. The steal algorithm achieved higher throughput than the non-steal algorithm, since it could adapt to the dynamic changes in download throughput from S3.
