SAMEER BANSOD
NIKHIL RATHOD
MUKESH BURADKAR
KAMLESH ADHAU

UNDER THE GUIDANCE OF
MR. SAGAR BADHIYE

DEPARTMENT OF COMPUTER TECHNOLOGY
YESHWANTRAO CHAVAN COLLEGE OF ENGINEERING
HINGNA ROAD, WANADONGRI, NAGPUR-441110
YEAR 2013-2014
Table of Contents

1. Abstract
2. Problem Definition
3. Aim & Objective
4. Literature Review
   4.1 Existing System
   4.2 Disadvantages
   4.3 Cloud Infrastructure
   4.4 Application State
5. Scope
   5.1 Advantage
6. High Level Design
   6.1 Data Flow Diagram
   6.2 System Architecture
7. Plan of Action
8. Software & Hardware Requirements
References
1. Abstract

Infrastructure as a Service (IaaS) cloud computing has transformed the way we think of acquiring resources by introducing a simple change: allowing users to lease computational resources from the cloud provider's datacenter for a short time by deploying virtual machines (VMs) on these resources. This new model raises new challenges in the design and development of IaaS middleware. One of those challenges is the need to deploy a large number (hundreds or even thousands) of VM instances simultaneously. Once the VM instances are deployed, another challenge is to simultaneously take a snapshot of many images and transfer them to persistent storage to support management tasks, such as suspend-resume and migration. With datacenters growing rapidly and configurations becoming heterogeneous, it is important to enable efficient concurrent deployment and snapshotting that are at the same time hypervisor-independent and ensure maximum compatibility with different configurations. This paper addresses these challenges by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently in a hypervisor-independent fashion, ensuring high portability for different configurations.

2. Problem Definition

To provide the basic functionalities for the use of Virtual Machines (VMs) over the cloud, given as:
• Multideployment: In the operation of Infrastructure as a Service (IaaS), a large number of VMs must be deployed on many nodes of a datacenter at the same time, starting from a set of VM images previously stored in a persistent fashion.
• Multisnapshotting: Many VM images that were locally modified need to be concurrently transferred to stable storage with the purpose of capturing the VM state for later use (e.g., for checkpointing or off-line migration to another cluster or cloud).
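The "lazy transfer scheme" named in the abstract can be illustrated with a minimal sketch: the VM boots immediately, and image chunks are pulled from the repository only on first access, then cached locally. The `LazyMirror` class and the chunk granularity below are illustrative assumptions, not the actual implementation.

```python
# A minimal sketch of lazy transfer: image chunks are fetched from the
# repository only when the hypervisor first reads them, then cached on
# local disk. Names and chunk sizes are illustrative assumptions.

class LazyMirror:
    def __init__(self, remote_image):
        self.remote = remote_image   # chunk id -> bytes, held by the repo
        self.local = {}              # chunks mirrored to local disk so far
        self.fetched = 0             # network transfers actually performed

    def read(self, chunk_id):
        if chunk_id not in self.local:       # first touch: fetch on demand
            self.local[chunk_id] = self.remote[chunk_id]
            self.fetched += 1
        return self.local[chunk_id]          # later reads hit the local copy

image = {i: bytes([i % 256]) * 4 for i in range(1000)}  # 1,000-chunk image
vm_disk = LazyMirror(image)
for i in (0, 1, 0, 7):          # booting touches only a handful of chunks
    vm_disk.read(i)
print(vm_disk.fetched, "of", len(image), "chunks transferred")  # 3 of 1000
```

Only the chunks actually accessed cross the network, which is why deployment cost no longer scales with full image size.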
3. Aim and Objectives

In this project, we are creating a cloud infrastructure that allows users to lease computational resources from a cloud provider. Our aim is to create and implement load-balancing mechanisms for managing many users at the same time and within the context of time slices. We want to reduce the contention on the current system and allow the maximum number of users to access VMs with quick resume, restart, and suspend operations. One resource can be made available to all users; by anticipating their requirements, we provide resources without conflict. A focus on the use of reusable frameworks provides cost and time benefits, serving customer requirements today while anticipating future needs.
4. Literature Survey

In recent years, Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and computation time from large datacenters. Leasing of computation time is accomplished by allowing users to deploy virtual machines (VMs) on the datacenter's resources. Since the user has complete control over the configuration of the VMs using on-demand deployments, IaaS leasing is equivalent to purchasing dedicated hardware but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or shrink their resources according to their computational needs, by using external resources to complement their local resource base.

This emerging model leads to new challenges relating to the design and development of IaaS systems. One of the commonly occurring patterns in the operation of IaaS is the need to deploy a large number of VMs on many nodes of a datacenter at the same time, starting from a set of VM images previously stored in a persistent fashion. For example, this pattern occurs when the user wants to deploy a virtual cluster that executes a distributed application or a set of environments to support a workflow. We refer to this pattern as multideployment. Such a large deployment of many VMs at once can take a long time. This problem is particularly acute for VM images used in scientific computing, where image sizes are large (from a few gigabytes up to more than 10 GB). A typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take tens of minutes to hours, not counting the time to boot the operating system itself. This can make the response time of the IaaS installation much longer than acceptable and erase the on-demand benefits of cloud computing.

Once the VM instances are running, a similar challenge applies to snapshotting the deployment: many VM images that were locally modified need to be concurrently transferred to stable storage with the purpose of capturing the VM state for later use (e.g., for checkpointing or off-line migration to another cluster or cloud). We refer to this pattern as multisnapshotting. Conventional snapshotting techniques rely on custom VM image file formats to store only incremental differences in a new file that depends on the original VM image as the backing file. When taking frequent snapshots for a large number of VMs, such approaches generate a large number of files and interdependencies among them, which are difficult to manage and which interfere with the ease-of-use rationale behind clouds. Furthermore, custom image formats are not standardized and can be used with specific hypervisors only, which limits the ability to easily migrate VMs among different hypervisors. Therefore, with growing datacenter trends and tendencies to federate clouds, where configurations are becoming more and more heterogeneous, multisnapshotting must be handled in a transparent and portable fashion that hides the interdependencies of incremental differences and exposes standalone VM images.
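A back-of-the-envelope calculation shows why broadcasting full images before boot erases the on-demand benefit. The image size, VM count, and link bandwidth below are illustrative assumptions, not measurements from the text.

```python
# Rough estimate of multideployment cost when every node receives the
# full image before boot (the conventional approach). All figures are
# illustrative assumptions, and contention/protocol overhead is ignored.

def broadcast_deploy_time(num_vms, image_gb, link_gbit_per_s):
    """Seconds to push num_vms full copies of an image_gb image
    through one shared repository link of link_gbit_per_s."""
    total_bits = num_vms * image_gb * 8 * 10**9
    return total_bits / (link_gbit_per_s * 10**9)

# 1,000 VMs, a 10 GB scientific-computing image, a 10 Gbit/s link:
t = broadcast_deploy_time(1000, 10, 10)
print(f"{t / 3600:.1f} hours")   # 2.2 hours before any VM even boots
```

Even under these generous assumptions, the transfer alone lands in the "tens of minutes to hours" range the text describes, before counting OS boot time.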
In addition to incurring significant delays and raising manageability issues, these patterns may also generate high network traffic that interferes with the execution of applications on leased resources and generates high utilization costs for the user. This paper proposes a distributed virtual file system specifically optimized for both the multideployment and multisnapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside, while keeping maximum portability among different hypervisor configurations.

4.1 Existing System

The huge computational potential offered by large distributed systems is hindered by poor data-sharing scalability. One requirement is the need to efficiently cope with massive unstructured data (organized as huge sequences of bytes, i.e., BLOBs that can grow to terabytes) in very large-scale distributed systems, while maintaining very high data throughput for highly concurrent, fine-grain data accesses. In the existing system it is not possible to build a scalable, high-performance distributed data-storage service that facilitates data sharing at large scale.

4.2 Disadvantages

• Gives less performance and storage space.
• Network traffic consumption is also very high, since the system does not track application status.

4.3 Cloud Infrastructure

Cloud computing is a "buzz word" covering a wide variety of aspects such as deployment, load balancing, provisioning, and data and processing outsourcing. Clouds have been defined simply as virtualized hardware and software plus the earlier monitoring and provisioning technologies. The role of virtualization in clouds is also emphasized by identifying it as a key component. IaaS platforms are typically built on top of clusters made out of loosely coupled commodity hardware that minimizes per-unit cost and favours low power over maximum speed. Disk storage (cheap hard drives with capacities in the order of several hundred GB) is attached to each machine, while the machines are interconnected with standard Ethernet links.
4.4 Application State

The state of the VM deployment is defined at each moment in time by two main components: the state of each of the VM instances and the state of the communication channels between them (opened sockets, in-transit network packets, virtual topology, etc.). While several methods have been established in the virtualization community to capture the state of a running VM (CPU registers, RAM, state of devices, etc.), capturing the global state of the communication channels is difficult and still an open problem. Thus, in the most general case, saving the application state implies saving both the state of all VM instances and the state of all active communication channels among them. In order to avoid this issue, the general case is usually simplified such that the application state is reduced to the sum of the states of the VM instances (Model 2), under the assumption that a fault-tolerant networking protocol is used that is able to restore communication channels and resend lost information. Any in-transit network traffic is discarded.

Even so, for VM instances that need large amounts of memory, the necessary storage space can explode to huge sizes. For example, saving 2 GB of RAM for 1,000 VMs consumes 2 TB of space for a single point-in-time deployment checkpoint, which is unacceptable. Model 2 can therefore be further simplified such that the VM state is represented only by the virtual disk attached to it (Model 3), which is used to store only minimal information about the state, such as configuration files that describe the environment and temporary files that were generated by the application. This information is later used to reboot and reinitialize the software stack running inside the VM instance. Such an approach has two important practical benefits: (i) huge reductions in the size of the state, since the contents of RAM, CPU registers, and the like do not need to be saved; and (ii) portability, since the VM can be restored on another host without having to worry about restoring the state of hardware devices that are not supported or are incompatible between different hypervisors.

In order to provide persistent storage, a dedicated repository is deployed either as a centralized or as a distributed storage service running on dedicated storage nodes. The repository is responsible for storing the VM images persistently in a reliable fashion and provides the means for users to manipulate them: upload, download, delete, and so forth. The compute machines are configured with proper virtualization technology, in terms of both hardware and software, such that they are able to host the VMs. With the recent explosion in cloud computing demands, there is an acute need for scalable storage.
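The 2 GB x 1,000 VMs figure can be worked out directly; the 200 MB of modified disk data per VM used for the Model 3 comparison is an illustrative assumption, not a number from the text.

```python
# Checkpoint-size arithmetic for the example in the text: full RAM
# capture versus disk-only (Model 3) snapshots. The 200 MB of dirty
# disk data per VM is an illustrative assumption.

GB = 2**30
TB = 2**40

num_vms = 1000
ram_per_vm = 2 * GB

# RAM contents included in every VM's saved state:
ram_checkpoint = num_vms * ram_per_vm
print(f"{ram_checkpoint / TB:.2f} TB per checkpoint")   # 1.95 TB (the "2 TB")

# Model 3: only locally modified disk data is saved.
dirty_per_vm = 200 * 2**20                               # assumed 200 MB/VM
disk_checkpoint = num_vms * dirty_per_vm
print(f"{disk_checkpoint / GB:.1f} GB per checkpoint")   # 195.3 GB
```

Under these assumptions, a disk-only checkpoint is roughly an order of magnitude smaller than one that includes RAM, which is the motivation for Model 3.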
5. Scope

A distributed virtual file system specifically optimized for both the multideployment and multisnapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. We introduce a series of design principles that optimize the multideployment and multisnapshotting patterns and describe how our design can be integrated with IaaS infrastructures. We show how to realize these design principles by building a virtual file system that leverages versioning-based distributed storage services. To illustrate this point, we describe an implementation on top of BlobSeer, a versioning storage service specifically designed for high throughput under concurrency.

5.1 Advantage

Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside.
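The core property of a versioning-based storage service can be shown with a toy sketch: a write never overwrites published data, but produces a new version that shares every untouched chunk with its parent. The API and 4-byte chunks below are illustrative, not BlobSeer's actual interface.

```python
# Toy illustration of versioning with chunk sharing: each write
# publishes a new immutable version; unmodified chunks are shared with
# the parent version rather than copied. Not BlobSeer's real API.

CHUNK = 4  # bytes per chunk, deliberately tiny for demonstration

class VersioningStore:
    def __init__(self, data):
        self.versions = [[data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]]

    def read(self, version):
        return b"".join(self.versions[version])

    def write(self, version, offset, data):
        """Publish a new version; only modified chunks get new copies."""
        chunks = list(self.versions[version])   # shallow copy: chunks shared
        for i, byte in enumerate(data):
            idx, off = divmod(offset + i, CHUNK)
            patched = bytearray(chunks[idx])
            patched[off] = byte
            chunks[idx] = bytes(patched)
        self.versions.append(chunks)
        return len(self.versions) - 1           # id of the new snapshot

store = VersioningStore(b"AAAABBBBCCCC")
v1 = store.write(0, 4, b"XXXX")                 # touches only chunk 1
print(store.read(0))                             # b'AAAABBBBCCCC' intact
print(store.read(v1))                            # b'AAAAXXXXCCCC'
print(store.versions[0][0] is store.versions[v1][0])  # True: chunk 0 shared
```

Because old versions stay readable and unmodified chunks are never duplicated, every snapshot can be exposed as a standalone image without paying full-copy storage costs.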
6. High Level Design

6.1 Data Flow Diagram

[FIG 2. SYSTEM FLOW DIAGRAM: Resource 1 and Resource 2 are allocated to VMs, which are backed by a centralized data storage.]

6.2 System Architecture

[FIG 1. SYSTEM ARCHITECTURE: The user system registers and gets authorization to store resources; requests are passed through the datacenter control API to the hypervisor, which serves the requested files from the local disk.]
7. Plan of Action

The simplified architecture of a cloud that integrates our approach is depicted in Figure 1. The typical elements found in the cloud are illustrated with a light background, while the elements that are part of our proposal are highlighted by a darker background. Each compute node runs a hypervisor that is responsible for running the VMs. The reads and writes of the hypervisor are trapped by the mirroring module, which is responsible for on-demand mirroring and snapshotting and relies on both the local disk and the distributed versioning storage service to do so. A distributed versioning storage service that supports cloning and shadowing is deployed on the compute nodes and consolidates parts of their local disks into a common storage pool. The cloud client has direct access to the storage service and is allowed to upload and download images from it; every uploaded image is automatically striped. Furthermore, the cloud client interacts with the cloud middleware through a control API that enables a variety of management tasks, including deploying an image on a set of compute nodes, dynamically adding or removing compute nodes from that set, and snapshotting individual VM instances or the whole set. The cloud middleware in turn coordinates the compute nodes to achieve the aforementioned management tasks. The cloud middleware interacts directly with both the hypervisor, telling it when to start and stop VMs, and the mirroring module, telling it what image to mirror from the repository, when to create a new image clone (CLONE), and when to persistently store its local modifications (COMMIT).

Both CLONE and COMMIT are control primitives that result in the generation of a new, fully independent VM image that is globally accessible through the storage service and can be deployed on other compute nodes or manipulated by the client. CLONE and COMMIT can also be exposed by the cloud middleware at the user level through the control API for fine-grained control over snapshotting. A global snapshot of the whole application, which involves taking a snapshot of all VM instances in parallel, is performed in the following fashion. The first time the snapshot is taken, CLONE is broadcast to all mirroring modules, followed by COMMIT. Once a clone has been created for each VM instance, subsequent global snapshots are performed by issuing each mirroring module a COMMIT to its corresponding clone.

This approach enables snapshotting to be leveraged in interesting ways. For example, assume a scenario where a complex distributed application needs to be debugged. Running the application repeatedly and waiting for it to reach the point where the bug happens might be prohibitively expensive. Since all image snapshots are independent entities, they can be either collectively or independently analyzed and modified in an attempt to fix the bug. Once a fix is made, the application can safely resume from the point where it left off, right before the bug happens. If the attempt was not successful, the approach can continue iteratively until a fix is found. Such an approach is highly useful in practice at large scale, because complex synchronization bugs tend to appear only in large deployments and are usually not triggered during the test phase, which is usually performed at smaller scale.
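The CLONE/COMMIT global-snapshot protocol described above can be sketched as follows. The repository is a plain in-memory dict standing in for the distributed versioning storage service, and all class and method names are illustrative, not the real API.

```python
# Sketch of the CLONE/COMMIT snapshot protocol: first global snapshot
# broadcasts CLONE then COMMIT; later snapshots only COMMIT to each
# instance's existing clone. Names are illustrative, not the real API.

class Repository:
    def __init__(self):
        self.images = {}          # image name -> list of committed snapshots

class MirroringModule:
    """Per compute node: traps the hypervisor's writes and keeps local
    modifications until COMMIT pushes them to the repository."""
    def __init__(self, repo, base_image):
        self.repo = repo
        self.image_name = None                                # set by CLONE
        self.local_state = dict(repo.images[base_image][-1])  # mirrored view

    def write(self, block, data):
        """A trapped hypervisor write: stays local until COMMIT."""
        self.local_state[block] = data

    def clone(self, name):
        """CLONE: register a new, fully independent image in the repo."""
        self.image_name = name
        self.repo.images[name] = []

    def commit(self):
        """COMMIT: persist local modifications as a new snapshot."""
        self.repo.images[self.image_name].append(dict(self.local_state))

def global_snapshot(modules, first_time):
    if first_time:                       # first snapshot: CLONE broadcast
        for i, m in enumerate(modules):
            m.clone(f"vm-{i}-clone")
    for m in modules:                    # every snapshot ends in COMMIT
        m.commit()

repo = Repository()
repo.images["base"] = [{0: b"os", 1: b"app"}]      # pre-uploaded base image
vms = [MirroringModule(repo, "base") for _ in range(3)]
vms[0].write(1, b"app-v2")                          # one VM modifies its disk
global_snapshot(vms, first_time=True)
print(sorted(repo.images))   # base image plus one independent clone per VM
```

Each clone is an independently addressable image in the repository, which is what allows the snapshots to be analyzed, modified, and redeployed individually in the debugging scenario above.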
8. Software and Hardware Requirements

8.1 Hardware Requirements
CPU type: Dual Core
Clock speed: 2.65 GHz
RAM size: 2 GB
Hard disk capacity: 40 GB

8.2 Software Requirements
Operating System: Windows
Language: Java (JDK 1.6)
Back End: MS-Access
Documentation: MS-Office
References

1. Going Back and Forth: Efficient Multideployment and Multisnapshotting on Clouds. www.nimbusproject.org/files/nicolae_hpdc2011.pdf (also: http://hal.inria.fr/docs/00/57/06/82/PDF/final-paper.pdf)
2. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. A view of cloud computing. Communications of the ACM, 53(4):50-58, 2010.
3. K. Keahey and T. Freeman. Science clouds: Early experiences in cloud computing for scientific applications. In CCA'08: Proceedings of the 1st Conference on Cloud Computing and Its Applications, 2008.
4. B. Claudel, G. Huard, and O. Richard. TakTuk, adaptive deployment of remote executions. In HPDC'09: Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing. ACM.
5. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
6. Hypervisor Alternative: http://siliconangle.com/blog/2013/09/19/red-hat-teams-up-with-dotcloud-topromote-open-hyperviper-alternative/
Guidelines

1. The paper should be A4 size with margins: Top: 1", Bottom: 1.2", Left: 1.5", Right: 1.25".
2. Written matter: Times New Roman 12 (justified); line spacing 1.5; paragraph spacing 2 lines.
3. Headings in the chapters should have size Times New Roman 14, bold, center. Subtitles: Times New Roman 12, bold, left aligned.
4. Page nos. should be at the center of the bottom of each page.
5. Report should be spiral bound with white plastic cover pages only.
6. Number of copies: 02 (Project Guide + Project Coordinator).
7. Seminar report should be as per the guidelines only.