
International Journal of Computer Trends and Technology (IJCTT) - Volume 4, Issue 4, April 2013

Dependable and Secure Storage Services in Cloud Computing


D. Veena Sanjitha#1, M. Himaswanthi#2, T.V.N. Sai Sindhura#3, K.V.V. Satyanarayana*

# CSE Department, KL University, Guntur, Andhra Pradesh
* CSE Department, KL University, Guntur, Andhra Pradesh

Abstract— Cloud storage allows users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Although the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks to the correctness of the data in the cloud. To address this problem and achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving servers. Considering that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

Keywords— Data integrity, dependable distributed storage, error localization, data dynamics, Cloud Computing

1. INTRODUCTION
Several trends are opening up the era of Cloud Computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the Software as a Service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it even possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they do not have to care about the complexities of direct hardware management. The pioneers of Cloud Computing

vendors, Amazon Simple Storage Service and Amazon Elastic Compute Cloud, are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift, however, is eliminating the responsibility of local machines for data maintenance at the same time. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. On the one hand, although the cloud infrastructures are much more powerful and reliable than personal computing devices, a broad range of both internal and external threats to data integrity still exists; examples of outages and data loss incidents at noteworthy cloud storage services appear from time to time. On the other hand, since users may not retain a local copy of outsourced data, there exist various incentives for cloud service providers (CSPs) to behave unfaithfully towards the cloud users regarding the status of their outsourced data. For instance, to increase the profit margin by reducing cost, it is possible for a CSP to discard rarely accessed data without being detected in a timely fashion. Similarly, a CSP may even attempt to hide data loss incidents so as to maintain a reputation. Therefore, although outsourcing data into the cloud is economically attractive for the cost and complexity of long-term large-scale data storage, its lack of strong assurance of data integrity and availability may impede its wide adoption by both enterprise and individual cloud users. In order to achieve the assurances of cloud data integrity and availability and enforce the quality of cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed.
However, the fact that users no longer have physical possession of data in the cloud prohibits the direct adoption of traditional cryptographic primitives for the purpose of data integrity protection. Hence, the verification of cloud storage correctness must be conducted without explicit knowledge of the whole data files. Meanwhile, cloud storage is not just a third-party data warehouse. The data stored in the cloud may not only be accessed but also be frequently updated by the users, through insertion, deletion, modification, appending, etc. Thus, it is also

ISSN: 2231-2803

http://www.ijcttjournal.org

Page 608

imperative to support the integration of this dynamic feature into cloud storage correctness assurance, which makes the system design even more challenging. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperative, and distributed manner. It is more advantageous for individual users to store their data redundantly across multiple physical servers so as to reduce the data integrity and availability threats. Thus, distributed protocols for storage correctness assurance will be of most importance in achieving robust and secure cloud storage systems. However, such an important area remains to be fully explored in the literature. Recently, the importance of ensuring remote data integrity has been highlighted by research works under different system and security models. These techniques, while useful to ensure storage correctness without having users possess local data, all focus on the single-server scenario. They may be useful for quality-of-service testing, but do not guarantee data availability in case of server failures. Although directly applying these techniques to distributed storage (multiple servers) may be straightforward, the resulting storage verification overhead would be linear in the number of servers. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. However, while providing efficient cross-server storage verification and data availability insurance, these schemes all focus on static or archival data. As a result, their capabilities of handling dynamic data remain unclear, which inevitably limits their full applicability in cloud storage scenarios.
In this paper, we propose an effective and flexible distributed storage verification scheme with explicit dynamic data support to ensure the correctness and availability of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee data dependability against Byzantine servers, where a storage server may fail in arbitrary ways. This construction drastically reduces the communication and storage overhead as compared to traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s). In order to strike a good balance between error resilience and data dynamics, we further explore the algebraic property of our token computation and erasure-coded data, and demonstrate how to efficiently support dynamic operations on data blocks, while maintaining the same level of storage correctness assurance. In order to save the time, computation resources, and even the related online burden of users, we also provide an extension of the proposed main scheme to support third-party auditing, where users can safely delegate the integrity checking tasks to a TPA and be worry-free to use the services. Our work is among the first few in this field to consider distributed data storage security in Cloud Computing.
Our contribution is summarized as the following three aspects: 1) Compared to many of its predecessors, which only provide binary results about the storage status across the distributed servers, the proposed scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). 2) Unlike most prior works for ensuring remote data integrity, the new scheme further supports secure and efficient dynamic operations on data blocks, including update, delete, and append. 3) The experiment results demonstrate that the proposed scheme is highly efficient. Extensive security analysis shows our scheme is resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
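To make the erasure-coded file distribution idea concrete, the following is a minimal sketch, not the paper's actual construction: the field size `P`, the function names, and the Lagrange-based systematic layout are all our own simplifying assumptions. It encodes m data symbols (each < P) into m + k symbols, so that any m surviving shares suffice to rebuild the data.

```python
P = 257  # small prime field modulus, chosen only for this sketch

def lagrange_eval(points, x):
    """Evaluate the unique degree-(len(points)-1) polynomial through
    `points` = [(xi, yi), ...] at x, with arithmetic over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, k):
    """Systematic encode: data symbols sit at points 1..m, and k parity
    symbols are the same polynomial evaluated at points m+1..m+k."""
    m = len(data)
    pts = list(zip(range(1, m + 1), data))
    return data + [lagrange_eval(pts, m + 1 + j) for j in range(k)]

def recover(shares, m):
    """Rebuild the m data symbols from any m surviving (point, value) shares."""
    return [lagrange_eval(shares[:m], x) for x in range(1, m + 1)]
```

For example, encoding [10, 20, 30] with k = 2 yields the five-share codeword [10, 20, 30, 40, 50], and any three shares recover the original symbols. Replication with the same two-failure tolerance would store three full copies, versus 5/3 of the file here, which is the storage saving the paragraph above refers to.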

2. PROBLEM STATEMENT
2.1 System Model

A representative network architecture for cloud storage service is illustrated in Fig. 1. Three different network entities can be identified as follows:

User: an entity who has data to be stored in the cloud and relies on the cloud for data storage and computation; can be either an enterprise or an individual customer.

Cloud Server (CS): an entity that is managed by a cloud service provider (CSP) to provide data storage service and has significant storage space and computation resources.

Third-Party Auditor (TPA): an optional entity, who has expertise and capabilities that users may not have, and is trusted to assess and expose risk of cloud storage services on behalf of the users upon request.

In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperative, and distributed manner. Data redundancy can be employed with the technique of erasure-correcting code to further tolerate faults or server crashes as the user's data grow in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data. The most general forms of these operations we are considering are block update, delete, insert, and append. Note that in this paper, we put more focus on the support of file-oriented cloud applications other than non-file
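The block-level operations listed above can be sketched as a toy interface; the class and method names below are ours, purely for illustration, and omit everything the real scheme adds on top (parities, tokens, authentication).

```python
from dataclasses import dataclass, field

@dataclass
class CloudServer:
    """Toy cloud server holding a user's file as an ordered list of blocks."""
    blocks: list = field(default_factory=list)

    # The four block-level operations considered in this paper:
    def append(self, block):
        self.blocks.append(block)

    def insert(self, i, block):
        self.blocks.insert(i, block)

    def update(self, i, block):
        self.blocks[i] = block

    def delete(self, i):
        del self.blocks[i]
```

In the full scheme, each such operation must also be reflected in the redundancy parity vectors and the pre-computed verification tokens, which is exactly what makes dynamic data support non-trivial.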


application data, such as social networking data. In other words, the cloud data we are considering are not expected to be rapidly changing within a relatively short period. As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can make continuous correctness assurance (to enforce the cloud storage service-level agreement) of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility, or resources to monitor their data online, they can delegate the data auditing tasks to an optional trusted TPA of their respective choices. However, to securely introduce such a TPA, any possible leakage of users' outsourced data towards the TPA through the auditing protocol should be prohibited. In our model, we assume that the point-to-point communication channels between each cloud server and the user are reliable, which can be achieved in practice with little overhead. These authentication handshakes are omitted in the following presentation.

Fig. 1: Cloud storage service architecture

2.2 Adversary Model

From the users' perspective, the adversary model has to capture all kinds of threats towards their cloud data integrity. Because cloud data do not reside at the users' local site but at the CSP's address domain, these threats can come from two different sources: internal and external attacks. For internal attacks, a CSP can be self-interested, untrusted, and possibly malicious. Not only may it want to move data that has not been or is rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures, and so on. For external attacks, data integrity threats may come from outsiders who are beyond the control domain of the CSP, for example, the economically motivated attackers.

They may compromise a number of cloud data storage servers in different time intervals and subsequently be able to modify or delete users' data while remaining undetected by the CSP. Therefore, we consider that the adversary in our model has the following capabilities, which capture both external and internal threats towards the cloud data integrity. Specifically, the adversary is interested in continuously corrupting the user's data files stored on individual servers; this corresponds to the threats from external attacks. In the worst-case scenario, the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally consistent. In fact, this is equivalent to the internal attack case where all servers are assumed to collude from the early stages of application or service deployment to hide a data loss or corruption incident.

3. ENSURING CLOUD DATA STORAGE

In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it is always the first step to quickly recover the storage errors and/or identify potential threats of external attacks. To address these problems, our main scheme for ensuring cloud data storage is presented in this section. The first part of the section is devoted to a review of basic tools from coding theory that are needed in our scheme for file distribution across cloud servers. Then, the homomorphic token is introduced. The token computation function we are considering belongs to a family of universal hash functions, chosen to preserve the homomorphic properties, which can be perfectly integrated with the verification of erasure-coded data. Subsequently, it is shown how to derive a challenge-response protocol for verifying the storage correctness as well as identifying misbehaving servers. The procedure for file retrieval and error recovery based on


erasure-correcting code is also outlined. Finally, we describe how to extend our scheme to third-party auditing with only slight modification of the main design.

3.1 Challenge Token Pre-computation

In order to achieve assurance of data storage correctness and data error localization at the same time, our scheme entirely relies on the pre-computed verification tokens. The main idea is as follows: before file distribution, the user pre-computes a certain number of short verification tokens on individual vectors, each token covering a random subset of data blocks. Later, when the user wants to make sure of the storage correctness for the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving a challenge, each cloud server computes a short signature over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens pre-computed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also be a valid codeword determined by the secret matrix.

3.2 File Retrieval and Error Recovery

Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the primary servers, assuming that they return the correct response values. Notice that our verification scheme is based on random spot-checking, so the storage correctness assurance is a probabilistic one. However, by choosing system parameters appropriately and conducting enough rounds of verification, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of misbehaving server(s) (again with high probability), which will be discussed shortly. Therefore, the user can always ask servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in Algorithm 3, as long as the number of identified misbehaving servers is less than the redundancy of the erasure code. (Otherwise, there is no way to recover the corrupted blocks due to lack of redundancy, even if we know the positions of the misbehaving servers.) The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.

4. PROVIDING SUPPORT FOR DYNAMIC DATA OPERATIONS

So far, we assumed that the stored file represents static or archived data. This model may fit some application scenarios, such as libraries and scientific datasets. However, in cloud data storage, there are many potential scenarios where data stored in the cloud are dynamic, like documents, photos, or log files. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete, and append to modify the data file while maintaining the storage correctness assurance. Since data do not reside at the users' local site but at the cloud service provider's address domain, supporting dynamic data operations is quite challenging. On the one hand, the CSP needs to process the data dynamics request without knowing the secret keying material. On the other hand, users need to ensure that all the dynamic data operation requests have been faithfully processed by the CSP. To address this problem, we briefly explain our approach here and provide the details later. For any dynamic data operation, the user must first generate the corresponding resulting file blocks and parities. This part of the operation has to be carried out by the user, since only he knows the secret matrix P. Besides, to ensure that the changes of data blocks are correctly reflected in the cloud address domain, the user also needs to modify the corresponding storage verification tokens to accommodate the changes on data blocks. Only with the accordingly changed storage verification tokens can the previously discussed challenge-response protocol be carried out successfully even after data dynamics. In other words, these verification tokens help ensure that the CSP correctly executes the processing of any dynamic data operation request. Given this design methodology, the straightforward and trivial way to support these operations would be for the user to download all the data from the cloud servers and re-compute the whole parity blocks as well as the verification tokens. This would clearly be highly inefficient. In this section, we show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage, by utilizing the linear property of the Reed-Solomon code and the verification token construction.

CONCLUSION

In this paper, we investigate the problem of data security in cloud data storage, which is essentially a distributed storage system. To achieve the assurances of cloud data integrity and availability and enforce the quality of dependable cloud storage service for users, we propose an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee the data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage
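The token pre-computation, challenge-response audit, and homomorphic token update described in Sections 3.1 and 4 can be illustrated with the following simplified sketch. It is not the paper's actual construction: the modulus, the parameter names (`alpha`, `r`, `seed`), and the use of plain integer blocks instead of the secret-matrix codewords are all our own assumptions for illustration.

```python
import random

P = 2**31 - 1  # a prime modulus, chosen here only for illustration

def make_token(vector, indices, alpha):
    """Keyed linear combination sum_q alpha^(q+1) * vector[i_q] mod P,
    standing in for the universal-hash-style token/signature."""
    return sum(pow(alpha, q + 1, P) * vector[i]
               for q, i in enumerate(indices)) % P

def precompute(vectors, r, seed):
    """User side, before file distribution: choose r challenge indices and
    a secret key alpha, then store one token per server vector."""
    rng = random.Random(seed)
    indices = [rng.randrange(len(vectors[0])) for _ in range(r)]
    alpha = rng.randrange(1, P)
    return indices, alpha, [make_token(v, indices, alpha) for v in vectors]

def audit(server_vectors, indices, alpha, tokens):
    """Challenge all servers; return the ids whose recomputed signature does
    not match the pre-computed token (i.e., error localization)."""
    return [s for s, v in enumerate(server_vectors)
            if make_token(v, indices, alpha) != tokens[s]]

def update_token(token, q, old_block, new_block, alpha):
    """Dynamic data support: because the token is linear in the blocks, an
    update of challenged block i_q shifts the token by alpha^(q+1)*(new-old),
    with no need to re-read the other blocks."""
    return (token + pow(alpha, q + 1, P) * (new_block - old_block)) % P
```

For instance, with challenge indices [0, 2] and key alpha = 2, a server storing [2, 7, 1, 8] yields token 2·2 + 4·1 = 8. If the block at index 2 is tampered to 9, the recomputed signature becomes 40 and the server is flagged, while `update_token(8, 1, 1, 9, 2)` legitimately moves the stored token to that same 40 when the user himself performs the update.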


correctness insurance and data error localization, i.e., whenever data corruption has been detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s). Considering the time, computation resources, and even the related online burden of users, we also provide an extension of the proposed main scheme to support third-party auditing, where users can safely delegate the integrity checking tasks to third-party auditors and be worry-free to use the cloud storage services. Through detailed security analysis and extensive experiment results, we show that our scheme is highly efficient and resilient to Byzantine failures, malicious data modification attacks, and even server colluding attacks.

ACKNOWLEDGMENT

The authors thank everyone who contributed to this work, and acknowledge the books and works referred to during the preparation of this paper.

REFERENCES

[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring data storage security in cloud computing," in Proc. of IWQoS'09, July 2009, pp. 1-9.
[2] Amazon.com, "Amazon web services (AWS)," online at http://aws.amazon.com/, 2009.
[3] Sun Microsystems, Inc., "Building customer trust in cloud computing with transparent security," online at https://www.sun.com/offers/details/sun transparency.xml, November 2009.
[4] M. Arrington, "Gmail disaster: Reports of mass email deletions," online at http://www.techcrunch.com/2006/12/28/gmail-disasterreports-of-mass-email-deletions/, December 2006.
[5] J. Kincaid, "MediaMax/TheLinkup closes its doors," online at http://www.techcrunch.com/2008/07/10/mediamaxthelinkup-closes-its-doors/, July 2008.
[6] Amazon.com, "Amazon S3 availability event: July 20, 2008," online at http://status.aws.amazon.com/s320080720.html, July 2008.
[7] S. Wilson, "Appengine outage," online at http://www.cio-weblog.com/50226711/appengine outage.php, June 2008.
[8] B. Krebs, "Payment processor breach may be largest ever," online at http://voices.washingtonpost.com/securityfix/2009/01/payment processor breach may b.html, Jan. 2009.
[9] A. Juels and B. S. Kaliski, Jr., "PORs: Proofs of retrievability for large files," in Proc. of CCS'07, Alexandria, VA, October 2007, pp. 584-597.
[10] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable data possession at untrusted stores," in Proc. of CCS'07, Alexandria, VA, October 2007, pp. 598-609.
[11] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to keep online storage services honest," in Proc. of HotOS'07, Berkeley, CA, USA: USENIX Association, 2007, pp. 1-6.

