
International Journal of Computer Trends and Technology (IJCTT) - Volume 4, Issue 4, April 2013

Secure and Dependable Storage in Cloud Computing


#1 T. Bala Krishna, #2 Y. Venkateshwara Rao, #3 K. V. V. Satyanarayana
Department of Computer Science Engineering, K.L.University, Vaddeswaram, India
#1 Student, #2 Student, #3 Professor

Abstract:
Cloud computing has been developed as the next-generation architecture of IT. It is an Internet-based development and use of computer technology. Cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully dependable. In this article, we focus on cloud data storage security, which has always been a significant aspect of quality of service. We propose an efficient and flexible distributed storage verification scheme with explicit dynamic data support to ensure the correctness and accessibility of users' data in the cloud. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness assurance as well as data error localization. Error localization is a key requirement for eliminating errors in storage systems, i.e., the identification of misbehaving servers. Considering that cloud data are dynamic in nature, the proposed design additionally supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.

Keywords:
Data reliability, reliable distributed storage, error localization, data dynamics, Cloud Computing.

Introduction:
Cloud computing is an emerging computing model in which resources of the computing infrastructure are provided as services over the Internet. Several trends are opening up the era of Cloud Computing, which is an Internet-based development and use of computer technology. The software-as-a-service (SaaS) computing architecture is transforming data centers into pools of computing services on a huge scale. Moving data into the cloud offers great convenience to users, since they do not have to care about the complexities of direct hardware management. Among the pioneering Cloud Computing vendors, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) are both well-known examples. We propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee the data dependability [5].

This construction significantly reduces the communication and storage overhead as compared to the traditional replication-based file distribution techniques [8]. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness assurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s).

Since cloud service providers (CSPs) are separate organizational entities, data outsourcing actually relinquishes the owner's ultimate control over the fate of their data. There are various motivations for CSPs to behave unfaithfully toward cloud customers regarding the status of their outsourced data. Traditional cryptographic primitives for the purpose of data security protection cannot be directly adopted, and it is often insufficient to detect data corruption only when accessing the data. The task of auditing data correctness in a cloud environment can be formidable and expensive for data owners. To fully ensure data security and save the data owners' computation resources, we propose to enable publicly auditable cloud storage services through a Third Party Auditor (TPA). The TPA provides a transparent yet cost-effective method for establishing trust between the data owner and the cloud server [3]. This article is intended as a call for action, aiming to motivate further research on dependable cloud storage services and to enable public auditing services to become a reality.

We plan a set of building blocks, including recently developed cryptographic primitives (e.g., the homomorphic authenticator). Our proposed scheme enables the data owner to delegate the tasks of data file re-encryption and user secret key update to cloud servers without disclosing data contents or user access privilege information. We achieve this goal by exploiting and uniquely combining techniques and algorithms (attribute-based encryption (ABE), correctness verification and error localization, traditional replication-based file distribution, and adding random perturbations). Considering that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Our proposed scheme also has the salient properties of user access privilege confidentiality and user secret key accountability, and it achieves fine-grainedness, scalability, and data confidentiality for data access control in cloud computing. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.
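As a concrete illustration of the flow just described, the sketch below (Python, hypothetical names) shows how pre-computed per-server tokens, random challenges, and response comparison yield error localization. It uses a plain keyed hash (HMAC) in place of the paper's homomorphic token from a universal hash family, so it captures only the protocol structure, not the actual construction.

```python
# Minimal sketch of precompute / challenge / compare spot-checking with error
# localization. HMAC stands in for the homomorphic token; all names are illustrative.
import hashlib
import hmac
import secrets
from typing import Dict, List

def compute_token(key: bytes, blocks: List[bytes], indices: List[int]) -> bytes:
    """Token over the challenged block positions of one server's stored vector."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in indices:
        mac.update(i.to_bytes(4, "big") + blocks[i])
    return mac.digest()

class Owner:
    def __init__(self, server_vectors: Dict[str, List[bytes]], rounds: int, challenge_size: int):
        self.key = secrets.token_bytes(32)
        n_blocks = len(next(iter(server_vectors.values())))
        self.challenges: List[List[int]] = []
        self.tokens: List[Dict[str, bytes]] = []
        for _ in range(rounds):  # pre-compute tokens before outsourcing the data
            idx = [secrets.randbelow(n_blocks) for _ in range(challenge_size)]
            self.challenges.append(idx)
            self.tokens.append({name: compute_token(self.key, vec, idx)
                                for name, vec in server_vectors.items()})

    def audit(self, round_no: int, responses: Dict[str, bytes]) -> List[str]:
        """Compare each server's response token; any mismatch localizes that server."""
        expected = self.tokens[round_no]
        return [name for name, tok in responses.items()
                if not hmac.compare_digest(tok, expected[name])]
```

In each round, every server recomputes the token over its own stored vector for the challenged indices and returns it; `audit` then flags any server whose response disagrees with the pre-computed value.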

Implementation:

System Architecture:



1. System Model:
User: users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual consumers and organizations.
Cloud Service Provider (CSP): a CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live Cloud Computing systems [1].
Third Party Auditor (TPA): an optional TPA, who has expertise and capabilities that users may not have, is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request.
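A minimal sketch of the three entities and their relationships, assuming hypothetical class names; it encodes nothing beyond the roles listed above.

```python
# Hypothetical data model for the system entities described in this section.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CloudServiceProvider:          # owns and operates the distributed storage servers
    servers: List[str] = field(default_factory=list)

@dataclass
class ThirdPartyAuditor:             # optional; audits storage on the user's behalf
    name: str = "TPA"

@dataclass
class User:                          # individual consumer or organization
    file_id: str
    csp: CloudServiceProvider
    auditor: Optional[ThirdPartyAuditor] = None   # verification may be delegated
```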

2. File Retrieval and Error Recovery:


Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming that they return the correct response values [4]. Notice that our verification scheme is based on random spot-checking, so the storage correctness assurance is probabilistic; we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of pre-computed tokens and received response values can guarantee the identification of the misbehaving server(s).
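The following sketch illustrates systematic retrieval and single-erasure recovery under simplified assumptions: one XOR parity server stands in for the (m + k, m) erasure-correcting code, so it can repair exactly one data vector flagged by error localization. Names and sizes are illustrative.

```python
# Sketch: systematic layout retrieval with recovery of one flagged data vector.
from functools import reduce
from typing import List, Optional

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_vectors: List[bytes]) -> List[bytes]:
    """Servers 0..m-1 keep the data unchanged (systematic); server m keeps XOR parity."""
    return data_vectors + [reduce(xor, data_vectors)]

def retrieve(stored: List[bytes], bad: Optional[int] = None) -> bytes:
    """Rebuild the file from the first m servers, repairing one misbehaving server."""
    m = len(stored) - 1
    data = list(stored[:m])
    if bad is not None:                      # error localization told us which server
        healthy = [v for i, v in enumerate(stored) if i != bad]
        data[bad] = reduce(xor, healthy)     # recover the corrupted data vector
    return b"".join(data)

# Example: three equal-sized data vectors plus parity; server 1 misbehaves.
servers = encode([b"AAAA", b"BBBB", b"CCCC"])
servers[1] = b"????"                         # simulate a corrupted response
assert retrieve(servers, bad=1) == b"AAAABBBBCCCC"
```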


3. Third Party Auditing:


As discussed in our architecture, in case the user does not have the time, feasibility, or resources to perform the storage correctness verification, he can optionally delegate this task to an independent third-party auditor, making the cloud storage publicly verifiable [7]. However, as pointed out by recent work, to securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities toward user data privacy. Namely, the TPA should not learn users' data content through the delegated data auditing.
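A hedged sketch of the delegation, with hypothetical names: the auditor holds only challenge indices and expected token digests handed over by the owner and compares them with server responses, so it never touches file blocks. The privacy-preserving blinding that keeps the TPA from learning data content is not modeled here.

```python
# Illustrative-only delegation of verification to a third-party auditor.
from typing import Dict, List, Tuple

class Auditor:
    def __init__(self, rounds: List[Tuple[List[int], Dict[str, bytes]]]):
        # Each round: (challenged block indices, expected token per server),
        # both pre-computed and handed over by the data owner.
        self.rounds = rounds

    def audit(self, round_no: int, query_server) -> Dict[str, bool]:
        """query_server(name, indices) -> token bytes returned by that server."""
        indices, expected = self.rounds[round_no]
        return {name: query_server(name, indices) == token
                for name, token in expected.items()}
```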

4. Cloud Operations:

(1) Update Operation: In cloud data storage, sometimes the user may need to modify some data block(s) stored in the cloud; we refer to this operation as data update. In other words, for all the unused tokens, the user needs to exclude every occurrence of the old data block and replace it with the new one.

(2) Delete Operation: At times, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks can be replaced with zeros or some predetermined special blocks [2], [6].

(3) Append Operation: In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.
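A minimal sketch (hypothetical names) of the three dynamic operations on a file stored as fixed-size blocks; the bookkeeping the scheme also requires, such as updating parity vectors and the unused verification tokens, is omitted here.

```python
# Sketch of update, delete, and bulk append on an outsourced block list.
from typing import List

BLOCK_SIZE = 4
ZERO_BLOCK = b"\x00" * BLOCK_SIZE            # reserved "deleted" marker

class OutsourcedFile:
    def __init__(self, blocks: List[bytes]):
        self.blocks = list(blocks)

    def update(self, index: int, new_block: bytes) -> None:
        self.blocks[index] = new_block        # (1) update: replace the old block

    def delete(self, index: int) -> None:
        self.blocks[index] = ZERO_BLOCK       # (2) delete: special case of update

    def append(self, new_blocks: List[bytes]) -> None:
        self.blocks.extend(new_blocks)        # (3) bulk append at the end of the file

f = OutsourcedFile([b"b000", b"b001"])
f.update(1, b"new1")
f.delete(0)
f.append([b"b002", b"b003"])
```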


Conclusion:
The scope of this work is cloud data storage systems, in which users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it is always the first step toward fast recovery from storage errors and/or identifying potential threats of external attacks. To this end, the homomorphic token is introduced. The token computation function we are considering belongs to a family of universal hash functions, chosen to preserve the homomorphic properties, which can be perfectly integrated with the verification of erasure-coded data. Consequently, it is shown how to derive a challenge-response protocol for verifying the storage

References:

correctness as well as identifying misbehaving servers. The procedure for file retrieval and error recovery based on erasure-correcting code is also outlined.

[1] Amazon.com, "Amazon S3 availability event: July 20, 2008," online at http://status.aws.amazon.com/s3-20080720.html, July 2008.
[2] S. Wilson, "App Engine outage," online at http://www.cio-weblog.com/50226711/appengine outage.php, June 2008.
[3] B. Krebs, "Payment Processor Breach May Be Largest Ever," online at http://voices.washingtonpost.com/securityfix/2009/01/payment processor breach may b.html, Jan. 2009.
[4] A. Juels and B. S. Kaliski Jr., "PORs: Proofs of retrievability for large files," in Proc. of CCS '07, Alexandria, VA, October 2007, pp. 584-597.
[5] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable data possession at untrusted stores," in Proc. of CCS '07, Alexandria, VA, October 2007, pp. 598-609.
[6] M. A. Shah, M. Baker, J. C. Mogul, and R. Swaminathan, "Auditing to keep online storage services honest," in Proc. of HotOS '07, Berkeley, CA, USA: USENIX Association, 2007, pp. 1-6.
[7] M. A. Shah, R. Swaminathan, and M. Baker, "Privacy-preserving audit and extraction of digital contents," Cryptology ePrint Archive, Report 2008/186, 2008, http://eprint.iacr.org/.
[8] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in Proc. of SecureComm '08, 2008, pp. 1-10.

