ABSTRACT:
In the EXISTING SYSTEM, there is little security provided by the Cloud server for data safety. Where security does exist, the third party auditor (TPA) must be allowed to access the entire data packets for verification. In the PROPOSED SYSTEM, the Cloud server splits the file into batches, which are then encrypted. The encrypted batches are kept on different Cloud servers, and their keys are distributed across different key servers. The encrypted batches are also kept on replica servers as a backup. The data owner converts the encrypted data into bytes and adds parity bits in order to prevent the TPA from accessing the original data. The Cloud server generates a token from the parity-added encrypted data and compares it with the signature provided to the TPA to verify data integrity. We also implement an Erasure Code for the backup of the data. The MODIFICATION that we propose is encryption of the data by the data owner before it reaches the Cloud server.
Virtualization: Cloud computing virtualizes systems by pooling and sharing resources. Systems and storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility. Several trends are opening up the era of cloud computing, which is an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the Software as a Service (SaaS) computing architecture, are transforming data centers into pools of computing services on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers.

To achieve assurances of cloud data integrity and availability and to enforce the quality of cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. However, the fact that users no longer have physical possession of data in the cloud prohibits the direct adoption of traditional cryptographic primitives for data integrity protection. Hence, the verification of cloud storage correctness must be conducted without explicit knowledge of the whole data files. Meanwhile, cloud storage is not just a third-party data warehouse: the data stored in the cloud may not only be accessed but also frequently updated by the users, through insertion, deletion, modification, appending, etc. It is therefore imperative to integrate this dynamic feature into cloud storage correctness assurance, which makes the system design even more challenging. Last but not least, the deployment of cloud computing is powered by data centers running in a simultaneous, cooperative, and distributed manner.
It is more advantageous for individual users to store their data redundantly across multiple physical servers so as to reduce data integrity and availability threats. Thus, distributed protocols for storage correctness assurance will be of the utmost importance in achieving robust and secure cloud storage systems. However, this important area remains to be fully explored in the literature.
EXISTING SYSTEM:
In the EXISTING SYSTEM, there is little security provided by the Cloud server for data safety. Where security does exist, the third party auditor must be allowed to access the entire data packets for verification. There is no backup process.
DISADVANTAGES:
1. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system, because few operations are supported over encrypted data.
2. Data robustness is a major drawback in the existing cloud storage systems.
3. Storing data in a third party's cloud system causes serious concern over data confidentiality.
PROPOSED SYSTEM:
In the PROPOSED SYSTEM, the Cloud server will split the file into batches, which are then encrypted. The encrypted batches are kept on different Cloud servers, and their keys are distributed across different key servers. The encrypted batches are also kept on replica servers as a backup. The data owner converts the encrypted data into bytes and adds parity bits in order to prevent the TPA from accessing the original data. The Cloud server generates a token from the parity-added encrypted data and compares it with the signature provided to the TPA to verify data integrity. We also implement an Erasure Code for the backup of the data.
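The batch-splitting, parity-bit addition, and token-versus-signature comparison described above can be sketched roughly as follows. This is a minimal illustration only: the batch size, the single even-parity byte, and the SHA-256 token are hypothetical stand-ins for whatever sizes and primitives the real system uses, and the encryption step itself is elided.

```python
import hashlib

BATCH_SIZE = 16  # illustrative batch size in bytes (an assumption, not the system's value)

def split_into_batches(data: bytes, size: int = BATCH_SIZE) -> list[bytes]:
    """Split the (already encrypted) file into fixed-size batches."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def add_parity(batch: bytes) -> bytes:
    """Append one even-parity byte to a batch, as the data owner does."""
    parity = 0
    for b in batch:
        parity ^= b
    return batch + bytes([parity])

def token(parity_batches: list[bytes]) -> str:
    """Derive a token from the parity-added batches (SHA-256 stand-in)."""
    h = hashlib.sha256()
    for pb in parity_batches:
        h.update(pb)
    return h.hexdigest()

# Owner side: split, add parity, and record the signature handed to the TPA.
data = b"encrypted file contents stored in the cloud"
batches = [add_parity(b) for b in split_into_batches(data)]
signature = token(batches)

# Server side: regenerate the token; the TPA compares it with the signature.
assert token(batches) == signature  # integrity verified without exposing the data
```

The point of the comparison is that the TPA only ever handles the signature and the server-generated token, never the underlying plaintext.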
MODIFICATION:
The MODIFICATION that we propose is encryption of the data by the data owner before it reaches the Cloud server. This ensures two layers of security.
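The two layers of security can be sketched as two independent encryption passes: the owner encrypts before upload, and the server encrypts again on arrival. The XOR keystream below is a toy stand-in for a real cipher (e.g., AES), used only to show the layering; the key names are hypothetical.

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream cipher (stand-in for a real cipher such as AES)."""
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i % len(stream) == 0 and i > 0:
            stream = hashlib.sha256(stream).digest()  # extend the keystream
        out.append(b ^ stream[i % len(stream)])
    return bytes(out)

owner_key, server_key = b"owner-secret", b"server-secret"
plaintext = b"payroll records"

# Layer 1: the data owner encrypts before the file reaches the Cloud server.
uploaded = keystream_xor(plaintext, owner_key)
# Layer 2: the Cloud server encrypts again before storage.
stored = keystream_xor(uploaded, server_key)

# Decryption reverses the layers in the opposite order.
assert keystream_xor(keystream_xor(stored, server_key), owner_key) == plaintext
```

Because the owner's layer is applied first, even the Cloud server never sees the plaintext, which is the point of the modification.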
MODULES:
1. DATA OWNER
2. MAIN CLOUD SERVER
3. DATA SPLITTING AND ENCRYPTION
4. KEY SERVER
5. PARITY BIT ADDITION AND ERASURE CODE
6. TRUSTED PARTY AUDITOR
7. REPLICA SERVER

ALGORITHMS USED:
RSA ALGORITHM:
The RSA algorithm is used for both public key encryption and digital signatures. It is the most widely used public key encryption algorithm. The basis of the security of the RSA algorithm is that it is mathematically infeasible to factor sufficiently large integers. The RSA algorithm is believed to be secure if its keys have a length of at least 1024 bits.
The public key is (n, e) and the private key is (n, d). The values of p, q, and φ(n) are kept private. e is the public (encryption) exponent; d is the private (decryption) exponent.
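A textbook RSA round trip with tiny primes makes the roles of (n, e) and (n, d) concrete. The primes 61 and 53 are illustrative only; real deployments use keys of at least 1024 bits (preferably 2048) together with proper padding.

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                 # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)   # Euler's totient φ(n), kept private: 3120
e = 17                    # public (encryption) exponent, coprime to phi
d = pow(e, -1, phi)       # private (decryption) exponent: d*e ≡ 1 (mod phi)

m = 65                    # a message encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (n, e)
assert pow(c, d, n) == m  # decrypt with the private key (n, d)
```

Note that `pow(e, -1, phi)` (modular inverse via the three-argument `pow`) requires Python 3.8 or later.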
HARDWARE REQUIREMENTS:
Processor : Pentium IV
RAM       : 512 MB
HDD       : 80 GB
ARCHITECTURE DIAGRAM
AUDIO EXPLANATION OF THE PROJECT: https://www.box.com/s/vfqn0gveptcbkv50pzrd

REAL TIME APPLICATION OF THE PROJECT:
1. College Information Storage
2. Organization Information Storage
CONCLUSION:
In this paper, we investigate the problem of data security in cloud data storage, which is essentially a distributed storage system. To achieve assurances of cloud data integrity and availability and to enforce the quality of dependable cloud storage service for users, we propose an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append operations. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., whenever data corruption is detected during the storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of the misbehaving server(s).
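The erasure-correcting idea above can be illustrated with the simplest possible code: a single XOR parity block, as in RAID-5. This toy sketch tolerates the loss of any one data block; it only hints at the redundancy parity vectors of a real erasure code, and the block contents and server layout are assumptions for illustration.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three equal-size data blocks, each stored on a different server.
blocks = [b"AAAA", b"BBBB", b"CCCC"]

# A fourth server stores the XOR parity of all data blocks.
parity = blocks[0]
for blk in blocks[1:]:
    parity = xor_blocks(parity, blk)

# Suppose server 1 misbehaves and its block is lost.
lost = 1
survivors = [blk for i, blk in enumerate(blocks) if i != lost]

# XOR of the surviving blocks with the parity reconstructs the lost block.
recovered = parity
for blk in survivors:
    recovered = xor_blocks(recovered, blk)
assert recovered == blocks[lost]
```

Production erasure codes (e.g., Reed-Solomon) generalize this to tolerate multiple simultaneous server failures, but the recovery principle is the same.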