1. Reclaiming Space from Duplicate Files in a Serverless Distributed File System
Authors' Names: John R. Douceur, Atul Adya, William J. Bolosky, Dan Simon, and Marvin Theimer
Year of Publication: 2002
They present a mechanism to reclaim the space consumed by incidentally duplicated files and make it available for controlled file replication. Their mechanism includes 1) convergent encryption, which enables duplicate files to be coalesced into the space of a single file, even if the files are encrypted with different users' keys, and 2) SALAD, a Self-Arranging, Lossy, Associative Database for aggregating file content and location information in a decentralized, scalable, fault-tolerant manner.
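A minimal sketch of the convergent-encryption idea, assuming SHA-256 of the file content as the key and AES-CTR with a fixed nonce as the cipher (illustrative only, not the paper's exact construction):

```python
# Convergent encryption: the key is derived from the file content, so identical
# plaintexts always produce identical ciphertexts and can be deduplicated even
# when uploaded by different users.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(content: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(content).digest()   # content-derived key
    nonce = b"\x00" * 16                     # fixed nonce: each key encrypts one message
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = enc.update(content) + enc.finalize()
    return key, ciphertext                   # the key itself is protected per user

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# Two users with the same file obtain the same ciphertext, so the storage
# system needs to keep only one copy.
assert convergent_encrypt(b"same file")[1] == convergent_encrypt(b"same file")[1]
```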
Disadvantages:
It cannot overcome the following problems:
- Relocating the replicas of files with identical content to a common set of storage machines.
- Coalescing the identical files to reclaim storage space while maintaining the semantics of separate files.

2. DupLESS: Server-Aided Encryption for Deduplicated Storage
Authors' Names: Mihir Bellare, Sriram Keelveedhi, and Thomas Ristenpart
Year of Publication: 2013
They propose an architecture that provides secure deduplicated storage resisting brute-force attacks, and realize it in a system called DupLESS. In DupLESS, clients encrypt under message-based keys obtained from a key server via an oblivious PRF protocol. This enables clients to store encrypted data with an existing service, have the service perform deduplication on their behalf, and yet achieve strong confidentiality guarantees.
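A simplified sketch of server-aided key derivation in the spirit of DupLESS. The real system runs an oblivious PRF (blinded RSA), so the key server never learns the message hash; the blinding step is omitted here, and KS_SECRET and the helper names are hypothetical:

```python
# Server-aided key generation (simplified): the per-file key depends on both
# the file content and a secret held only by the key server, so an attacker
# without the key server cannot run an offline brute-force over candidate files.
import hashlib
import hmac

KS_SECRET = b"key-server-secret"   # held only by the key server (assumed)

def key_server_eval(value: bytes) -> bytes:
    # In DupLESS this evaluation happens on a *blinded* value.
    return hmac.new(KS_SECRET, value, hashlib.sha256).digest()

def client_derive_key(message: bytes) -> bytes:
    h = hashlib.sha256(message).digest()
    # A real client would blind h before sending it and unblind the response.
    return key_server_eval(h)

# Identical files still yield identical keys, so deduplication keeps working.
assert client_derive_key(b"report.docx contents") == client_derive_key(b"report.docx contents")
```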
Disadvantages:
- Several works have looked at the general problem of enterprise network security, but none provide solutions that meet all requirements of the paper's threat model.
- If the key server is compromised, the attacker must still attempt an offline brute-force attack, matching the guarantees of traditional MLE schemes.

3. Message-Locked Encryption and Secure Deduplication
Authors' Names: Mihir Bellare, Sriram Keelveedhi, and Thomas Ristenpart
Year of Publication: 2013

Message-Locked Encryption (MLE) provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. The authors provide definitions both for privacy and for a form of integrity that they call tag consistency. Based on this foundation, they make both practical and theoretical contributions. On the practical side, they provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard-model solutions, and they make connections with deterministic encryption, hash functions secure on correlated inputs, and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources.
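The following sketch, building on the convergent-encryption example under entry 1, illustrates how a tag derived from the ciphertext can serve as the deduplication index and be checked at decryption time; it is a simplification for illustration, not a verbatim rendering of one of the paper's analyzed schemes:

```python
# MLE-style encrypt/decrypt with a tag: the tag indexes duplicates, and the
# consistency check lets an honest client detect a stored ciphertext that was
# swapped for one that does not match its tag.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def mle_encrypt(message: bytes) -> tuple[bytes, bytes, bytes]:
    key = hashlib.sha256(message).digest()        # message-derived key
    enc = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
    ciphertext = enc.update(message) + enc.finalize()
    tag = hashlib.sha256(ciphertext).digest()     # deduplication tag
    return key, ciphertext, tag

def mle_decrypt(key: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    if hashlib.sha256(ciphertext).digest() != tag:
        raise ValueError("tag inconsistency: ciphertext does not match its tag")
    dec = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).decryptor()
    return dec.update(ciphertext) + dec.finalize()
```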
Disadvantages:
- In attempting to build MLE from related primitives, several problems arise; in particular, it is not clear how an MLE scheme might decrypt.
- CI-H functions are not required to be efficiently invertible. D-PKE does provide decryption, but it requires the secret key, and it is not clear how this can yield message-based decryption.

4. Secure Deduplication and Data Security with Efficient and Reliable Convergent Key
Authors' Names: Nikhil O. Agrawal and Prof. S. S. Kulkarni
Year of Publication: 2015
The idea in this paper is to eliminate duplicate copies of stored data and to limit the damage of stolen data by decreasing the value of the stolen information to the attacker. The paper makes the first attempt to formally address the problem of achieving efficient and reliable key management in secure deduplication. The authors first introduce a baseline approach in which each user holds an independent master key for encrypting the convergent keys and outsourcing them. However, such a baseline key management scheme generates an enormous number of keys as the number of users grows and requires users to dedicate effort to protecting their master keys. To this end, they propose Dekey, together with user behavior profiling and decoy technology.
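A sketch of the baseline key-management approach described above, assuming AES-GCM for wrapping the per-file convergent key under the user's master key (Dekey's secret-sharing of key shares across servers is not shown):

```python
# Baseline convergent-key management: each user wraps every convergent key
# under an independent master key before outsourcing it, which is simple but
# forces users to protect the master key and scales poorly with many users.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_convergent_key(master_key: bytes, convergent_key: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, convergent_key, None)

def unwrap_convergent_key(master_key: bytes, wrapped: bytes) -> bytes:
    nonce, ct = wrapped[:12], wrapped[12:]
    return AESGCM(master_key).decrypt(nonce, ct, None)

master_key = AESGCM.generate_key(bit_length=256)   # one per user, must be protected
convergent_key = os.urandom(32)                    # normally derived from the file content
assert unwrap_convergent_key(master_key,
                             wrap_convergent_key(master_key, convergent_key)) == convergent_key
```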
Disadvantages:
- Masquerade attacks (such as identity theft and fraud) are a serious computer security problem.
- The authors conjecture that individual users have unique computer search behaviour which can be profiled and used to detect masquerade attacks.

5. Fast and Secure Laptop Backups with Encrypted De-duplication
Authors' Names: Paul Anderson and Le Zhang
Year of Publication: 2010
This paper describes an algorithm which takes advantage of the data that is common between users to increase the speed of backups and reduce the storage requirements. The algorithm supports client-end per-user encryption, which is necessary for confidential personal data. It also supports a unique feature which allows immediate detection of common subtrees, avoiding the need to query the backup system for every file. The authors describe a prototype implementation of this algorithm for Apple OS X and present an analysis of its potential effectiveness, using real data obtained from a set of typical users. Uploading data directly to cloud storage can be very slow and requires a reliable network connection; backing up directly to a cloud can also be very costly.
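A sketch of the common-subtree detection idea, assuming a Merkle-style hash computed bottom-up over the directory tree; tree_hash is a hypothetical helper, not the paper's on-disk format:

```python
# Each directory's hash is computed from its children's hashes, so an
# unchanged subtree hashes to a value the backup server has already seen and
# the whole subtree can be skipped without querying the server per file.
import hashlib
import os

def tree_hash(path: str) -> bytes:
    if os.path.isfile(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()
    h = hashlib.sha256()
    for name in sorted(os.listdir(path)):        # stable ordering matters
        h.update(name.encode())
        h.update(tree_hash(os.path.join(path, name)))
    return h.digest()

# Note the flip side (listed as a disadvantage below): changing any file
# changes every ancestor hash up to the root.
```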
Disadvantages:
- A change to any node implies a change to all of the ancestor nodes up to the root.
- It is extremely difficult to estimate the impact of this in a production environment, but preliminary testing seems to indicate that this is not a significant problem.

6. A Secure Data Deduplication Scheme for Cloud Storage
Authors' Names: Jan Stanek, Alessandro Sorniotti, Elli Androulaki, and Lukas Kencl
Year of Publication: 2014
This paper presents a novel idea that differentiates data according to their popularity. Based on this idea, the authors design an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. This way, data deduplication can be effective for popular data, whilst semantically secure encryption protects unpopular content. They show that the scheme is secure under the Symmetric External Decisional Diffie-Hellman assumption in the random oracle model.
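A toy sketch of the popularity-based policy, assuming a hypothetical threshold of three uploaders; the actual scheme switches between the two modes with a threshold cryptosystem rather than a plain counter:

```python
# Popular files (shared by at least POPULARITY_THRESHOLD users) get a
# deterministic, deduplicatable convergent key; unpopular files get a fresh
# random key, which is semantically secure but cannot be deduplicated.
import hashlib
import os

POPULARITY_THRESHOLD = 3   # hypothetical value of the threshold t

def choose_key(content: bytes, uploader_count: int) -> tuple[str, bytes]:
    if uploader_count >= POPULARITY_THRESHOLD:
        return "convergent", hashlib.sha256(content).digest()
    return "random", os.urandom(32)
```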

Disadvantages:
- It does not address the problem of low min-entropy files.
- The design of storage-efficiency functions in general, and of deduplication functions in particular, that do not lose their effectiveness in the presence of end-to-end security is therefore still an open problem.

7. A Secure Cloud Backup System with Assured Deletion and Version Control
Authors' Names: Arthur Rahumed, Henry C. H. Chen, Yang Tang, and Patrick P. C. Lee
Year of Publication: 2011
The authors present FadeVersion, a secure cloud backup system that serves as a security layer on top of today's cloud storage services. FadeVersion follows the standard version-controlled backup design, which eliminates the storage of redundant data across different versions of backups. On top of this, FadeVersion applies cryptographic protection to data backups. Specifically, it enables fine-grained assured deletion: cloud clients can assuredly delete particular backup versions or files on the cloud and make them permanently inaccessible to anyone, while other versions that share the common data of the deleted versions or files remain unaffected.
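A sketch of the layered key management that makes fine-grained assured deletion possible, assuming AES-GCM key wrapping; the structure and names are hypothetical rather than FadeVersion's actual metadata format:

```python
# Shared data chunks are encrypted once under per-chunk data keys; each backup
# version only wraps those data keys under its own control key. Discarding a
# version's control key makes that version unrecoverable, while chunks shared
# with other versions stay readable through their own control keys.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(key: bytes, secret: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, secret, None)

chunk_key = os.urandom(32)          # protects a chunk shared by versions v1 and v2
control_key_v1 = os.urandom(32)
control_key_v2 = os.urandom(32)

version_metadata = {
    "v1": wrap(control_key_v1, chunk_key),
    "v2": wrap(control_key_v2, chunk_key),
}

# Assured deletion of v1: destroy control_key_v1. Its wrapped entry becomes
# useless, but v2 can still unwrap chunk_key with control_key_v2.
del control_key_v1
```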
Disadvantages:
- Deleting an old version may make future versions unrecoverable.
- An attacker may want to recover specific files that have been deleted. This type of attack may occur if there is a security breach in the cloud data center.

8. Tahoe: The Least-Authority Filesystem
Authors' Names: Zooko Wilcox-O'Hearn and Brian Warner
Year of Publication: 2008
Tahoe is a system for secure, distributed storage. It uses capabilities for access control,
cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has
been deployed in a commercial backup service and is currently operational. The
implementation is Open Source.

Disadvantages:
- If the integrity check fails, the client needs to know which erasure-code share or shares were wrong, so that it can reconstruct the file from other shares.
- If the integrity check were applied only to the ciphertext, the client would not know which share or shares to replace.
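A sketch of per-share integrity checking that addresses the point above: keeping a hash for every erasure-code share lets the client pinpoint and replace exactly the corrupt shares; the layout is hypothetical, not Tahoe's actual share format:

```python
# Store a hash per erasure-code share alongside the ciphertext hash. When the
# ciphertext integrity check fails, compare each fetched share against its
# expected hash to identify which shares must be re-fetched from other servers.
import hashlib

def share_hashes(shares: list[bytes]) -> list[bytes]:
    return [hashlib.sha256(s).digest() for s in shares]

def find_bad_shares(shares: list[bytes], expected: list[bytes]) -> list[int]:
    return [i for i, s in enumerate(shares)
            if hashlib.sha256(s).digest() != expected[i]]
```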