
International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 6, June 2013
ISSN: 2231-2803    http://www.ijcttjournal.org



Ensuring Cloud Security Using Hybrid Encryption
Scheme and Disaster Recovery Mechanism
R. Sinduja¹, G. Sumathi²
¹PG Scholar, ²Professor, Information Technology
Sri Venketeswara College of Engineering, Chennai, India


Abstract: Cloud computing is an emerging computing paradigm that provides users with on-demand, scalable services by allowing them to store their data on remote servers. As this new computing paradigm requires users to delegate their valuable data to cloud providers, it raises security and privacy concerns over the outsourced data. In particular, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data may introduce latent security and confidentiality issues. Several schemes employing hierarchical attribute-based encryption (HASBE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from rigidity in implementing complex access control policies. To keep sensitive user data confidential against untrusted CSPs and disasters, a natural way is to apply cryptographic approaches, enhancing the security of the cloud database with a hybrid encryption scheme and a disaster recovery mechanism. The proposed scheme not only achieves scalability due to its hierarchical structure, but also provides flexible multilevel and hybrid security. It uses the RSA, DES, and AES algorithms as encryption tools. In addition, enhanced HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes, and it recovers the data in case of natural or man-made disasters. We implement our scheme and show, with comprehensive experiments, that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing.

Keywords: Cloud Computing, Data Security, Hybrid Encryption Scheme, Hierarchical Attribute-Based Encryption, Access Control.
I. INTRODUCTION
CLOUD computing is the delivery of computing services over the Internet. It is not like other computing models such as utility computing, grid computing, or autonomic computing; in fact, it is a very autonomous platform in terms of computing. Google Apps is a prominent example of cloud computing, where any application can be accessed using a browser and can be deployed on thousands of computers through the Internet. Cloud services allow individuals and businesses to use software and hardware that are managed by third parties at remote locations. Examples of cloud services include online data storage, social networking sites, webmail, and online business applications. The cloud computing model promotes availability and is composed of essential characteristics, service models, deployment models, computer processing power, and specialized corporate and user applications.
The characteristics of the cloud computing paradigm consist of on-demand services, broad network access, resource pooling, measured service, and rapid elasticity. On-demand self-service means that customers can request their requirements from the cloud service providers and manage their own computing resources. Broad network access means that services are offered over private networks or the Internet and can be reached from devices such as mobile phones and laptops. In the cloud, the shared resources of the customers are drawn from a pool of computing resources, usually placed in remote data centres. Services can be scaled according to the needs of the users and billed according to usage.
The cloud computing service models are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The Software as a Service model, sometimes referred to as "on-demand software", hosts software and the associated data centrally in the cloud; SaaS is usually accessed by users through a thin client via a web browser. In PaaS, customers install or develop their own software and applications, while the platform provides the operating system, hardware, and network. In the IaaS model, customers deploy their own operating systems, software, and applications on infrastructure supplied by the provider.
Cloud services are usually made available via a community
cloud, private cloud, hybrid cloud or public cloud.
Generally speaking, services provided by a public cloud are
accessible over the Internet and are owned and operated by
a cloud service provider. Some examples include services
aimed at the general public, such as online data storage
services, social networking sites or e-mail. However,
services for enterprises can also be accessible in a public
cloud. In a private cloud, the cloud infrastructure is operated solely for a specific organization and is managed by the organization or a third party. In a community cloud, the service provided by the cloud providers is shared by several organizations and made accessible only to those groups. The
cloud infrastructure may be owned and operated by the organizations or by a cloud service provider. A hybrid cloud is a combination of two or more of these deployment models, such as public and community clouds, or of different methods of resource pooling.
However, a CP-ABE system may not work well when
enterprise users outsource their data for sharing on cloud
servers, due to the following reasons: First, one of the
biggest merits of cloud computing is that users can access
data stored in the cloud anytime and anywhere using any
device, such as thin clients with limited bandwidth, CPU,
and memory capabilities. Therefore, the encryption system
should provide high performance. Second, in the case of a
large-scale industry, a delegation mechanism in the
generation of keys inside an enterprise is needed. Although
some CP-ABE schemes support delegation between users,
which enables a user to generate attribute secret keys
containing a subset of his own attribute secret keys for other
users, we hope to achieve a full delegation, that is, a
delegation mechanism between attribute authorities (AAs),
which independently make decisions on the structure and
semantics of their attributes. Third, in case of a large-scale
industry with a high turnover rate, a scalable revocation
mechanism is a must. The existing CP-ABE schemes
usually require users to depend heavily on AAs and to maintain a large amount of secret key storage, which lacks flexibility and scalability [12]. In many applications, the
high cost of encrypting long messages in a public-key
cryptosystem can be prohibitive. A hybrid cryptosystem is
one which combines the convenience of a public-key
cryptosystem with the efficiency of a symmetric-key
cryptosystem.
Motivation: Our main design goal is to help the enterprise
users to efficiently share confidential data on cloud servers.
Specifically, we want to make our scheme more applicable
in cloud computing by simultaneously achieving fine-
grained access control, high performance, confidentiality,
integrity and security.
Our Contribution: In this paper, we first propose a hierarchical hybrid encryption scheme (HHES) model, combining a hierarchical attribute-based system with a multiple-encryption system (AES, DES, and RSA), to provide enhanced security for the data stored in the cloud and efficient user revocation. Based on the HHES model, we construct a hybrid encryption scheme that makes a performance-expressivity trade-off to achieve high performance. Finally, we propose a scalable revocation scheme that delegates most of the computing tasks in revocation to the CSP, in order to handle a dynamic set of users efficiently.
The contribution of the paper is multifold. First, we show
how Enhanced HASBE extends the ASBE algorithm with a
hierarchical structure to improve scalability and flexibility while at the same time inheriting the fine-grained access control feature of ASBE. Second, we demonstrate how to implement a full-fledged access control scheme for cloud computing based on HHES. The scheme provides full support for hierarchical user grant, file creation, and user revocation in cloud computing. Third, we formally prove the security of the proposed scheme based on the security of the underlying hybrid encryption scheme (HASBE) and analyze its performance in terms of computational overhead. Lastly, we
implement enhanced HASBE and conduct comprehensive
experiments for performance evaluation, and our
experiments demonstrate that Enhanced HASBE has
satisfactory performance. The rest of the paper is organized
as follows. Section II provides essential characteristics of
cloud computing and Section III provides an overview on
related work. Then we present the existing scheme in Section IV. In Section V, we describe the proposed scheme, prove the security of Enhanced HASBE, analyze its security by comparison with Yu et al.'s scheme, and describe the construction of Enhanced HASBE, showing how it is used for access control of outsourced data in cloud computing.
In Section VI, operational steps for the algorithm are
defined. In Section VII, disaster recovery mechanisms in
cloud are analyzed. Section VIII deals with analysis
schemes in cloud. Lastly, we conclude the paper in Section
IX.
II. ESSENTIAL CHARACTERISTICS OF CLOUD COMPUTING

On-demand self-service: Computer services such as email, applications, network, or server services can be provided without requiring human interaction with each service provider. Cloud service providers offering on-demand self-service include Amazon Web Services (AWS), Microsoft, Google, IBM, and Salesforce.com. The New York Times and NASDAQ are examples of companies using AWS (NIST) [9].
Broad network access: Cloud capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms such as mobile phones, laptops, and PDAs.
Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The resources include, among others, storage, processing, memory, network bandwidth, virtual machines, and email services. The pooling of these resources builds economies of scale (Gartner).
Rapid elasticity: Cloud services can be rapidly and
elastically provisioned, in some cases automatically, to
quickly scale out and rapidly released to quickly scale in. To
the consumer, the capabilities available for provisioning
often appear to be unlimited and can be purchased in any
quantity at any time.
Measured service: Cloud computing resource usage can be
measured, controlled, and reported providing transparency
for both the provider and consumer of the utilised service.
Cloud computing services use a metering capability that enables control and optimisation of resource use. This implies
that, just like air time, electricity, or municipal water, IT services are charged per usage metrics on a pay-per-use basis.
Multi-tenancy: the sixth characteristic of cloud computing, advocated by the Cloud Security Alliance. It refers to the need for policy-driven enforcement, segmentation, isolation, governance, service levels, and chargeback/billing models for different consumer constituencies. Consumers might utilize a public cloud provider's service offerings or might actually be from the same organization, such as different business units rather than distinct organizational entities, but would still share infrastructure [9].

III. RELATED WORK

Public Key Cryptography (PKC) was proposed by Rivest et al. principally to overcome the limitations of symmetric-key-cryptography-based cryptosystems in ensuring secure group communications [4]. However, the
PKC based cryptosystems involve costly and complex
public key authentication framework known as the public
key infrastructure. In 1984, Shamir [5] proposed Identity-Based Encryption (IBE) to reduce the complexity associated with pure PKC-based systems. IBE uses a user's identifier, such as an e-mail address or an IP address, as his public key instead of using digital certificates for public key authentication. However, in PKC-based as well as IBE-based cryptosystems, if one needs to multicast a message, it has to be encrypted under different public keys, which unnecessarily increases the associated computational overhead. In [6], Sahai et al. proposed a fuzzy identity-based encryption approach aimed at overcoming this limitation. In fuzzy identity-based encryption, only a recipient whose attributes match the identity used for encryption, according to a set-overlap distance metric, can decrypt the message.
Sahai's work was further extended in the form of Key-Policy Attribute-Based Encryption (KP-ABE), in which attributes are attached to the ciphertext and a monotonic formula is attached to the user's secret key [10]. The KP-ABE was
complemented with the Ciphertext-Policy Attribute Based
Encryption (CP-ABE) in [5] that aimed to give more power
to the sender as compared to KP-ABE. CP-ABE uses the
approach of threshold secret sharing [8]. However, all these
approaches support only the static attributes. It is
emphasized that in a typical CP-ABE implementation, attributes play an important role because they essentially determine a user's secret key. In the real world, the attributes of any entity often undergo periodic updates.
However, as per our observations, these approaches either
entail significant overhead or violate the mandatory
requirements for the support of the dynamic attributes or
lack the flexibility required in periodic updates to the
attributes [12].
Motivated by these limitations, we propose in this paper an improved approach for supporting dynamic attributes in CP-ABE that overcomes the limitations mentioned above. To the best of our knowledge, this is a simple yet unique approach for handling dynamic attributes.
Our contribution: The approaches proposed so far have limitations that make them either untrustworthy or not applicable in real-life settings. To deal with this problem, we propose a new approach in which a Certificate Authority (CA) extracts the old values from the secret key, changes the value of the required attribute, replaces the old value with the new one, and issues the updated secret key to the user.

IV. EXISTING SYSTEM

With the emergence of sharing confidential
corporate data on cloud servers, it is imperative to adopt an
efficient encryption system with a fine-grained access
control to encrypt outsourced data. Ciphertext-policy
attribute-based encryption (CP-ABE), as one of the most
promising encryption systems in this field, allows the
encryption of data by specifying an access control policy
over attributes, so that only users with a set of attributes
satisfying this policy can decrypt the corresponding data.
However, a CP-ABE system may not work well when
enterprise users outsource their data for sharing on cloud
servers, due to the following reasons: First, one of the
biggest merits of cloud computing is that users can access
data stored in the cloud anytime and anywhere using any
device, such as thin clients with limited bandwidth, CPU,
and memory capabilities.



Figure 1: A three-level Enhanced HASBE model

Second, in the case of a large-scale industry, a delegation mechanism in the generation of keys inside an enterprise is needed. Although some CP-ABE schemes support delegation
between users, which enables a user to generate attribute
secret keys containing a subset of his own attribute secret
keys for other users, we hope to achieve a full delegation,
that is, a delegation mechanism between attribute authorities
(AAs), which independently make decisions on the structure
and semantics of their attributes. Third, in case of a large-
scale industry with a high turnover rate, a scalable
revocation mechanism is a must. The existing CP-ABE schemes usually require users to depend heavily on AAs and to maintain a large amount of secret key storage, which lacks flexibility and scalability [12].
V. SYSTEM MODEL AND PROPOSED SCHEME
The cloud service provider manages a cloud to provide data storage services. Data owners encrypt their data files and store them in the cloud for sharing with data consumers. To access the shared data files, data consumers download encrypted data files of their interest from the cloud and then decrypt them. Data owners, data consumers, domain authorities, and the trusted authority are organized in a hierarchical manner, as shown in Fig. 1. The trusted authority is the root authority and is responsible for managing top-level domain authorities. Each top-level domain authority
corresponds to a top-level organization, such as a federated
enterprise, while each lower-level domain authority
corresponds to a lower-level organization, such as an
affiliated company in a federated enterprise. Data
owners/consumers may correspond to employees in an
organization. Each domain authority is responsible for
managing the domain authorities at the next level or the data
owners/consumers in its domain. In our system, neither data
owners nor data consumers will be always online. They
come online only when necessary, while the cloud service
provider, the trusted authority, and domain authorities are
always online. The cloud is assumed to have abundant
storage capacity and computation power. In addition, we
assume that data consumers can access data files for reading
only. Still, some security concerns remain to be addressed; in particular, cloud users have no choice but to rely on the service provider. Among the possible solutions, a user could keep a local copy of the data, but this is not feasible when the whole point is to benefit from the services of the cloud service provider (CSP) [1].
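To picture the hierarchy of Fig. 1 described above, the following sketch is a hypothetical Python data-structure illustration only (trusted authority at the root, domain authorities below, data owners/consumers at the leaves); the class and the names in it are placeholders and are not part of the cryptographic construction.

```python
# Hypothetical illustration of the Fig. 1 hierarchy; not the cryptographic scheme itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    role: str                      # "trusted_authority", "domain_authority", or "user"
    children: List["Node"] = field(default_factory=list)

# The trusted authority manages top-level domain authorities; each domain authority
# manages lower-level authorities or the data owners/consumers in its domain.
root = Node("Trusted Authority", "trusted_authority", [
    Node("Federated Enterprise A", "domain_authority", [
        Node("Affiliated Company A1", "domain_authority", [
            Node("employee-1 (data owner)", "user"),
            Node("employee-2 (data consumer)", "user"),
        ]),
    ]),
])
```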



Fig.2. System model

As depicted in Fig. 2, the cloud computing system under consideration consists of five types of parties: a cloud service provider, data owners, data consumers, a number of domain authorities, and a trusted authority [2]. After a failure, clients must be redirected to the disaster recovery cloud database.
In cryptography, public-key cryptosystems are
convenient in that they do not require the sender and
receiver to share a common secret in order to communicate
securely (among other useful properties). However, they
often rely on complicated mathematical computations and
are thus generally much more inefficient than
comparable symmetric-key cryptosystems. In many
applications, the high cost of encrypting long messages in a
public-key cryptosystem can be prohibitive. A hybrid
cryptosystem is one which combines the convenience of a
public-key cryptosystem with the efficiency of a symmetric-
key cryptosystem [3].
A hybrid cryptosystem can be constructed from any two separate cryptosystems:
- a key encapsulation scheme, which is a public-key cryptosystem, and
- a data encapsulation scheme, which is a symmetric-key cryptosystem.
The hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as those of the key encapsulation scheme. Another factor of concern is that the
cloud is still under development and there are no established standards for data storage and application communication, so one cannot easily move data by changing service providers. Some organizations are working in this direction and may soon offer a solution, but until then we must have some mechanism to protect the critical and private data stored in the cloud, such as credit card information and passwords. Keeping this in view, an application must be developed that implements a multi-level hybrid encryption mechanism using strong cryptographic algorithms, viz. RSA, AES, and DES. Cloud database information is also usually written redundantly to mass storage devices to enable recovery of corrupted data. The redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and often to correct these errors without retransmission.
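To make the key encapsulation / data encapsulation construction described above concrete, the sketch below wraps a fresh AES session key under an RSA-OAEP public key and encrypts the bulk data with AES. It assumes the PyCryptodome library and is an illustrative sketch of a generic hybrid scheme, not the exact construction of the proposed system.

```python
# Hybrid encryption sketch: RSA-OAEP key encapsulation + AES data encapsulation.
# Assumes PyCryptodome (pip install pycryptodome); illustrative only.
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes

recipient_key = RSA.generate(2048)               # key pair of the data consumer
data = b"outsourced record stored in the cloud"

# Key encapsulation: encrypt a fresh AES session key with the RSA public key.
session_key = get_random_bytes(16)
wrapped_key = PKCS1_OAEP.new(recipient_key.publickey()).encrypt(session_key)

# Data encapsulation: encrypt the bulk data with the symmetric session key.
aes_enc = AES.new(session_key, AES.MODE_EAX)
ciphertext, tag = aes_enc.encrypt_and_digest(data)

# Decryption reverses both steps with the RSA private key.
session_key2 = PKCS1_OAEP.new(recipient_key).decrypt(wrapped_key)
aes_dec = AES.new(session_key2, AES.MODE_EAX, nonce=aes_enc.nonce)
assert aes_dec.decrypt_and_verify(ciphertext, tag) == data
```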

A. RSA

RSA is a commonly adopted public-key cryptography algorithm developed by Rivest, Shamir, and Adleman. It is used in hundreds of software products for key exchange, digital signatures, or encryption of small blocks of data. RSA uses a variable-size encryption block and a variable-size key. The key pair is derived from a very large number, n, that is the product of two prime numbers chosen according to special rules. Since its introduction in 1977, RSA has been widely used for establishing secure communication channels and for authenticating the identity of a service provider over an insecure communication medium. In the authentication scheme, the server implements public-key authentication with the client by signing a unique message from the client with its private key, thus creating what is called a
digital signature [4]. The signature is then returned to the client, which verifies it using the server's known public key.
The most commonly used asymmetric algorithm is Rivest-Shamir-Adleman (RSA). It was introduced by its three inventors, Ronald Rivest, Adi Shamir, and Leonard Adleman, in 1977. It is mostly used in key distribution and digital signature processes. RSA is based on a one-way function from number theory, namely integer factorisation. A one-way function is a function that is easy to compute in one direction but hard to invert. Here, easy and hard should be understood in terms of computational complexity, especially with regard to polynomial-time problems. For instance, it is easy to compute the function f(x) = y, but it is hard or infeasible to compute the inverse f^-1(y) = x. The RSA algorithm consists of three steps, namely key generation, encryption, and decryption.
Key generation is done by first choosing two random prime numbers, p and q, and computing the modulus n = pq. Thereafter the function φ(n) is computed: φ(n) = (p - 1)(q - 1). Next, an integer e is chosen such that 1 < e < φ(n) and e is co-prime to φ(n), and an integer d is computed such that de mod φ(n) = 1. As a result, (n, e) is the public key and (n, d) is the private key.
Encrypting a message m is done by computing c = m^e mod n, and decrypting the message is done by computing m = c^d mod n.
The private key and the public key can be used interchangeably: a user can decrypt with the private key what has been encrypted with the corresponding public key, and conversely can use the private key to encrypt a message that can only be decrypted with the corresponding public key.
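As a toy illustration of the key generation, encryption, and decryption equations above, the following sketch uses deliberately tiny primes and Python's built-in pow() (the modular-inverse form requires Python 3.8 or later); real RSA requires large primes and padding such as OAEP.

```python
# Toy textbook RSA, mirroring the equations above (illustration only).
# Real deployments need 2048-bit+ primes and padding (e.g. OAEP).
p, q = 61, 53                      # two (tiny) prime numbers
n = p * q                          # modulus: n = pq
phi = (p - 1) * (q - 1)            # phi(n) = (p - 1)(q - 1)
e = 17                             # 1 < e < phi(n), co-prime to phi(n)
d = pow(e, -1, phi)                # d such that d*e mod phi(n) = 1 (Python 3.8+)
public_key, private_key = (n, e), (n, d)

m = 42                             # message, represented as an integer smaller than n
c = pow(m, e, n)                   # encryption: c = m^e mod n
assert pow(c, d, n) == m           # decryption: m = c^d mod n
print("ciphertext:", c)
```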
B. DES

Data Encryption Standard is a widely-used method of
data encryption using a private (secret) key that was judged
so difficult to break by the U.S. government that it was
restricted for exportation to other countries. There are
72,000,000,000,000,000 (72 quadrillion) or more possible
encryption keys that can be used. For each given message,
the key is chosen at random from among this enormous
number of keys. Like other private key cryptographic
methods, both the sender and the receiver must know and
use the same private key. The DES has a 64-bit block size
and uses a 56-bit key during execution (8 parity bits are
stripped off from the full 64-bit key). DES is a symmetric
cryptosystem, specifically a 16-round Feistel cipher. When
used for communication, both sender and receiver must
know the same secret key, which can be used to encrypt and
decrypt the message, or to generate and verify a Message
Authentication Code (MAC). The DES can also be used for
single-user encryption, such as to store files on a hard disk in
encrypted form.
DES applies substitutions and transpositions on top of each other in 16 rounds in a very complex way. The key length for this algorithm is fixed at 56 bits, which came to be seen as too small as computing resources became more and more powerful; the main reason the algorithm is breakable is its short key size.
It is worth mentioning that 3DES, also called Triple DES, is an approach to make DES more difficult to break. 3DES applies DES three times to each block of data, which effectively increases the key length. It uses a key bundle containing three DES keys, K1, K2, and K3, of 56 bits each. The encryption algorithm works as follows: ciphertext = E_K3(D_K2(E_K1(plaintext))), i.e. encrypt with K1, then decrypt with K2, and finally encrypt with K3. The decryption process is the reverse: plaintext = D_K1(E_K2(D_K3(ciphertext))), i.e. decrypt with K3, then encrypt with K2, and finally decrypt with K1. In this way the algorithm gains strength, but the drawback of the approach is a decrease in performance.
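To make the encrypt-decrypt-encrypt (EDE) composition above concrete, the sketch below builds it from three single-DES operations; it assumes the PyCryptodome library and uses ECB mode on a single 8-byte block purely for illustration.

```python
# EDE composition of 3DES from single DES, per ciphertext = E_K3(D_K2(E_K1(plaintext))).
# Assumes PyCryptodome (pip install pycryptodome). ECB on one block, for illustration only.
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

k1, k2, k3 = (get_random_bytes(8) for _ in range(3))   # three 64-bit keys (56 effective bits each)
block = b"8bytes!!"                                     # exactly one 64-bit block

def E(k, b): return DES.new(k, DES.MODE_ECB).encrypt(b)
def D(k, b): return DES.new(k, DES.MODE_ECB).decrypt(b)

ciphertext = E(k3, D(k2, E(k1, block)))                 # encrypt with K1, decrypt with K2, encrypt with K3
recovered = D(k1, E(k2, D(k3, ciphertext)))             # reverse: decrypt with K3, encrypt with K2, decrypt with K1
assert recovered == block
```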
C. AES
After the weaknesses of DES were accepted, in January
1997 NIST (National Institute of Standards and
Technology) announced that they wanted to replace DES,
and the new approach would be known as AES (Advanced
Encryption Standard). It led to a competition among the
open cryptographic community, and during nine months,
NIST received fifteen different algorithms from several
countries.
AES is a block cipher with a block size of 128 bits. The key length for AES is not fixed to a single value; it can be 128, 192, or 256 bits (the underlying Rijndael design allows further lengths). Encryption techniques such as substitutions and transpositions are the main building blocks of AES. Like DES, AES uses repeated cycles, called rounds, of which there are 10, 12, or 14 depending on the key size. In order to achieve thorough confusion and diffusion, every round contains four steps: byte substitution, shifting of rows, mixing of columns, and an exclusive OR with the round key. The Advanced Encryption Standard (AES) is a symmetric-key encryption standard, and each of its three variants has a 128-bit block size, with key sizes of 128, 192, and 256 bits, respectively.
The AES cipher is specified as a number of repetitions of transformation rounds that convert the input plaintext into the final ciphertext. Each round consists of several processing steps, including one that depends on the encryption key. A set of reverse rounds is applied to transform the ciphertext back into the original plaintext using the same encryption key.
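A brief sketch of symmetric AES encryption and decryption with the same 256-bit key follows; it assumes the PyCryptodome library and uses the authenticated EAX mode, which is one reasonable choice among several.

```python
# Symmetric AES: the same key encrypts and decrypts. Assumes PyCryptodome.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                       # 256-bit AES key (128 or 192 bits also valid)
plaintext = b"confidential cloud record"

enc = AES.new(key, AES.MODE_EAX)                 # EAX: authenticated encryption mode
ciphertext, tag = enc.encrypt_and_digest(plaintext)

dec = AES.new(key, AES.MODE_EAX, nonce=enc.nonce)
recovered = dec.decrypt_and_verify(ciphertext, tag)  # raises ValueError if tampered with
assert recovered == plaintext
```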

VI. ALGORITHM FOR OPERATIONAL STEPS

There are mainly three active components in the system:

(i) The Data Owner (DO), who stores data in the cloud and can allow other cloud users to access it.
(ii) Data Requesters (DR), who use the data based on credentials received from the data owner.
(iii) The Cloud Server (CS), a central component that provides storage as a service and works as a bridge between the Data Owner and the Data Requester.
We wish to achieve the following goals. First, the cloud server should neither learn any information from cloud users' data nor misuse it. Second, we wish to offer cloud users an option to select an encryption scheme for their data. Third, we aim to achieve a lightweight integrity verification process for detecting unauthorized changes to the original data, without requiring a local copy of the data.
During Phase 1, the DO/DR generates a key pair using a public-key encryption scheme in a single step; this key pair is used to encrypt data during transmission. The aim of Phase 2 is to obtain an ID from the cloud server and to send the registration details to the cloud server.
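Since the paper does not spell out message formats, the following sketch only mirrors the two phases described above; the registration fields and the register() helper are hypothetical placeholders, and PyCryptodome is assumed for the key pair.

```python
# Hedged sketch of Phase 1 (key-pair generation) and Phase 2 (registration).
# Field names and the cloud-server endpoint are hypothetical; PyCryptodome assumed.
from Crypto.PublicKey import RSA

# Phase 1: the DO/DR generates a public/private key pair in a single step.
key_pair = RSA.generate(2048)
public_pem = key_pair.publickey().export_key()   # shared with the cloud server
private_pem = key_pair.export_key()              # kept locally by the DO/DR

# Phase 2: send registration details and receive an ID from the cloud server.
registration = {
    "role": "DO",                                # "DO" or "DR" (hypothetical field)
    "name": "example-owner",                     # hypothetical field
    "public_key": public_pem.decode(),
}

def register(payload):
    """Placeholder for the cloud server's registration endpoint (returns an ID)."""
    return "user-0001"                           # hypothetical ID issued by the CS

user_id = register(registration)
```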
VII. DISASTER RECOVERY MECHANISM
Disaster Recovery is primarily a form of long distance state
replication combined with the ability to start up applications
at the backup site after a failure is detected. Replication at
the application layer can be the most optimized, only
transferring the crucial state of a specific application. For
example, some high-end database systems replicate state by
transferring only the database transaction logs, which can be
more efficient than sending the full state modified by each
query. [13] In general, DR services fall under one of the
following categories:
Hot Backup Site: A hot backup site typically provides a set of mirrored stand-by servers that are always available to run the application once a disaster occurs, providing a minimal recovery time objective (RTO) and recovery point objective (RPO). Hot standbys typically use synchronous replication to prevent any data loss due to a disaster.
Warm Backup Site: A warm backup site may keep state up
to date with either synchronous or asynchronous replication
schemes depending on the necessary RPO. Standby servers
to run the application after failure are available, but are only
kept in a warm state where it may take minutes to bring
them online.
Cold Backup Site: In a cold backup site, data is often only
replicated on a periodic basis, leading to an RPO of hours or
days. In addition, servers to run the application after failure
are not readily available, and there may be a delay of hours
or days as hardware is brought out of storage or repurposed
from test and development systems, resulting in a high
RTO.
VIII. ANALYSIS OF SCHEME
The following paragraphs present the security and general analysis of the system and how we achieve the goals mentioned earlier [13].
1) Data Confidentiality: Since the data is stored in encrypted form in the cloud, and the keys and the choice of algorithm are kept unknown to the cloud server, it is next to impossible for the server to either learn the data or misuse it.
2) Efficient User Revocation: To deal with user revocation in cloud computing, we add an attribute to each user's key and employ multiple value assignments for this attribute, so we can update a user's key simply by adding a new expiration value to the existing key (a minimal sketch of this bookkeeping follows this list). We only require a domain authority to maintain some state information about the user keys, avoiding the need to generate and distribute new keys frequently, which makes our scheme more efficient than existing schemes.
3) No Data Duplication: Correctness can be verified without requiring a local copy of the data, even while the data is in encrypted form. Decryption is done offline at the site of the DO/DR, so data never moves from one location to another in unencrypted form.
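As referenced in item 2 above, the following sketch illustrates the idea of multiple expiration-value assignments only at the level described in this section; the data structure is hypothetical and does not reproduce the actual Enhanced HASBE key format.

```python
# Hypothetical illustration of multiple expiration values attached to a user key.
# Extending access only appends a value; no new key is generated or redistributed.
from datetime import datetime, timedelta

class UserKey:
    def __init__(self, user_id):
        self.user_id = user_id
        self.expiration_values = []              # multiple value assignments for one attribute

    def extend_access(self, days):
        """Domain authority grants another access period by adding an expiration value."""
        self.expiration_values.append(datetime.utcnow() + timedelta(days=days))

    def is_valid(self, now=None):
        """The key is usable as long as any assigned expiration value is still in the future."""
        now = now or datetime.utcnow()
        return any(now < exp for exp in self.expiration_values)

key = UserKey("alice")
key.extend_access(days=30)
assert key.is_valid()
```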
IX. CONCLUSION
In this paper, we introduced the Enhanced HASBE scheme for attaining data confidentiality, data integrity, and data recovery for outsourced data in cloud computing. The Enhanced HASBE scheme seamlessly incorporates a hierarchical structure of system users by applying a hybrid encryption algorithm to ASBE; it is highly efficient, recovering isolated data losses almost immediately and also recovering from bursty data losses. Enhanced HASBE not only supports multiple attributes through flexible attribute-set combinations, but also achieves efficient user revocation through multiple value assignments. We envisage several possible directions for future research in this area.
REFERENCES
[1] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-based encryption for fine-grained access control of encrypted data," in Proc. ACM Conf. on Computer and Communications Security (ACM CCS), Alexandria, VA, 2006.
[2] Zhiguo Wan, Jun'e Liu, and Robert H. Deng, "HASBE: A hierarchical attribute-based solution for flexible and scalable access control in cloud computing," IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, April 2012.
[3] Hybrid cryptosystem. [Online]. Available: http://en.wikipedia.org/wiki/Hybrid_cryptosystem
[4] R. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120-126, Feb. 1978.
[5] A. Shamir, "Identity-based cryptosystems and signature schemes," in Advances in Cryptology - CRYPTO '84, LNCS vol. 196, pp. 47-53, Springer, 1984.
[6] A. Sahai and B. Waters, "Fuzzy identity-based encryption," in Proc. EUROCRYPT 2005, LNCS vol. 3494, pp. 457-473, Springer, Berlin, 2005.
[7] V. Goyal, O. Pandey, A. Sahai, et al., "Attribute-based encryption for fine-grained access control of encrypted data," in Proc. 13th ACM Conference on Computer and Communications Security, New York: ACM, 2006, pp. 89-98.
[8] A. Shamir, "How to share a secret," Communications of the ACM, vol. 22, no. 11, pp. 612-613, 1979.
[9] Amazon Elastic Compute Cloud (Amazon EC2). [Online]. Available: http://aws.amazon.com/ec2/
[10] J. Li, N. Li, and W. H. Winsborough, "Automated trust negotiation using cryptographic credentials," in Proc. ACM Conf. on Computer and Communications Security (CCS), Alexandria, VA, 2005.
[11] A. Sahai and B. Waters, "Fuzzy identity-based encryption," in Advances in Cryptology - EUROCRYPT 2005, LNCS vol. 3494, pp. 457-473.
[12] G. Wang, Q. Liu, and J. Wu, "Hierarchical attribute-based encryption for fine-grained access control in cloud storage services," in Proc. ACM Conf. on Computer and Communications Security (ACM CCS), Chicago, IL, 2010.
[13] Krunal Suthar, Parmalik Kumar, and Hitesh Gupta, "SMDS: Secure model for cloud data storage," International Journal of Computer Applications (0975-8887), vol. 56, no. 3, October 2012.
