Cloud Computing
Authors: Priodyuti Pradhan, P. Syam Kumar, Gautam Mahapatra, and R.
Subramanian
Introduction
Motivation:
Storing important data in the cloud has become a modern trend. After uploading data to the cloud, most users delete their local copies, and with them the complete control they once had over their files. This loss of control raises security concerns, and one of the most important concerns about remotely stored data is its integrity. Integrity may be lost through a disk failure on the server side, or through an unauthorized user tampering with the data by deleting or modifying important files. Thus, once a large amount of data is kept in the cloud, the user can no longer ensure its integrity directly. The user therefore needs a scheme that periodically checks the integrity of the outsourced data without downloading the whole files. This paper proposes a distributed algorithmic approach to this problem with a publicly verifiable, probabilistic scheme.
Previous work in this field uses a single TPA (Third Party Auditor) to verify the integrity of the data. This paper proposes instead the use of multiple SUBTPAs controlled by a main TPA. In a single-auditor system, if the TPA crashes under heavy workload, the entire verification process is aborted. Moreover, during verification the network traffic near the TPA organization is very high and may cause congestion, so performance degrades in the single-TPA scheme. In the proposed distributed verification scheme, the Main TPA distributes the verification task uniformly among a number of SUBTPAs. The Main TPA acts as a Coordinator, and all the SUBTPAs work under the Coordinator, performing their verification tasks concurrently and returning their results to it. This concurrency increases the performance of the model proposed in the paper.
Description of the Algorithm
Theoretical Background:
Sobol Sequence - The Sobol sequence [3], [4] is a low-discrepancy, quasi-random sequence whose values lie in the interval [0, 1). A salient feature of this sequence is that its points are uniformly distributed over [0, 1), and that this uniformity holds for any contiguous segment of the sequence. In other words, the Sobol sequence is segment-wise uniform.
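As a concrete illustration of the background above, the first (one-dimensional) coordinate of a Sobol sequence can be generated with the Gray-code (Antonov-Saleev) construction, where each new point is the previous point XORed with a direction number chosen by the index of the lowest zero bit. This is a minimal sketch for intuition, not the generator used in the paper; the paper draws block numbers from a full Sobol sequence scaled to the block range (e.g. 0 to 8191).

```python
def sobol_1d(n_points, bits=16):
    # First dimension of the Sobol sequence: direction numbers v_j = 2^-(j+1),
    # combined via the Gray-code recurrence x_{i+1} = x_i XOR v_c, where c is
    # the position of the rightmost zero bit of i.
    points = []
    x = 0  # running XOR state, as a fixed-point integer with `bits` bits
    for i in range(n_points):
        c, k = 0, i
        while k & 1:       # find the rightmost zero bit of i
            k >>= 1
            c += 1
        x ^= 1 << (bits - 1 - c)   # XOR in direction number v_c
        points.append(x / (1 << bits))
    return points

# First few points: 0.5, 0.75, 0.25, 0.375, ... - note that every
# power-of-two-length segment is spread evenly over [0, 1).
```

Scaling each point by the number of file blocks (and truncating) yields a segment-wise uniform stream of block numbers to audit.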
Approach:
The Coordinator (the Main TPA) generates a bit string for every SUBTPA, known as the Task Distribution Key (TDK). Each SUBTPA applies its TDK repeatedly as a mask over the generated Sobol sequence until the sequence is exhausted, and takes the sequence values at the positions selected by the mask as the block numbers it must verify.
Example:
Suppose the TDKs for SUBTPA1 and SUBTPA2 are 10101 and 01010 respectively.
Let the generated Sobol random sequence be {1216, 5312, 3264, 7360, 704, 4800,
2752, 6848, 1728}, where the file blocks are numbered from 0 to 8191.
The blocks that SUBTPA1 will verify are determined by SUBTPA1 masking the
above sequence with 10101, repeated cyclically:
{1216, 5312, 3264, 7360, 704, 4800, 2752, 6848, 1728}
  1     0     1     0     1    1     0     1     0
So the blocks that SUBTPA1 will verify are {1216, 3264, 704, 4800, 6848}.
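The cyclic masking step above can be sketched directly; this is a straightforward reading of the example, with the function name chosen here for illustration:

```python
def assign_blocks(tdk, sequence):
    # Apply the TDK cyclically as a bit mask over the Sobol-derived block
    # numbers; a '1' bit claims that position for this SUBTPA.
    return [block for i, block in enumerate(sequence)
            if tdk[i % len(tdk)] == '1']

seq = [1216, 5312, 3264, 7360, 704, 4800, 2752, 6848, 1728]
subtpa1 = assign_blocks("10101", seq)  # [1216, 3264, 704, 4800, 6848]
subtpa2 = assign_blocks("01010", seq)  # [5312, 7360, 2752, 1728]
```

With complementary TDKs, as in the example, the two SUBTPAs partition the sequence between them with no block audited twice.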
The Verification Step:
In the distributed challenge and verification phase, each SUBTPA independently
communicates with the Cloud Servers to obtain proofs. Instead of sending its
whole subsequence, a SUBTPA sends 10% of the subsequence at a time to the
Cloud Servers as a challenge. This reduces the workload on the server side as
well as network congestion. After sending each 10% challenge, the SUBTPA waits
for a proof from any server, since Cloud Computing is based on the Distributed
Control and Distributed Data paradigm. If the proof matches the stored
metadata, the SUBTPA stores TRUE in its own table, Report, sends the next 10%
of the subsequence, and waits for the next proof. If any mismatch occurs during
proof verification, the SUBTPA immediately signals the fault region to the
Coordinator and stores FALSE in the Report table.
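The batched challenge-and-verify loop described above can be sketched as follows. The server RPC (`request_proof`) and the stored metadata (`expected_meta`) are hypothetical stand-ins, since the paper summary does not specify the proof format or the message exchange:

```python
import math

def verify_blocks(blocks, request_proof, expected_meta, batch_frac=0.1):
    """Challenge the servers in 10% batches, per the distributed
    verification phase. `request_proof(block)` and `expected_meta[block]`
    are illustrative placeholders for the Cloud Server RPC and the
    SUBTPA's stored metadata."""
    report = {}
    batch = max(1, math.ceil(len(blocks) * batch_frac))  # 10% of the subsequence
    for start in range(0, len(blocks), batch):
        for b in blocks[start:start + batch]:            # one challenge batch
            proof = request_proof(b)       # proof from whichever server responds
            if proof == expected_meta[b]:
                report[b] = True           # store TRUE in the Report table
            else:
                report[b] = False          # store FALSE and signal the Coordinator
                return report, b           # fault region reported immediately
    return report, None                    # all batches verified
```

Returning early on the first mismatch mirrors the paper's behavior of signaling the Coordinator as soon as a fault region is found, rather than finishing the remaining batches.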
Conclusion
The paper proposes an efficient Distributed Verification protocol based on the
Sobol random sequence. The TDKs are generated so that the protocol distributes
the task uniformly among the SUBTPAs. Most importantly, this uniformity lets
the protocol tolerate SUBTPA failures, and it also gives better performance
over unreliable communication links. The main focus of the paper is the uniform
task distribution among SUBTPAs, so that erroneous blocks are detected as early
as possible. In addition, by distributing the task the protocol reduces the
workload on the server side and the chance of network congestion at both the
server and Coordinator sides. Thus, the Distributed Verification protocol
increases the efficiency and robustness of data-integrity checking in Cloud
Computing.