
Dynamic Replication In Data-Centers connected over IPFS

Nilesh Rathi, CSA, IISc Bangalore
Awadhesh Singh, CDS, IISc Bangalore

Abstract— The default replication factor in IPFS-Cluster[4] can become a bottleneck in the number of requests a cluster can handle when multiple data-centers are connected to each other over IPFS[5] and share data using IPFS-Cluster. Dynamic placement and replication of data is necessary, both inside a cluster and among the connected clusters, to deal with request overload on a single node or cluster in this setting, i.e. multiple clusters connected over IPFS using IPFS-Cluster.
I. PROBLEM DESCRIPTION

IPFS is a distributed file system that seeks to connect all computing devices with the same system of files. IPFS-Cluster, on the other hand, is software that orchestrates IPFS daemons running on different hosts. An IPFS-Cluster is formed by a number of peers, each of them associated with one IPFS daemon. The peers share a pin-set (also known as the shared state) which lists the CIDs (Content Identifiers) that are cluster-pinned and their properties (allocations, replication factor, etc.). Multiple data-centers can also connect their clusters by installing IPFS over their clusters and sharing the secret key (used by new nodes to connect with the peers of the cluster). IPFS-Cluster uses Raft[3] consensus to maintain a consistent view of the peer-set (peers participating in the cluster) and the pin-set.

Fig. 1. IPFS Cluster Architecture

The replication factor in IPFS-Cluster decides how many replicas to place; based on a sorting metric, the cluster sorts the potential peers and pins the CID to the top peers obtained from that metric. The replication factor, however, remains constant.

Our main contribution in this project is to support dynamic replication in IPFS-Cluster and to use dynamic replication to load-balance the number of requests for a CID within and among clusters by modifying the number of replicas it has, both within and among clusters.

To achieve the above stated goal we need to consider the following:

1) Finding hot-spots: These are the CIDs experiencing the most read/write requests. They can be found by looking at the requests reaching the cluster and setting a threshold above which a CID is termed hot. All requests eventually go to the leader elected by the consensus protocol, so changes are required in how this leader processes requests.
2) Supporting dynamic replication: Right now replication is done based on the default replication value field in the configuration file. This needs to be changed to support changing the replication factor on the go.
3) Replication metric: This is the core of the problem: deciding how many times each CID needs to be replicated based on the number of requests it gets. The metric needs to consider both the number of replicas of a CID and the requests it gets, within and across data-centers. Some changes to the shared pin-set state are needed to track the number of requests for each CID on a per-cluster basis. Based on this count we can detect which hot CIDs are under-replicated and dynamically replicate them.
4) Cold CIDs: Since we are pinning more instances of hot CIDs, cold CIDs (CIDs with a request count below a minimum threshold) need to be under-pinned. This can also be done by analysing the shared state from time to time.

The above considerations would help to achieve the target.
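As an illustration of the hot/cold classification described in items 1 and 4, the following is a minimal Go sketch (IPFS-Cluster itself is written in Go). The counter structure, the threshold values, and the function names are illustrative assumptions and are not part of the IPFS-Cluster codebase; in the real system the counts would have to live in the shared state.

package main

import (
	"fmt"
	"sync"
)

// requestTracker keeps a per-CID request counter; in a real deployment this
// information would live in the shared pin-set state (hypothetical here).
type requestTracker struct {
	mu     sync.Mutex
	counts map[string]int // CID -> number of requests seen in the current window
}

func newRequestTracker() *requestTracker {
	return &requestTracker{counts: make(map[string]int)}
}

// Record is called once per read/write request observed by the leader.
func (t *requestTracker) Record(cid string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.counts[cid]++
}

// Classify splits CIDs into hot and cold sets using simple thresholds.
func (t *requestTracker) Classify(hotThreshold, coldThreshold int) (hot, cold []string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for cid, n := range t.counts {
		switch {
		case n >= hotThreshold:
			hot = append(hot, cid) // candidate for extra replicas
		case n <= coldThreshold:
			cold = append(cold, cid) // candidate for un-pinning some replicas
		}
	}
	return hot, cold
}

func main() {
	tr := newRequestTracker()
	for i := 0; i < 120; i++ {
		tr.Record("QmHotExample") // hypothetical CID receiving many requests
	}
	tr.Record("QmColdExample") // hypothetical CID receiving almost none
	hot, cold := tr.Classify(100, 5)
	fmt.Println("hot CIDs:", hot, "cold CIDs:", cold)
}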


The analysis of the shared pin-set and peer-set needs to be done from time to time. This can be done by sending peer-set and pin-set information to the leader periodically, like a heartbeat but less frequently. The leader would then perform the required orchestration based on this information to load-balance the number of replicas of CIDs.
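The periodic reporting could look roughly like the sketch below. The reporting interval, the stats payload, and sendToLeader are placeholders; a real implementation would reuse IPFS-Cluster's existing peer-to-peer RPC rather than invent a new channel.

package main

import (
	"log"
	"time"
)

// pinStats is a hypothetical per-peer summary shipped to the leader.
type pinStats struct {
	PeerID   string
	Requests map[string]int // CID -> requests since the last report
}

// sendToLeader is a placeholder; in practice this would go over the
// cluster's existing peer-to-peer RPC.
func sendToLeader(s pinStats) error {
	log.Printf("reporting %d CIDs from %s", len(s.Requests), s.PeerID)
	return nil
}

// reportLoop periodically ships local request counts to the leader,
// much less frequently than the Raft heartbeat.
func reportLoop(peerID string, collect func() map[string]int, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := sendToLeader(pinStats{PeerID: peerID, Requests: collect()}); err != nil {
				log.Println("report failed:", err)
			}
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	// A short interval is used here only so this toy run produces output quickly.
	go reportLoop("peer-1", func() map[string]int { return map[string]int{"QmExample": 7} }, 2*time.Second, stop)
	time.Sleep(5 * time.Second)
	close(stop)
}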
II. NOVELTY/SCALABILITY

This work requires changing the CID replication mechanism of IPFS-Cluster so that it can handle a large number of requests for a CID which would previously have caused contention over the network.

This work helps IPFS-Cluster dynamically change the replication factor and placement of a CID within and among data-centers to handle the I/O requests for that CID. Moreover, by replicating CIDs efficiently, IPFS-Cluster would handle requests more efficiently and perform better. A better replication scheme would also use the underlying storage more efficiently.

III. GAPS/RELATED WORK

IPFS is relatively new, and so is IPFS-Cluster, so not much has been done in this domain. During the initial releases of IPFS-Cluster the policy was to replicate everything everywhere, which was clearly not a scalable solution. Later releases allow setting the replication factor, but it stays constant throughout the run of the cluster. A dynamic replication factor is still not supported, which this work will address. Similar problems have been solved in other file systems such as HDFS by CDRM[1] and ERMS[2]. The common practice they follow is to identify hot-spots and replicate file blocks from there to different nodes in the cluster so as to reduce the load per node.

We would follow a similar approach, but the difference comes from the overall architecture. HDFS, being a distributed file system, has a single node of control called the name-node, whereas in IPFS-Cluster there is no notion of a name-node; CFT consensus protocols such as Raft do provide a leader that could act as a name-node, but that is a property of Raft and not of IPFS. Also, in distributed file systems like HDFS the name-node contains all the information about the state of the cluster and the other nodes are aware of nothing, whereas in IPFS-Cluster there is a shared state held by every peer of the network, so every peer is aware of what the other peers are holding or pinning.

IV. PROBLEM DEFINITION AND APPROACH

The problem can be stated as follows. The aims of the project are:
• Extend support for a dynamic replication factor in IPFS-Cluster.
• Find hot CIDs, i.e. the CIDs for which most of the requests are.
• Use dynamic replication to load-balance the requests for a CID both within a cluster and across multiple IPFS-Clusters connected with each other.

To achieve the above task the following things need to be considered.

1) Shared pin-set: This contains shared state information such as the items pinned by the cluster, the number of replicas, the peer-ids of the peers pinning them, etc. To deal with our problem we need to extend it so that it also records the number of accesses and the last access for each CID per peer per cluster. This metadata would be useful to find hot-spots and also to point out the cold CIDs; these will be our candidates for over- and under-replication.
2) Algorithm for replication: This is the next major task after finding the candidate CIDs for load balancing. For a CID, let DS_i be the number of replicas the i-th data-center is holding, and Req_i be the number of requests for that CID in the i-th data-center. Then

   \sum_{i=1}^{N_{DC}} Req_i = Req_{total}, \qquad \sum_{i=1}^{N_{DC}} DS_i = DS_{total},

   where N_{DC} is the number of data-centers. The number of replicas for data-center i should be in accordance with the number of requests for that data-center:

   DS_i^{new} = \frac{Req_i}{Req_{total}} \cdot DS_{total}

   The initial stages of the cluster run with a constant replication factor, and the formula may be modified as required. After obtaining the number of replicas of a CID per cluster, we need to distribute them within the cluster evenly so that the load is balanced. For now, we would sort the peers inside the cluster by the number of requests they serve and place the replicas on the top peers obtained. Other measures, such as sorting based on the number of CIDs of the particular file to which our target CID belongs, could also be considered. Distributing replicas inside the cluster can be achieved by modifying the allocate function; a sketch of the per-data-center computation follows the list below.
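The sketch below follows the proportional formula above directly; the function name, the use of simple rounding, and the toy numbers are assumptions, and a real version would have to feed its output into the cluster's allocator.

package main

import (
	"fmt"
	"math"
)

// dcLoad holds, for one CID, the request count observed in a single data-center.
type dcLoad struct {
	Name     string
	Requests int
}

// proportionalReplicas splits dsTotal replicas across data-centers in
// proportion to the requests each one sees: DS_i_new = Req_i/Req_total * DS_total.
// Note: rounding may make the counts sum to slightly more or less than dsTotal.
func proportionalReplicas(loads []dcLoad, dsTotal int) map[string]int {
	reqTotal := 0
	for _, l := range loads {
		reqTotal += l.Requests
	}
	out := make(map[string]int)
	if reqTotal == 0 {
		return out // no requests observed: keep the existing allocation
	}
	for _, l := range loads {
		share := float64(l.Requests) / float64(reqTotal) * float64(dsTotal)
		out[l.Name] = int(math.Round(share))
	}
	return out
}

func main() {
	loads := []dcLoad{{"dc-a", 800}, {"dc-b", 150}, {"dc-c", 50}}
	fmt.Println(proportionalReplicas(loads, 10)) // prints map[dc-a:8 dc-b:2 dc-c:1]
}

Because rounding does not preserve the total exactly, the result would still have to pass the checks described next before any CID is re-pinned.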
Various things need to be considered while implementing the above measures, such as the space available on a peer, the minimum and maximum replication factor of a cluster, and the liveness of the cluster. Only after performing these checks is the CID dynamically replicated. For example, the replication factor for a CID in a cluster cannot go below the minimum replication value even if the above formula says so.

There may be slight modifications to the above measures during the development phase, but they would be more or less similar to the above. The approach is based on the law of equal proportion, and it is most sensible to replicate CIDs in proportion to their requests. The approach seems feasible and should work fine.
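As a small sketch of the bound check mentioned above (the parameter names are illustrative, not actual IPFS-Cluster configuration keys):

package main

import "fmt"

// clampReplicas applies the cluster-level bounds before a CID is re-pinned:
// the computed count may never fall below the minimum replication factor nor
// exceed the maximum, regardless of what the proportional formula suggests.
func clampReplicas(computed, minFactor, maxFactor int) int {
	if computed < minFactor {
		return minFactor
	}
	if computed > maxFactor {
		return maxFactor
	}
	return computed
}

func main() {
	// The formula asked for 0 replicas, but the cluster's minimum is 2.
	fmt.Println(clampReplicas(0, 2, 10)) // 2
	// The formula asked for 14 replicas, but the cluster's maximum is 10.
	fmt.Println(clampReplicas(14, 2, 10)) // 10
}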
V. PROPOSED EXPERIMENTS

Since different P2P file systems have different architectures, there is no common benchmark to perform the comparison, but at a high level we need to measure the following.

Performance: The performance of the IPFS cluster can be measured by the read and write latency over the proposed setup. Workloads containing a large number of I/O requests can be used for the evaluation. Moreover, IPFS-Cluster itself contains a test suite that can exercise cluster performance. The baseline for this experiment would be the default IPFS-Cluster setup.
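A simple read-latency probe could time fetches of pinned CIDs through the local IPFS daemon's HTTP API, as in the sketch below; the API address, the chosen endpoint (/api/v0/cat), and the example CIDs describe one assumed test setup rather than a fixed part of the methodology.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// timeFetch measures the latency of a single read of a CID through the local
// IPFS daemon's HTTP API (POST /api/v0/cat). The port and endpoint are
// assumptions about the test setup, not part of IPFS-Cluster itself.
func timeFetch(apiAddr, cid string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Post(apiAddr+"/api/v0/cat?arg="+cid, "application/octet-stream", nil)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if _, err := io.Copy(io.Discard, resp.Body); err != nil { // read the full object
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	cids := []string{"QmExampleCID1", "QmExampleCID2"} // hypothetical pinned CIDs
	for _, c := range cids {
		d, err := timeFetch("http://127.0.0.1:5001", c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Println(c, "read latency:", d)
	}
}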
Storage: The storage overhead compared to the default replication case needs to be considered and measured. It can be found by adding up the size of the .ipfs directory of each peer.
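Summing the repository sizes could be scripted as in the following sketch; the repository paths are assumptions that depend on how each peer is configured.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// dirSize walks a peer's IPFS repository and returns the total size in bytes.
func dirSize(root string) (int64, error) {
	var total int64
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		total += info.Size()
		return nil
	})
	return total, err
}

func main() {
	repos := []string{"/data/peer1/.ipfs", "/data/peer2/.ipfs"} // hypothetical repo paths
	var overall int64
	for _, r := range repos {
		sz, err := dirSize(r)
		if err != nil {
			fmt.Println(r, "error:", err)
			continue
		}
		overall += sz
		fmt.Printf("%s: %d bytes\n", r, sz)
	}
	fmt.Println("total storage used:", overall, "bytes")
}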
Fault tolerance: This is another area we are interested in. It is not included in our goals, but we will try to cover it. To measure it, we would simulate faults by disconnecting the network and check whether the integrity of the stored data is preserved.
The success of the system depends on whether it gives better performance than the normal setting with similar or less storage overhead.
REFERENCES
[1] CDRM: A cost-effective dynamic replication management scheme for cloud storage cluster
[2] ERMS: An elastic replication management system for HDFS
[3] In search of an understandable consensus algorithm
[4] IPFS-Cluster: https://github.com/ipfs/ipfs-cluster
[5] IPFS: https://github.com/ipfs/ipfs
