Abstract: In large-scale datasets, the goal of mining frequent itemsets with existing parallel mining algorithms is to balance the load by distributing such enormous data across collections of computers. However, we identify a performance issue in existing mining algorithms [1]. To address this problem, we introduce a new data partitioning approach based on the MapReduce programming model. In our proposed system, we introduce a new technique called the frequent itemset ultrametric tree, used in place of conventional FP-trees. Experimental outcomes show that eliminating redundant transactions improves performance by reducing computing loads.

Keywords: Frequent Itemset, MapReduce, Data partitioning, parallel computing, load balance

1 INTRODUCTION

Big data is an emerging technology in the modern world. It refers to amounts of data so great that they are hard to process using traditional data processing techniques or software, which is a foremost challenge. To handle this scenario, we need to design a distributed storage system. In big data, this is achieved by a system called Hadoop, which stores and processes big data. It includes two important components: HDFS (for storing big data) and the MapReduce framework (for processing big data). Big data processing involves three different stages: data ingestion, data storage, and data analysis.

When data is distributed, it is hard to determine the locality of files across bigger datasets. A better solution to this problem is to follow a Master-Slave architecture, in which a single machine acts as the Master and the remaining machines are treated as Slaves. The Master knows the location of each file stored on the different Slave machines, so whenever a client sends a request, the Master processes it by locating the requested file on one of the underlying Slave machines. Hadoop follows the same architecture.

Major challenges in big data are information safekeeping, distribution, searching, revelation, querying, and updating such data. Data analysis is another big concern that needs attention when dealing with big data, since big data is formed from different types of data and applications, such as social media data and online auctions. Data is differentiated into three major types: structured, unstructured, and semi-structured. Big data is also defined by three major Vs, Volume, Velocity, and Variety, which give a clear notion of what big data is.

Nowadays data is growing very fast. Consider an example: many hospitals hold trillions of data facets of ECG data, and Twitter alone collects around 170 million temporal data items and, every now and then, serves as many as 200 million queries per day. The most important limitations of existing systems are handling larger datasets (our databases can handle only structured data, not varieties of data), fault tolerance, and scalability. That is why big data plays such an important role these days.

Scalability is also a concern when distributed data is executed on large-scale clusters. Many algorithms for frequent itemset mining (FIM) have been built on Hadoop with the aim of balancing the load by distributing it equally [4] among nodes. When such data is divided into different parts, the connections between the data must be maintained; this leads to poor data locality and, in parallel, increases data-shuffling costs and network overhead. To improve data locality, we introduce a parallel FIM technique in which the bulk of the data is distributed across Hadoop clusters.

In this paper, we implement FIM on Hadoop clusters [10] using the MapReduce framework. This project aims to boost the performance of parallel FIM on Hadoop clusters, which is achieved with the help of Map and Reduce jobs.
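As a rough illustration of the MapReduce model described above, the following sketch simulates the map, shuffle, and reduce phases in plain Python over invented sample transactions. The item names and the two-partition split are our own assumptions for illustration only; a real Hadoop job would read input splits from HDFS and run mappers and reducers on separate nodes.

```python
from collections import defaultdict
from itertools import chain

# Hypothetical sample data: two partitions of transactions, standing in
# for two HDFS input splits processed by separate mapper tasks.
partitions = [
    [["bread", "milk"], ["bread", "butter"]],
    [["milk", "butter"], ["bread", "milk", "butter"]],
]

def map_phase(transactions):
    # Emit (item, 1) for every item occurrence, like a Hadoop Mapper.
    return [(item, 1) for txn in transactions for item in txn]

def reduce_phase(pairs):
    # Group by key and sum the counts, like a Hadoop Reducer.
    counts = defaultdict(int)
    for item, one in pairs:
        counts[item] += one
    return dict(counts)

# The "shuffle" step is implicit here: all mapper outputs are simply
# concatenated before being handed to the single reducer.
support = reduce_phase(chain.from_iterable(map_phase(p) for p in partitions))
print(support)  # {'bread': 3, 'milk': 3, 'butter': 3}
```

This mirrors the classic word-count pattern; counting item support is the simplest building block of FIM on MapReduce.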
2 OBJECTIVES

The main goal of the project is to eliminate redundant transactions on Hadoop nodes to improve performance, which is achieved by reducing the computing and networking load. The project mainly focuses on grouping highly significant transactions into a data partition. In the area of big data processing, the MapReduce framework has been used to develop parallel data mining algorithms, including FIM and FP-growth [3].

3 METHODOLOGY

Traditional mining algorithms [2] are not enough to handle large datasets. We therefore introduce a new data partitioning technique. Parallel computing [7] is a further method we introduce here, used to process the redundant transactions in parallel, so that we can achieve better performance than the traditional mining algorithms.
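One way to realise the idea of eliminating redundant transactions described above is to merge duplicate transactions into weighted itemsets before mining, so each distinct itemset is processed once with a multiplicity. This is a minimal sketch with invented sample data, not the paper's actual FIUT implementation:

```python
from collections import Counter

# Hypothetical transaction list containing duplicates. Each transaction
# is normalised to a sorted tuple so identical itemsets compare equal
# regardless of item order.
transactions = [
    ("bread", "milk"), ("milk", "bread"), ("butter",),
    ("bread", "milk"), ("butter",),
]

def compress(transactions):
    # Merge identical transactions into (itemset -> weight) counts, so
    # the miner sees each distinct itemset once instead of repeatedly.
    return Counter(tuple(sorted(t)) for t in transactions)

weighted = compress(transactions)
# Five raw transactions shrink to two distinct weighted itemsets,
# reducing both the computing load and the data shuffled between nodes.
```

The weight attached to each distinct itemset preserves exact support counts, so no mining accuracy is lost by the compression.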
In this project, we try to show how to achieve a better performance measure by comparing the existing parallel mining algorithm with the data partitioning system using clustering algorithms. First, we load large datasets into HDFS [6]; once uploaded, they reach the main web server where the parallel FIM [5] application is running. Based on the minimum support, the data is partitioned between two different servers, which run two MapReduce jobs. Finally, the results are sent back to the main server, which conducts another MapReduce job to mine further frequent itemsets. Thus, we run three MapReduce jobs in total.

Step 4: Accumulating: the outcomes generated in Step 3 are combined to produce the final result.

5 OUTCOMES

Bringing together the new parallel mining algorithm and data partitioning yields better performance compared with traditional mining algorithms such as Apriori and MLFPT [9], as showcased in the graph below.
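The three-MapReduce-job workflow from the methodology can be sketched in miniature: split the transactions across two hypothetical servers, count itemsets within each partition, then merge the partial counts on a "main server" and apply the minimum support. The brute-force itemset enumeration below is only a stand-in for the paper's FIUT-based mining, and the data and threshold are invented for illustration:

```python
from collections import Counter
from itertools import combinations

MIN_SUPPORT = 2  # assumed threshold, not taken from the paper's experiments

# Invented sample transactions, to be split across two "servers".
transactions = [
    ("bread", "milk"), ("bread", "butter"), ("milk", "butter"),
    ("bread", "milk", "butter"), ("bread", "milk"),
]

def mine_partition(txns):
    # Stand-in for the per-server MapReduce jobs: count every itemset
    # occurring in this partition. Brute-force enumeration is exponential
    # in transaction width; FIUT avoids this, but the counting is the same.
    counts = Counter()
    for t in txns:
        items = sorted(t)
        for k in range(1, len(items) + 1):
            counts.update(combinations(items, k))
    return counts

def merge(partials, min_support):
    # Stand-in for the final job on the main server: sum the partial
    # counts and keep only the globally frequent itemsets.
    total = Counter()
    for p in partials:
        total.update(p)
    return {itemset: c for itemset, c in total.items() if c >= min_support}

halves = [transactions[:3], transactions[3:]]          # partition step
partials = [mine_partition(h) for h in halves]         # two parallel jobs
frequent = merge(partials, MIN_SUPPORT)                # accumulating job
```

Because support counts are additive across partitions, merging the partial counts before thresholding gives exactly the same frequent itemsets as mining the whole dataset at once.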
Fig 5.2 Speed up performance

ACKNOWLEDGEMENT

I would also like to thank Mrs. Swathi, Associate Professor and HOD, Department of Computer Science and Engineering, CMRIT, Bangalore, who shared her opinions and experiences, through which I received the information crucial for the project.
CONCLUSION AND FUTURE SCOPE

In any area we consider, a huge volume of records is generated in a fraction of a second. To process such information, Apache Hadoop provides frameworks such as MapReduce. With traditional parallel mining algorithms for frequent itemset mining, processing such data takes more time, and system performance and load balancing were major challenges. This experiment introduces a new parallel mining algorithm called FIUT using the MapReduce programming paradigm; it divides the input data across multiple Hadoop nodes and starts doing parallel

REFERENCES

[1]. Fast Parallel ARM without Candidacy Generation. Osmar R. Zaïane, Mohammad El-Hajj, Paul Lu. Canada: IEEE, 2001. 7695-1119-8.
[2]. Cloud Data Mining based on Association Rule. CH. Sekhar, S. Reshma Anjum. 2091-2094, Andhra Pradesh: International Journal of Computer Science and Information Technology, 2014, Vol. 5 (2). 09759646.
[3]. An Enhanced FP-Growth Based on MapReduce for Mining Association Rules. Arkan A. G. Al-Hamodi, Songfeng Lu, Yahya E. A. Al-Salhi. China: IJDKP, 2016, Vol. 6.