
Computer Science and Engineering Department, Nirma University, Ahmedabad

STTP on DATA MINING: TECHNIQUES AND APPLICATIONS at SVNIT, Surat

Outline

- Learning
- Classification
- Hypothesis generation
- Decision Tree Induction
- Classification by Backpropagation
- Bayesian classification

Learning vs Memory

Learning is how you acquire new information about the world, and memory is how you store that information over time

- Eric R. Kandel, M.D., vice chairman of The Dana Alliance for Brain Initiatives and recipient of the 2000 Nobel Prize in Physiology or Medicine

Mining

- Extracting useful information from a huge collection of available data
- Knowledge Discovery in Databases (KDD), Data Mining, Knowledge Extraction, and Data Archaeology are other terms used to denote the process
- Mining can take the following forms:

- Concept Description: Characterisation and Discrimination
- Association Analysis
- Classification and Prediction
- Cluster Analysis
- Outlier Analysis
- Evolution Analysis

Learning

Supervised Learning

- The training data are accompanied by labels indicating the class of the observations
- New data is classified based on the training set
- A training set is needed
- Ex.: classroom teaching, where the teacher modifies the pace and flow of the talk based on feedback

Unsupervised Learning

- The class labels of the training data are unknown
- Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
- Like a video lecture: there is no feedback to correct errors
- Used for clustering

Classification

Classification:

- Predicts categorical class labels
- Classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses the model to classify new data
- It is a kind of supervised learning

1. Model construction
   - Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
   - The set of tuples used for model construction is called the training set
   - The model is represented as classification rules, decision trees, or mathematical formulae

2. Model usage
   - Estimate the accuracy of the model
   - If the accuracy is acceptable, use the model to classify unknown samples

Classification

Classification can be achieved with numerous techniques:

- Hypothesis Learning
  - Find-S
  - Candidate Elimination
- Decision Tree Induction
  - ID3
  - C4.5
  - CART
- Neural Networks
  - Backpropagation
- Bayesian Network

9

Hypothesis Learning

- Inferring a Boolean-valued function from training examples of its inputs and outputs
- The task is to search through a large space of hypotheses defined by the hypothesis representation
- Each hypothesis consists of a conjunction of constraints on the instance attributes

10

Hypothesis Representation

For each attribute, the hypothesis will either:
- indicate by a "?" that any value is acceptable for this attribute,
- specify a single required value (e.g., Warm) for the attribute, or
- indicate by a "∅" that no value is acceptable

Ex.
<Sunny, Warm, Normal, Strong, Warm, Same>
<Sunny, Warm, ?, ?, Warm, Same>
<?, ?, ?, ?, ?, ?> - Most general hypothesis
<∅, ∅, ∅, ∅, ∅, ∅> - Most specific hypothesis

11

Training Data

12

General-to-specific ordering

Various hypotheses can be ordered in terms of generality and specificity

13

Find-S

Finds the maximally specific hypothesis consistent with the training data. The algorithm is illustrated by the worked example below.

14

Find-S

- Start with h initialized to <∅, ∅, ∅, ∅, ∅, ∅>
- The first positive training example generalizes h to cover the example: h = <Sunny, Warm, Normal, Strong, Warm, Same>
- The second positive example generalizes h further: h = <Sunny, Warm, ?, Strong, Warm, Same>
- The 3rd example is negative, so it is ignored
- The 4th example (positive) leads to further generalization: h = <Sunny, Warm, ?, Strong, ?, ?>
- At each stage, the hypothesis is the most specific hypothesis consistent with the training examples seen so far
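This trace can be reproduced with a short sketch of Find-S. The following Python snippet is illustrative only; the attribute values mirror standard EnjoySport-style examples and the helper name find_s is ours:

```python
# Find-S: maximally specific hypothesis consistent with the positive examples.
# '0' (null) means "no value acceptable"; '?' means "any value acceptable".

def find_s(examples):
    n = len(examples[0][0])
    h = ['0'] * n                      # most specific hypothesis <0, 0, ..., 0>
    for x, label in examples:
        if label != 'yes':             # negative examples are ignored
            continue
        for i, value in enumerate(x):
            if h[i] == '0':            # first positive example: copy the attribute value
                h[i] = value
            elif h[i] != value:        # conflicting value: generalize to '?'
                h[i] = '?'
    return h

# EnjoySport-style training data (illustrative)
training = [
    (['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'],   'yes'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'],   'yes'),
    (['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'], 'no'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'], 'yes'),
]

print(find_s(training))   # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```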

15

Find-S

16

- Has the learner converged to the correct hypothesis?
- Why prefer the most specific hypothesis?
- Are the training examples consistent?
- What if there are multiple maximally specific hypotheses consistent with the training data?

17

Candidate-Elimination algorithm

- The idea is to output a description of the set of all hypotheses consistent with the training examples
- The set of all hypotheses consistent with the observed training examples is called the Version Space
- The version space can be represented by its most general and most specific members
- These members form the general and specific boundary sets that delimit the version space within the partially ordered hypothesis space

18

19

Candidate-Elimination algorithm(1/2)

20

Candidate-Elimination algorithm(2/2)

21

Candidate-Elimination algorithm

22

Candidate-Elimination algorithm

23

Candidate-Elimination algorithm

24

25

- Practical applications of the Candidate-Elimination and Find-S algorithms are limited by the fact that both perform poorly when given noisy training data
- They allow only conjunctive hypotheses
- With Candidate-Elimination, if the number of training examples is small, the result may be only a partially learned hypothesis

26

- An internal node is a test on an attribute
- A branch represents an outcome of the test, e.g., Color = red
- A leaf node represents a class label or class label distribution
- At each node, one attribute is chosen to split the training examples into classes that are as distinct as possible
- A new case is classified by following a matching path to a leaf node

27

| Outlook  | Temperature | Humidity | Windy  | Play? |
|----------|-------------|----------|--------|-------|
| sunny    | hot         | high     | weak   | no    |
| sunny    | hot         | high     | strong | no    |
| overcast | hot         | high     | weak   | yes   |
| rain     | mild        | high     | weak   | yes   |
| rain     | cool        | normal   | weak   | yes   |
| rain     | cool        | normal   | strong | no    |
| overcast | cool        | normal   | strong | yes   |
| sunny    | mild        | high     | weak   | no    |
| sunny    | cool        | normal   | weak   | yes   |
| rain     | mild        | normal   | weak   | yes   |
| sunny    | mild        | normal   | strong | yes   |
| overcast | mild        | high     | strong | yes   |
| overcast | hot         | normal   | weak   | yes   |
| rain     | mild        | high     | strong | no    |

28

[Decision tree figure: Outlook at the root; the sunny branch tests Humidity (high → No, normal → Yes); the overcast branch leads to Yes; the rain branch continues]

29

Classification Rules

Another way of representing the tree, e.g.:

If Outlook = sunny and Humidity = high then Play = No
If Outlook = sunny and Humidity = normal then Play = Yes
If Outlook = overcast then Play = Yes
If Outlook = rain and Windy = strong then Play = No
If Outlook = rain and Windy = weak then Play = Yes

30

Top-down tree construction
- At the start, all training examples are at the root.
- Partition the examples recursively by choosing one attribute at a time.

Bottom-up tree pruning
- Remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases.

31

Basic algorithm (a greedy algorithm)
- The tree is constructed in a top-down, recursive, divide-and-conquer manner
- At the start, all the training examples are at the root
- Attributes are categorical (continuous-valued attributes are discretized in advance)
- Examples are partitioned recursively based on selected attributes
- Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

Conditions for stopping partitioning
- All samples for a given node belong to the same class
- There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
- There are no samples left
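The greedy procedure above can be sketched in a few lines. This is an illustrative Python implementation assuming categorical attributes stored as a list of dictionaries; the names entropy, info_gain and id3 are ours, and information gain is used as the selection measure:

```python
import math
from collections import Counter

def entropy(rows, target):
    counts = Counter(r[target] for r in rows)
    total = len(rows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def info_gain(rows, attr, target):
    total = len(rows)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [r for r in rows if r[attr] == value]
        remainder += len(subset) / total * entropy(subset, target)
    return entropy(rows, target) - remainder

def id3(rows, attrs, target):
    classes = [r[target] for r in rows]
    if len(set(classes)) == 1:                 # all samples in one class -> leaf
        return classes[0]
    if not attrs:                              # no attributes left -> majority vote
        return Counter(classes).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, a, target))
    tree = {best: {}}
    for value in set(r[best] for r in rows):   # one branch per attribute value
        subset = [r for r in rows if r[best] == value]
        remaining = [a for a in attrs if a != best]
        tree[best][value] = id3(subset, remaining, target)
    return tree
```

With the weather table shown later loaded as dictionaries, id3(rows, ['Outlook', 'Temperature', 'Humidity', 'Windy'], 'Play?') would be expected to place Outlook at the root, matching the tree derived in the following slides.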

32

At each node, available attributes are evaluated on the basis of their capabilities in separating the classes of the training examples. A Goodness function is used for this purpose. Typical goodness functions:

- information gain (ID3/C4.5)
- information gain ratio
- gini index

33


Which is the best attribute?

- The one which will result in the smallest tree
- Heuristic: choose the attribute that produces the purest nodes

Information gain increases with the average purity of the subsets that an attribute produces

34


Information Gain

Computing information

Information gain measures how well a given attribute, on its own, separates the training examples according to their target classification.

Entropy

- Characterizes the (im)purity of an arbitrary collection of examples
- Given a collection S of samples with n classes, with p_i being the probability of occurrence of the i-th class:

  Entropy(S) = − Σ_{i=1}^{n} p_i log2(p_i)

35


Entropy

Consider the training set S containing 14 samples, of which 9 belong to class yes and 5 to class no. The entropy of S relative to this Boolean classification is

  Entropy(S) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940

36


Entropy

- Entropy is zero if all members belong to the same class
- Entropy is 1 if the classes are present in equal numbers (for a two-class collection)
- Entropy is between 0 and 1 if the classes are present in unequal numbers

37


Information Gain

It is the expected reduction in entropy caused by partitioning the collection according to an attribute A:

  Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)

For the weather data and the attribute Windy:

  Gain(S, Windy) = Entropy(S) − (8/14) · Entropy(S_weak) − (6/14) · Entropy(S_strong)
                 = 0.940 − (8/14) · 0.811 − (6/14) · 1.0 = 0.048

On similar grounds,
  Gain(S, Outlook) = 0.246
  Gain(S, Humidity) = 0.151
  Gain(S, Temperature) = 0.029
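These values can be checked with a few lines of Python; the class counts below are read off the weather table, and the helper name is ours:

```python
import math

def entropy(counts):
    """Entropy of a collection described by its per-class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

s = entropy([9, 5])                                   # 0.940 for the full set
# Windy: weak -> 6 yes / 2 no, strong -> 3 yes / 3 no
gain_windy = s - (8 / 14) * entropy([6, 2]) - (6 / 14) * entropy([3, 3])
# Outlook: sunny -> 2 yes / 3 no, overcast -> 4 yes / 0 no, rain -> 3 yes / 2 no
gain_outlook = (s - (5 / 14) * entropy([2, 3])
                  - (4 / 14) * entropy([4, 0])
                  - (5 / 14) * entropy([3, 2]))
print(round(gain_windy, 3), round(gain_outlook, 3))   # ~0.048 and ~0.247
```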

38


39


Decision Tree

- The information gain of the Outlook attribute is the highest among all attributes
- It provides the best prediction of the target class
- Select Outlook as the test attribute at the root
- Create branches below the node for each possible value of Outlook

40

Decision Tree

41

Continuing to split

42


Decision Tree

43

- Overfit tree
- Incorporating continuous-valued attributes
- Alternative attribute selection measures
- Handling missing values

44

Tree Overfitting

Consider the error of a tree T over:
- Training data: error_train(T)
- The entire distribution of data: error_data(T)

A tree T overfits the training data if there is an alternative tree T' such that error_train(T) < error_train(T') and error_data(T) > error_data(T')

45

Tree Overfitting

- As the complexity of the tree increases, its error rate on unobserved samples tends to increase
- Consider adding an erroneous tuple to the set:

  Outlook = Sunny, Temperature = Hot, Humidity = Normal, Wind = Strong, PlayTennis = No

- What is the effect of this 15th tuple on the tree?
- The new tree T' will fit the training data better than the existing tree T, but may not perform well on the entire distribution

46

Avoid Overfitting

Approaches are broadly classified into two classes:

- Approaches that stop growing the tree earlier, when further splitting is not statistically significant (pre-pruning)
  - Less successful, as it is difficult to estimate precisely when to stop growing the tree
- Approaches that allow the tree to overfit the data and then post-prune it (post-pruning)
  - Reduced Error Pruning
  - Rule Post-Pruning (C4.5)

47

- ID3 requires that the test attributes and the target attribute be discrete-valued
- A continuous-valued attribute should be converted into a discrete-valued attribute by partitioning its values into discrete intervals
- Partitioning requires splitting the range of values: find the best possible split points
- Candidate split points can be evaluated using measures such as information gain or χ² analysis
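One common way to pick a split point, sketched below, is to sort the examples by the attribute value and evaluate the information gain of each candidate threshold (midpoints between distinct adjacent values). The helper names and the temperature readings are illustrative, not from the slides:

```python
import math

def entropy(labels):
    total = len(labels)
    result = 0.0
    for cls in set(labels):
        p = labels.count(cls) / total
        result -= p * math.log2(p)
    return result

def best_split(values, labels):
    """Return the threshold on a continuous attribute with the highest information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_threshold = -1.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2     # candidate midpoint
        left = [lbl for v, lbl in pairs if v <= threshold]
        right = [lbl for v, lbl in pairs if v > threshold]
        gain = (base - (len(left) / len(pairs)) * entropy(left)
                     - (len(right) / len(pairs)) * entropy(right))
        if gain > best_gain:
            best_gain, best_threshold = gain, threshold
    return best_threshold, best_gain

# Illustrative continuous Temperature readings paired with the Play decision
temps = [85, 80, 83, 70, 68, 65, 64, 72, 69, 75, 75, 72, 81, 71]
play  = ['no','no','yes','yes','yes','no','yes','no','yes','yes','yes','yes','yes','no']
print(best_split(temps, play))
```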

48

The information gain measure favors attributes with many values over those with few values. Consider adding an attribute Date, with a distinct value for each tuple, to the training data. As the Date field alone is capable of classifying the tuples, information gain will select it, and it will lead to a pure classification.

This may result in overfitting (selection of an attribute that is non-optimal for prediction)

49

- The result is a tree with a single node having 14 branches, each leading to a leaf node; it would fare poorly on subsequent examples
- This is not a good classifier
- Use another measure, such as gain ratio, to select the attribute
- Gain ratio may choose an attribute just because its split information is very low
- Standard fix:
  - First, only consider attributes with greater-than-average information gain
  - Then, compare them on gain ratio

50

| Attribute   | Info  | Gain                | Split info            | Gain ratio          |
|-------------|-------|---------------------|-----------------------|---------------------|
| Outlook     | 0.693 | 0.940-0.693 = 0.247 | info([5,4,5]) = 1.577 | 0.247/1.577 = 0.156 |
| Temperature | 0.911 | 0.940-0.911 = 0.029 | info([4,6,4]) = 1.557 | 0.029/1.557 = 0.019 |
| Humidity    | 0.788 | 0.940-0.788 = 0.152 | info([7,7]) = 1.000   | 0.152/1.000 = 0.152 |
| Windy       | 0.892 | 0.940-0.892 = 0.048 | info([8,6]) = 0.985   | 0.048/0.985 = 0.049 |
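The split-info and gain-ratio figures can be verified with a short sketch (the function names are ours):

```python
import math

def info(counts):
    """Entropy of a distribution given by counts (used here for split info)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def gain_ratio(gain, branch_sizes):
    return gain / info(branch_sizes)

# Outlook splits the 14 examples into branches of size 5, 4 and 5
print(round(info([5, 4, 5]), 3))               # ~1.577 (split info)
print(round(gain_ratio(0.247, [5, 4, 5]), 3))  # ~0.157; the table's 0.156 uses unrounded inputs
```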

51


- The available data may have missing values for some attributes
- It is common to estimate a missing attribute value based on other examples for which this attribute has a known value
- Strategies:
  - Assign it the value that is most common among the training examples
  - Assign it the most common value among the examples at node n that have the same classification c(x)
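A minimal sketch of the second strategy, assuming the data is held as a list of dictionaries with None marking a missing value (the names are ours):

```python
from collections import Counter

def fill_missing(rows, attr, target):
    """Replace missing values of `attr` with the most common value among rows of the same class."""
    filled = []
    for row in rows:
        if row[attr] is None:
            same_class = [r[attr] for r in rows
                          if r[target] == row[target] and r[attr] is not None]
            row = dict(row, **{attr: Counter(same_class).most_common(1)[0][0]})
        filled.append(row)
    return filled

rows = [
    {'Humidity': 'high',   'Play': 'no'},
    {'Humidity': None,     'Play': 'no'},    # missing value
    {'Humidity': 'normal', 'Play': 'yes'},
    {'Humidity': 'high',   'Play': 'no'},
]
print(fill_missing(rows, 'Humidity', 'Play'))  # the missing entry becomes 'high'
```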

52

Discussion

- The algorithm for top-down induction of decision trees (ID3) was developed by Ross Quinlan
- Gain ratio is just one modification of this basic algorithm
- It led to the development of C4.5, which can deal with numeric attributes, missing values, and noisy data
- Similar approach: CART
- There are many other attribute selection criteria (but almost no difference in the accuracy of the result)

53

Summary

- Top-down decision tree construction
- Choosing the splitting attribute
- Information gain is biased towards attributes with a large number of values
- Gain ratio takes the number and size of branches into account when choosing an attribute

54

Classification by Backpropagation

- Backpropagation: a neural network learning algorithm
- A neural network: a set of connected input/output units where each connection has a weight associated with it
- During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples
- Also referred to as connectionist learning due to the connections between units

55

Weakness

- Long training time
- Requires a number of parameters that are typically best determined empirically, e.g., the network topology or "structure"
- Poor interpretability: difficult to interpret the symbolic meaning behind the learned weights and the "hidden units" in the network

Strength

- High tolerance to noisy data
- Ability to classify untrained patterns
- Well suited for continuous-valued inputs and outputs
- Successful on a wide array of real-world data
- Algorithms are inherently parallel
- Techniques have recently been developed for the extraction of rules from trained neural networks

56

- The network consists of one input layer, one or more hidden layers, and an output layer
- Each layer consists of units
- The input layer corresponds to the attributes in the training data
- The inputs pass through the input layer, are weighted, and are fed to the second layer, known as a hidden layer; its outputs are forwarded to the next layer
- The output layer emits the network's prediction
- There are no clear rules for topology design, such as the number of hidden layers or the initial weights

57

A two-layer neural network

58

- First decide the network topology: the number of units in the input layer, the number of hidden layers (if > 1), the number of units in each hidden layer, and the number of units in the output layer
- Normalize the input values of each attribute measured in the training tuples to [0.0, 1.0]
- One input unit per domain value, each initialized to 0
- Output: for classification with more than two classes, one output unit per class is used
- Once a network has been trained, if its accuracy is unacceptable, repeat the training process with a different network topology or a different set of initial weights

59

Backpropagation

- Iteratively process a set of training tuples and compare the network's prediction with the actual known target value
- For each training tuple, the weights are modified to minimize the mean squared error between the network's prediction and the actual target value
- Modifications are made in the backwards direction: from the output layer, through each hidden layer, down to the first hidden layer; hence "backpropagation"
- Steps:
  1. Initialize weights (to small random numbers) and biases in the network
  2. Propagate the inputs forward (by applying the activation function)
  3. Backpropagate the error (by updating weights and biases)
  4. Check the terminating condition (when the error is very small, etc.)

60

Backpropagation

Propagate the inputs forward
- The inputs are applied to the input layer and pass through it unchanged
- The net input and output of each hidden and output layer unit are computed as

  I_j = Σ_i w_ij · O_i + θ_j

  O_j = 1 / (1 + e^(−I_j))

- This process is repeated until the output of the output layer is computed, which gives the network's prediction

61

Backpropagation

Backpropagate the error
- The error is propagated backwards to update the weights and biases so that they reflect the error in the network's prediction
- For a unit j in the output layer (T_j is the known target value):

  Err_j = O_j (1 − O_j)(T_j − O_j)

- For a unit j in a hidden layer, where k ranges over the units in the next layer:

  Err_j = O_j (1 − O_j) Σ_k Err_k · w_jk

- Weights and biases are then updated using the learning rate l:

  Δw_ij = (l) · Err_j · O_i,   w_ij = w_ij + Δw_ij

  Δθ_j = (l) · Err_j,   θ_j = θ_j + Δθ_j

62

Backpropagation

- Updating the weights and biases after each tuple is presented is known as case updating
- Alternatively, the updates can be accumulated in variables and the actual weights and biases updated only after all tuples have been presented; this is known as epoch updating

63

Backpropagation

Terminating conditions:
- All Δw_ij in the previous iteration were smaller than some threshold
- The percentage of tuples misclassified in the previous iteration is below some threshold
- A prespecified number of iterations has expired

64

Example Problem

AND gate

XOR Gate
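A minimal sketch of training such a network on the XOR problem, using the forward and error equations above: one hidden layer with two units, sigmoid activations, and case updating. The layer sizes, learning rate, iteration count and random seed are our choices, and convergence is not guaranteed for every initialization:

```python
import math, random

random.seed(1)
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden units -> 1 output unit, small random initial weights and biases
w_h = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-0.5, 0.5) for _ in range(2)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(2)]
b_o = random.uniform(-0.5, 0.5)
l = 0.5  # learning rate

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

for _ in range(20000):
    for x, t in data:
        # forward pass: I_j = sum_i w_ij O_i + theta_j, O_j = 1 / (1 + e^-I_j)
        o_h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
        o_o = sigmoid(sum(w_o[j] * o_h[j] for j in range(2)) + b_o)
        # backward pass: Err_j for the output unit and the hidden units
        err_o = o_o * (1 - o_o) * (t - o_o)
        err_h = [o_h[j] * (1 - o_h[j]) * err_o * w_o[j] for j in range(2)]
        # case updating of weights and biases
        for j in range(2):
            w_o[j] += l * err_o * o_h[j]
            for i in range(2):
                w_h[j][i] += l * err_h[j] * x[i]
            b_h[j] += l * err_h[j]
        b_o += l * err_o

for x, t in data:
    o_h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
    # when training succeeds, the outputs approach the targets 0, 1, 1, 0
    print(x, t, round(sigmoid(sum(w_o[j] * o_h[j] for j in range(2)) + b_o), 2))
```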

65

- A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities
- Foundation: based on Bayes' theorem
- Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has performance comparable with decision tree and selected neural network classifiers
- Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
- Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured

66

- Let X be a data sample ("evidence"); its class label is unknown
- Let H be the hypothesis that X belongs to class C
- Classification is to determine P(H|X), the probability that the hypothesis holds given the observed data sample X
- P(H) (prior probability): the initial probability, e.g., that X will buy a computer, regardless of age, income, etc.
- P(X): the probability that the sample data is observed
- P(X|H) (likelihood): the probability of observing the sample X given that the hypothesis holds, e.g., given that X will buy a computer, the probability that X is 31..40 with medium income

67

Bayesian Theorem

Given training data X, posteriori probability of a hypothesis H, P(H|X), follows the Bayes theorem

  P(H|X) = P(X|H) · P(H) / P(X)

- Informally, this can be written as: posterior = likelihood × prior / evidence
- Predict that X belongs to class Ci iff the probability P(Ci|X) is the highest among all the P(Ck|X) for the k classes
- Practical difficulty: requires initial knowledge of many probabilities, and entails significant computational cost

68

- Let D be a training set of tuples and their associated class labels; each tuple is represented by an n-dimensional attribute vector X = (x1, x2, ..., xn)
- Suppose there are m classes C1, C2, ..., Cm
- Classification is to derive the maximum posteriori, i.e., the class with the maximal P(Ci|X)
- This can be derived from Bayes' theorem:

  P(Ci|X) = P(X|Ci) · P(Ci) / P(X)

- Since P(X) is constant for all classes, only P(X|Ci) · P(Ci) needs to be maximized

69

- A simplifying assumption: attributes are conditionally independent given the class (i.e., there are no dependence relations between attributes):

  P(X|Ci) = Π_{k=1}^{n} P(x_k|Ci) = P(x_1|Ci) × P(x_2|Ci) × ... × P(x_n|Ci)

- This greatly reduces the computation cost: only the class distributions need to be counted
- If A_k is categorical, P(x_k|Ci) is the number of tuples in Ci having value x_k for A_k, divided by |C_{i,D}| (the number of tuples of Ci in D)
- If A_k is continuous-valued, P(x_k|Ci) is usually computed from a Gaussian distribution with mean μ and standard deviation σ:

  g(x, μ, σ) = (1 / (√(2π) · σ)) · e^(−(x−μ)² / (2σ²))

  and P(x_k|Ci) = g(x_k, μ_Ci, σ_Ci)

70
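For a continuous attribute this reduces to evaluating the Gaussian density at the observed value; a small sketch with invented numbers for illustration:

```python
import math

def gaussian(x, mu, sigma):
    """g(x, mu, sigma): Gaussian density used for P(xk|Ci) with continuous attributes."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# e.g., if age within class Ci has mean 38 and standard deviation 12, then for a tuple with age = 35:
print(gaussian(35, 38, 12))   # P(age = 35 | Ci)
```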

| age    | income | student | credit_rating | buys_computer |
|--------|--------|---------|---------------|---------------|
| <=30   | high   | no      | fair          | no            |
| <=30   | high   | no      | excellent     | no            |
| 31..40 | high   | no      | fair          | yes           |
| >40    | medium | no      | fair          | yes           |
| >40    | low    | yes     | fair          | yes           |
| >40    | low    | yes     | excellent     | no            |
| 31..40 | low    | yes     | excellent     | yes           |
| <=30   | medium | no      | fair          | no            |
| <=30   | low    | yes     | fair          | yes           |
| >40    | medium | yes     | fair          | yes           |
| <=30   | medium | yes     | excellent     | yes           |
| 31..40 | medium | no      | excellent     | yes           |
| 31..40 | high   | yes     | fair          | yes           |
| >40    | medium | no      | excellent     | no            |

Classes:
- C1: buys_computer = yes
- C2: buys_computer = no

Data sample to classify:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)

71

P(Ci):
- P(buys_computer = yes) = 9/14 = 0.643
- P(buys_computer = no) = 5/14 = 0.357

P(xk|Ci) for each attribute value of X:
- P(age <= 30 | buys_computer = yes) = 2/9 = 0.222
- P(age <= 30 | buys_computer = no) = 3/5 = 0.600
- P(income = medium | buys_computer = yes) = 4/9 = 0.444
- P(income = medium | buys_computer = no) = 2/5 = 0.400
- P(student = yes | buys_computer = yes) = 6/9 = 0.667
- P(student = yes | buys_computer = no) = 1/5 = 0.200
- P(credit_rating = fair | buys_computer = yes) = 6/9 = 0.667
- P(credit_rating = fair | buys_computer = no) = 2/5 = 0.400

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|Ci):
- P(X | buys_computer = yes) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
- P(X | buys_computer = no) = 0.600 × 0.400 × 0.200 × 0.400 = 0.019

P(X|Ci) × P(Ci):
- P(X | buys_computer = yes) × P(buys_computer = yes) = 0.028
- P(X | buys_computer = no) × P(buys_computer = no) = 0.007

Therefore, X belongs to class (buys_computer = yes)
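The same computation can be done programmatically over the table above; this is an illustrative sketch (no Laplacian correction, attribute order as in the table):

```python
from collections import Counter

# The 14 training tuples from the buys_computer table: (age, income, student, credit_rating, class)
data = [
    ('<=30','high','no','fair','no'), ('<=30','high','no','excellent','no'),
    ('31..40','high','no','fair','yes'), ('>40','medium','no','fair','yes'),
    ('>40','low','yes','fair','yes'), ('>40','low','yes','excellent','no'),
    ('31..40','low','yes','excellent','yes'), ('<=30','medium','no','fair','no'),
    ('<=30','low','yes','fair','yes'), ('>40','medium','yes','fair','yes'),
    ('<=30','medium','yes','excellent','yes'), ('31..40','medium','no','excellent','yes'),
    ('31..40','high','yes','fair','yes'), ('>40','medium','no','excellent','no'),
]
x = ('<=30', 'medium', 'yes', 'fair')          # the sample to classify

class_counts = Counter(row[-1] for row in data)
scores = {}
for c, count in class_counts.items():
    score = count / len(data)                  # prior P(Ci)
    for k, value in enumerate(x):              # conditional independence assumption
        matches = sum(1 for row in data if row[-1] == c and row[k] == value)
        score *= matches / count               # P(xk | Ci)
    scores[c] = score

print(scores)                                  # 'yes': ~0.028, 'no': ~0.007
print(max(scores, key=scores.get))             # 'yes'
```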

72

- Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero:

  P(X|Ci) = Π_{k=1}^{n} P(x_k|Ci)

- Ex.: suppose a dataset with 1000 tuples of a class has income = low (0 tuples), income = medium (990), and income = high (10)
- Use the Laplacian correction (or Laplacian estimator): add 1 to each case
  - Prob(income = low) = 1/1003
  - Prob(income = medium) = 991/1003
  - Prob(income = high) = 11/1003
- The corrected probability estimates are close to their uncorrected counterparts
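A small sketch of the correction with the counts from the example (the function name is ours):

```python
def laplace(count, total, n_values):
    """Laplacian-corrected estimate: add 1 to each value's count."""
    return (count + 1) / (total + n_values)

counts = {'low': 0, 'medium': 990, 'high': 10}     # 1000 tuples of class Ci
total, n_values = sum(counts.values()), len(counts)
for value, c in counts.items():
    print(value, laplace(c, total, n_values))      # 1/1003, 991/1003, 11/1003
```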

73

Advantages
- Easy to implement
- Good results obtained in most cases

Disadvantages
- Assumes class-conditional independence, which may cause a loss of accuracy
- In practice, dependencies exist among variables
  - E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
  - Dependencies among these cannot be modeled by a naïve Bayesian classifier

74

References

1. J. Han and M. Kamber, Data Mining: Concepts and Techniques
2. Tom Mitchell, Machine Learning
3. Xindong Wu, Knowledge Acquisition from Data Mining
4. Wei Peng, Juhua Chen, and Haiping Zhou, An Implementation of ID3 Decision Tree Learning Algorithm

75

Discussion

76

Thank You

77
