
Image - Structured or Unstructured?

An image is considered unstructured data.

Even though every digital image is stored in a structured file format such as JPG, PNG, or GIF, the image itself does not directly expose the information that is of interest to a human or a computer system. It can be converted into a structured form through image analysis.

 CIFAR-10 is a widely used dataset for Machine Learning research, created by A. Krizhevsky et al.

 It consists of 60,000 32x32 color images in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 50,000 training images and 10,000 testing images.

 Each class has 6,000 images. The classes in the CIFAR-10 dataset are mutually exclusive.

At a glance:

 Number of classes: 10
 Size of image: 32 x 32 x 3

Note: In this course, we use only a subset of the above dataset due to memory constraints in the online cloud platform. The generation of this subset is explained in the upcoming cards.

Subset Generation
As explained in dataset description, we use only a subset of CIFAR-10 dataset.

1. The dataset of 50,000 samples is split in the ratio 92:8. This split is done to take a smaller portion of the 50,000 samples (i.e., the 8% portion contains only 4,000 images).

2. These 4,000 samples are used for generating the train and test sets for classification.

Here, StratifiedShuffleSplit is used to split the dataset. It splits the data by randomly drawing an equal number of samples from each class.
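
The split code below assumes that data_all and labels_all already hold the 50,000 CIFAR-10 training images and their labels; they are not defined on this card. One illustrative way to obtain them (not necessarily how the course environment loads them) is via keras.datasets:

# Assumption: data_all / labels_all are the 50,000 CIFAR-10 training
# images and labels; keras.datasets is one illustrative way to load them.
from keras.datasets import cifar10

(data_all, labels_all), (_, _) = cifar10.load_data()
labels_all = labels_all.ravel() # flatten the (50000, 1) labels to (50000,)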
# Splitting the whole training set in the ratio 92:8
seed = 7

from sklearn.cross_validation import StratifiedShuffleSplit

# creating a data_split object with 8% test size
data_split = StratifiedShuffleSplit(labels_all, 1, test_size=0.08, random_state=seed)

for train_index, test_index in data_split:
    split_data_92, split_data_8 = data_all[train_index], data_all[test_index]
    split_label_92, split_label_8 = labels_all[train_index], labels_all[test_index]

The 4,000 samples are then split in the ratio 7:3 (i.e., 2,800 for training and 1,200 for testing) using StratifiedShuffleSplit.

# Splitting the training set into 70% and 30%
# test_size=0.3 denotes that 30% of the dataset is used for testing.
train_test_split = StratifiedShuffleSplit(split_label_8, 1, test_size=0.3, random_state=seed)

for train_index, test_index in train_test_split:
    train_data_70, test_data_30 = split_data_8[train_index], split_data_8[test_index]
    train_label_70, test_label_30 = split_label_8[train_index], split_label_8[test_index]

train_data = train_data_70 # assigning to variable train_data
train_labels = train_label_70 # assigning to variable train_labels
test_data = test_data_30
test_labels = test_label_30

You can see the size of the above variables using:

print 'train_data : ', train_data.shape

print 'train_labels : ', train_labels.shape

print 'test_data : ', test_data.shape

print 'test_labels : ', test_labels.shape

Need for Preprocessing


 In the data preprocessing step, the raw data is converted into a form suitable for subsequent analysis. All the steps before model training (model creation) can be considered preprocessing steps.

 The quality of an image is greatly influenced by its clarity and the device used to
capture it.

 The captured image may contain noise and irregularities, which can be removed
via preprocessing steps.

Need for Preprocessing


Some of the common preprocessing techniques include:

 Normalization

 Dimensionality reduction (e.g., PCA, SVD)

 Feature Extraction (e.g. SIFT, HOG)

 Whitening

 Denoising

 Contrast Stretching

 Background subtraction

 Image Enhancement

 Smoothing

In the following cards, we will describe some of the preprocessing techniques that can
be applied to images.

Normalization
Normalization is the process of standardizing the pixel intensity values of an image.

 It rescales the intensities to a common scale across images.

 A normalized image has mean = 0 and variance = 1.

import numpy as np

# definition of the normalization function
def normalize(data, eps=1e-8):
    # subtract the per-image mean
    data -= data.mean(axis=(1, 2, 3), keepdims=True)
    # calculate the per-image standard deviation
    std = np.sqrt(data.var(axis=(1, 2, 3), ddof=1, keepdims=True))
    # avoid division by (near) zero
    std[std < eps] = 1.
    data /= std
    return data

# calling the function

train_data = normalize(train_data)

test_data = normalize(test_data)

# prints the shape of train data and test data

print 'train_data: ', train_data.shape

print 'test_data: ', test_data.shape

ZCA Whitening
Normalization is followed by a ZCA whitening process.

The main aim of whitening is to reduce data redundancy, which means the features are
less correlated and have the same variance.

ZCA stands for zero-phase component analysis. ZCA-whitened images resemble the original images.
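
The whitening code is not listed on this card. As a rough illustration only, a minimal ZCA whitening sketch on flattened image data could look like the following (the function name zca_whiten and the epsilon value are illustrative, not from the course):

import numpy as np

def zca_whiten(data, eps=1e-5):
    # flatten each image into a row vector
    flat = data.reshape(data.shape[0], -1)
    # center each feature
    flat = flat - flat.mean(axis=0)
    # covariance matrix of the features
    cov = np.cov(flat, rowvar=False)
    # eigendecomposition of the (symmetric) covariance matrix via SVD
    U, S, _ = np.linalg.svd(cov)
    # ZCA whitening matrix: U * diag(1 / sqrt(S + eps)) * U^T
    W = U.dot(np.diag(1.0 / np.sqrt(S + eps))).dot(U.T)
    return flat.dot(W)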

Principal Component Analysis (PCA)


 The major function of PCA is to decompose a multivariate dataset into a set of
successive orthogonal components. These orthogonal components explain a
maximum amount of the variance.

 PCA is a dimensionality reduction technique.

The whitened data is given as the input to PCA.
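
The code below uses train_data_flat and test_data_flat, which are not defined on this card; the assumption is that they are the flattened (n_samples, 3072) versions of the image arrays, for example:

# Assumption: flatten the (n, 32, 32, 3) image arrays to (n, 3072)
train_data_flat = train_data.reshape(train_data.shape[0], -1)
test_data_flat = test_data.reshape(test_data.shape[0], -1)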

from sklearn.decomposition import PCA

# n_components specifies the number of components to keep
train_data_pca = PCA(n_components=train_data_flat.shape[1]).fit_transform(train_data_flat)
test_data_pca = PCA(n_components=test_data_flat.shape[1]).fit_transform(test_data_flat)

train_data_pca = train_data_pca.T
test_data_pca = test_data_pca.T

To explore more on PCA, refer to this link.

Singular Value Decomposition (SVD)


 SVD is a dimensionality reduction technique that has been used in several fields such as image compression, face recognition, and noise filtering.

 In this method, a digital image (generally treated as a matrix) is decomposed into three other matrices.

 The relatively small number of singular values obtained from this refactoring process can preserve the useful features of the original image without requiring much storage space.

Singular Value Decomposition (SVD)


The code below for SVD may not work in the available online cloud playground due to package issues, so it is better to try it out in a local Python environment.

import numpy as np
from skimage import color

# definition of the SVD feature-extraction function
def svdFeatures(input_data):
    svdArray_input_data = []
    size = input_data.shape[0]
    for i in range(0, size):
        # convert the color image to grayscale
        img = color.rgb2gray(input_data[i])
        U, s, V = np.linalg.svd(img, full_matrices=False)
        # keep only the 30 largest singular values as features
        S = s[:30]
        svdArray_input_data.append(S)
    svdMatrix_input_data = np.matrix(svdArray_input_data)
    return svdMatrix_input_data

# apply SVD to train and test data
train_data_svd = svdFeatures(train_data)
test_data_svd = svdFeatures(test_data)

Scale-Invariant Feature Transform (SIFT) for Feature Generation
SIFT is mainly used for images that are more complex and less organized.

Even photographs of the same object undergo scale changes depending on the distance from the object, the focal length, etc. This is one of the reasons why raw pixel values are not considered useful features for images.

The main aim of using SIFT for feature extraction is to obtain features that are not
sensitive to changes in scale, rotation, image resolution, illumination, etc.

The major steps involved in the SIFT algorithm are:

 Scale-space Extrema Detection

 Keypoint Localization

 Orientation Assignment

 Keypoint Descriptor
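
As a rough illustration, keypoints and descriptors can be extracted with OpenCV, assuming a build in which SIFT is available (cv2.SIFT_create in OpenCV 4.4+; older contrib builds expose it as cv2.xfeatures2d.SIFT_create). The input file name here is illustrative:

import cv2

image = cv2.imread('sample.jpg') # illustrative input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # SIFT works on grayscale images

sift = cv2.SIFT_create()
# keypoints hold locations, scales, and orientations;
# descriptors are 128-dimensional feature vectors
keypoints, descriptors = sift.detectAndCompute(gray, None)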

How does a Classifier Work?


The following are the steps involved in building a classification model:

1. Initialize the classifier to be used.

2. Train the classifier - All classifiers in scikit-learn provide a fit(X, y) method to fit (train) the model on the given train data X and train labels y.

3. Predict the target - Given an unlabeled observation X, predict(X) returns the predicted label y.

4. Evaluate the classifier model - The score(X, y) method returns the score for the given test data X and test labels y.
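
A minimal sketch of these four steps with an illustrative scikit-learn classifier (the Gaussian Naive Bayes model and the X_train, y_train, X_test, y_test variables are placeholders, not from the course):

from sklearn.naive_bayes import GaussianNB

clf = GaussianNB() # 1. initialize the classifier
clf.fit(X_train, y_train) # 2. train on data X and labels y
y_pred = clf.predict(X_test) # 3. predict labels for unseen data
acc = clf.score(X_test, y_test) # 4. evaluate on test data and labels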
Classification Algorithms
There are various algorithms for solving classification problems.

A few of them are as follows:

 Support Vector Machine Classifier (SVM)

 Naive Bayes Classifier

 Stochastic Gradient Descent Classifier

Note: Explanations of these algorithms are given in the Machine Learning Axioms course. Refer to it for further details.

In this course, let's see SVM in detail.

Support Vector Machine (SVM)

Support Vector Machine (SVM) is effective in:

 High-dimensional spaces.

 Cases where the number of dimensions is greater than the number of samples.

 Cases with a clear margin of separation.

Given below is the code snippet for training in SVM:

from sklearn import svm

# Creating an SVM classifier model
clf = svm.SVC(gamma=.001, probability=True)

# Model training. After being fitted, the model can be used to predict the output.
clf.fit(train_data_flat_t, train_labels)

Here, train_data_flat_t can be replaced with train_data_pca or train_data_svd for PCA and SVD respectively.
Support Vector Machine (SVM) (Contd..)
For prediction:

predicted = clf.predict(test_data_flat_t)

score = clf.score(test_data_flat_t, test_labels) # classification score
print("score", score)

Similarly, test_data_flat_t can be replaced with test_data_pca or test_data_svd.

The conventional classification algorithms mentioned above could not give significant accuracy here. However, better performance can be achieved by using deep learning techniques such as Convolutional Neural Networks (CNNs).

Convolutional Neural Networks (CNN)


Deep learning has become increasingly important for learning complex tasks. It is a more refined form of machine learning, based on neural networks that emulate the brain.

A neural network consists of:

 input layer

 hidden layers

 output layer

Each layer is composed of nodes, where the computation happens.

A Neural Network consists of interconnected neurons that pass messages to each other.

 CNN is a special case of neural networks that consists of multiple convolutional layers, pooling layers and, finally, fully connected layers.

 The improved network structure helps in saving memory and reducing computational complexity. CNNs are mainly used in pattern and image recognition problems.
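
As a rough illustration only (the layer sizes and the Keras API are assumptions, not part of this course's code), a minimal CNN for 32x32x3 inputs and 10 classes might look like:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)), # pooling layer halves the spatial size
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(), # flatten feature maps for the fully connected layers
    Dense(64, activation='relu'),
    Dense(10, activation='softmax') # one output per CIFAR-10 class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])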

Cross Validation
 Cross validation is a model validation technique used to evaluate the performance of a model on unseen data.

 It gives a better estimate of performance on unseen data than training accuracy does.

Points to remember:

 Cross validation gives high variance if the testing set and training set are not
drawn from the same population.

 Allowing training data to be included in testing data will not give actual
performance results.

In cross validation, the number of samples used for training the model is reduced, and
the results depend upon the choice of the pair of training and testing sets.
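
A minimal sketch of k-fold cross validation with scikit-learn, reusing clf and the training data from the SVM snippets above (the choice of cv=5 folds is illustrative):

from sklearn.cross_validation import cross_val_score

# evaluate clf on 5 different train/test folds of the training data
scores = cross_val_score(clf, train_data_flat_t, train_labels, cv=5)
print 'cross validation scores: ', scores
print 'mean accuracy: ', scores.mean()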

You can refer to the various cross validation approaches from here.

Partitioning the Data


It is a methodological mistake to test and train on the same dataset because the
classifier would fail to predict correctly for any unseen data. This could result
in overfitting.

To avoid this problem,

 The data is split into a train set, validation set, and test set (a split sketch follows this list).

o Training Set: The data used to train the classifier.

o Validation Set: The data used to tune the classifier model parameters i.e.,
to understand how well the model has been trained (as part of training
data).

o Testing Set: The data used to evaluate the performance of the classifier
(unseen data by the classifier).
 This will help us to know the efficiency of our model.

 Since the online platform used in this course doesn't support huge datasets, only a few samples are taken for training and testing.
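
A minimal sketch of such a three-way split, assuming the same sklearn.cross_validation module used earlier (the ratios are illustrative):

from sklearn.cross_validation import train_test_split

# first hold out 20% of the data as the test set
X_rest, X_test, y_rest, y_test = train_test_split(
    data_all, labels_all, test_size=0.2, random_state=seed)

# then hold out 25% of the remainder (20% overall) as the validation set
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=seed)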

Confusion Matrix

Consider a confusion matrix for a two-class classifier.

In the table,

 TP (True Positive) - the number of correct predictions that the occurrence is positive

 FP (False Positive) - the number of incorrect predictions that the occurrence is positive

 FN (False Negative) - the number of incorrect predictions that the occurrence is negative

 TN (True Negative) - the number of correct predictions that the occurrence is negative

 TOTAL - the total number of occurrences

Confusion Matrix is a technique used to evaluate the performance of a classifier.

 It visually depicts the performance in a tabular form that has two dimensions
namely, actual and predicted sets of data.

 The rows and columns of the table show the count of false positives, false
negatives, true positives, and true negatives.

In metrics.confusion_matrix, the first parameter takes the true labels and the second parameter takes the predicted labels.

from sklearn import metrics

conf_matrix = metrics.confusion_matrix(test_labels, predicted)
print("Confusion matrix:", conf_matrix)

In the above code, test_labels are the actual labels and predicted are the predicted
labels.

 Here, the diagonal elements of the confusion matrix show the number of correctly classified labels.

Classification Accuracy
Classification accuracy is defined as the percentage of correct predictions.

 To calculate class-wise accuracy:

CA = (correctly predicted images of a class / total images of the class) * 100

Class-wise accuracy is given by:

# To see the accuracy of each class
accuracy = []
leng = len(conf_matrix) # number of classes (rows of the confusion matrix)

for i in range(leng):
    # each diagonal element (conf_matrix[i, i]) is divided by the sum of the
    # elements of that particular row (conf_matrix[i].sum()); the small
    # constant avoids division by zero
    ac = (conf_matrix[i, i] / (conf_matrix[i].sum() + .0000001)) * 100
    accuracy.append(ac)

print accuracy

Overall accuracy is given by:

OA = (sum of class-wise accuracies) / (number of classes)

The code is as follows:

summation = 0
no_of_classes = 10

for i in range(0, len(accuracy)):
    summation += accuracy[i]

overall_accuracy = summation / no_of_classes
print overall_accuracy

High classification accuracy always indicates a good classifier.
False

TF-IDF is a common methodology used in pre-processing of images.
False

The improvement of the image data that suppresses distortions or enhances image features is called ____________.
Image preprocessing

Classification where each data item is mapped to more than one class is called ____________.
Multi-label

In supervised learning, class labels of the training samples are
Known

Choose the correct sequence for classifier building from the following:
Initialize - train - predict - evaluate
Select the correct option that directly achieves multi-class classification (without the support of binary classifiers).
K-nearest neighbors

Which algorithm can be used for matching local regions in two images?
SIFT

Pruning is a technique associated with ______________.
Decision tree

A higher value of which of the following hyperparameters is better for the decision tree algorithm?
Can't say

Which one of the following is not a classification technique?
StratifiedShuffleSplit

The first layer in a CNN is never a Convolutional Layer.
False
