
Neural Networks for Image Classification
Ben Quachtran and Daniel Lau
Overview
● Inspired by biological nervous system

● Consist of neurons that fire electrical signals

http://physicsworld.com/cws/article/news/2011/sep/13/physicists-in-tune-with-neurons

https://sites.google.com/site/secretsofthebrainsz/neurons
Overview

● Neurons can be used as a powerful computational model

● Flexible system for machine learning

http://singularityhub.com/wp-content/uploads/2015/08/autistic-neural-network-3.jpg
History
● Warren McCulloch and Walter Pitts (1943)
● Paul Werbos (1975)
● Best MNIST Performance (2012)
● SwiftKey (2015)
● AlphaGo (2016)

http://goo.gl/7iPTK0

https://blog.swiftkey.com/neural-networks-a-meaningful-leap-for-mobile-typing/
https://goo.gl/djpFVt
Motivation
● Neural networks are an emerging area of machine learning

● Combines biological and computational fields

● Wide range of implementation complexities

http://www.frontiersin.org/files/Articles/7025/fninf-04-00112-r1/image_m/fninf-04-000112-g001.jpg
Approach
● Learn image features using adaptable filters

● Sigmoidal Activation Function

● Feed-Forward Back-Propagation

https://upload.wikimedia.org/wikipedia/commons/thumb/6/60/ArtificialNeuronModel_english.png/600px-ArtificialNeuronModel_english.png
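
A minimal sketch of the artificial neuron model referenced above: a weighted sum of inputs plus a bias, passed through the sigmoidal activation. The weights and input values below are illustrative, not taken from the original implementation.

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    # Single artificial neuron: weighted sum of inputs plus bias,
    # passed through the sigmoidal activation function
    return sigmoid(np.dot(w, x) + b)

# Illustrative values only
x = np.array([0.2, 0.7, 0.1])   # input values (e.g. pixel intensities)
w = np.array([0.5, -0.3, 0.8])  # adaptable weights learned during training
b = 0.1                         # bias term
print(neuron_output(x, w, b))
```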
Layer Architecture
● Each pixel value is an input neuron
● Input-to-Hidden and Hidden-to-Output Weights
● Each output neuron corresponds to a categorical value

http://engineering.flipboard.com/assets/convnets/Convolution_schematic.gif

http://www.extremetech.com/wp-content/uploads/2015/07/NeuralNetwork.png
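
A rough sketch of the layer architecture above, assuming 28x28 = 784 pixel inputs and 14 output categories (10 digits + 4 symbols, per the Procedure slide); the hidden-layer size of 100 is a placeholder, since that value was a tuned parameter in the project.

```python
import numpy as np

n_input, n_hidden, n_output = 784, 100, 14   # hidden size is a placeholder

rng = np.random.default_rng(0)
W_ih = rng.normal(0.0, 0.1, (n_hidden, n_input))   # input-to-hidden weights
W_ho = rng.normal(0.0, 0.1, (n_output, n_hidden))  # hidden-to-output weights
b_h = np.zeros(n_hidden)
b_o = np.zeros(n_output)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(pixels):
    # pixels: flattened 784-vector of intensities in [0, 1],
    # one input neuron per pixel
    hidden = sigmoid(W_ih @ pixels + b_h)
    output = sigmoid(W_ho @ hidden + b_o)
    return output                           # one activation per category

image = rng.random(n_input)                 # stand-in for a real image
scores = forward(image)
print("predicted category:", int(np.argmax(scores)))
```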
Sigmoidal Activation

http://i.stack.imgur.com/iIcbq.gif
http://www.saedsayad.com/images/ANN_Sigmoid.png
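
The activation pictured above is the logistic sigmoid, sigma(x) = 1 / (1 + e^-x). Its derivative can be written in terms of its own output, sigma(x) * (1 - sigma(x)), which is what back-propagation reuses. A small illustrative snippet:

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + exp(-x)); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(y):
    # For y = sigmoid(x), the derivative is y * (1 - y),
    # so forward-pass activations can be reused during back-propagation
    return y * (1.0 - y)

xs = np.linspace(-6, 6, 5)
ys = sigmoid(xs)
print(ys)                 # saturates toward 0 and 1 at the extremes
print(sigmoid_deriv(ys))  # gradient is largest near x = 0
```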
Feed-Forward Back-Propagation

https://idea2bank.files.wordpress.com/2011/04/bpp.jpg
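
A minimal sketch of one feed-forward/back-propagation training step for a two-layer sigmoid network, assuming a squared-error loss and plain gradient descent; biases are omitted, and the layer sizes, learning rate, and data are illustrative rather than taken from the project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, W_ih, W_ho, lr=0.1):
    # ---- feed-forward pass ----
    hidden = sigmoid(W_ih @ x)
    output = sigmoid(W_ho @ hidden)

    # ---- back-propagation (squared-error loss assumed, biases omitted) ----
    # output-layer delta: error scaled by the sigmoid derivative
    delta_o = (output - target) * output * (1.0 - output)
    # hidden-layer delta: output deltas pushed back through W_ho
    delta_h = (W_ho.T @ delta_o) * hidden * (1.0 - hidden)

    # gradient-descent weight updates (in place)
    W_ho -= lr * np.outer(delta_o, hidden)
    W_ih -= lr * np.outer(delta_h, x)
    return output

# Toy example: 4 inputs, 3 hidden nodes, 2 outputs
rng = np.random.default_rng(1)
W_ih = rng.normal(0.0, 0.5, (3, 4))
W_ho = rng.normal(0.0, 0.5, (2, 3))
x = np.array([0.1, 0.9, 0.3, 0.5])
t = np.array([1.0, 0.0])             # one-hot target
for _ in range(1000):
    out = train_step(x, t, W_ih, W_ho)
print(out)   # moves toward the target over repeated updates
```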
Software
● Modularity

● User Interface

● MATLAB for generating training data

https://docs.toradex.com/102686-modularity-icon.jpg?v=1
Procedure
● Trained on set of 28,000 images
○ 10 digits and 4 symbols
■ Generated symbol images

● Parameter Optimization
○ Number of Hidden Nodes
○ Momentum
○ Learning Rate Annealing

https://www2.warwick.ac.uk/fac/cross_fac/complexity/study/msc_and_phd/co902/2013_2014/resources/mnisttrain.png
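
A sketch of the two training heuristics listed under parameter optimization: momentum, which blends each new gradient step with the previous update, and learning-rate annealing, which shrinks the step size over time. The 1/(1 + decay * epoch) schedule and all constants are illustrative assumptions, not the values used in the project.

```python
import numpy as np

def sgd_momentum_annealed(grad_fn, w, epochs=200,
                          lr0=0.5, decay=0.01, momentum=0.9):
    # lr0, decay, and momentum are illustrative values only
    velocity = np.zeros_like(w)
    for epoch in range(epochs):
        # Learning-rate annealing: shrink the step size as training proceeds
        lr = lr0 / (1.0 + decay * epoch)
        grad = grad_fn(w)
        # Momentum: blend the new gradient with the previous update direction
        velocity = momentum * velocity - lr * grad
        w = w + velocity
    return w

# Toy usage: minimize (w - 3)^2 to show the update rule in isolation
grad_fn = lambda w: 2.0 * (w - 3.0)
print(sgd_momentum_annealed(grad_fn, np.array([0.0])))  # approaches 3.0
```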
Results
● 95.43% accuracy on MNIST test database

Digit   Marginal Accuracy
0       98%
1       99%
2       92%
3       91%
4       93%
5       90%
6       96%
7       93%
8       91%
9       90%

Classifier                      Accuracy
Linear                          88.0%
K-Nearest-Neighbors             95.0%
2-layer NN, 1000 hidden units   95.5%
40 PCA + quadratic classifier   96.7%
3-layer NN, 500+300 HU          98.47%

http://yann.lecun.com/exdb/mnist/
Results
● Tested network on our own handwritten digits and symbols

● Inconsistent accuracy across digits


Challenges
● Implementing the Back-Propagation algorithm

● Proper parameter selection


○ Achieve >90% accuracy
○ Efficient training

● Sufficient training data

● LCDK USB speeds

http://www.ebuyer.com/blog/wp-content/uploads/2015/07/neural-map.jpg
Discussion
● Network performed better on the MNIST test set than on our own handwritten samples

● Slow USB transfer speeds were not resolved

● Add more hidden layers

● Testing design flexibility

http://www.theneweconomy.com/wp-content/uploads/2015/10/neural-networks.jpg
Conclusion
● Neural networks do work!

● Number of layers determines modeling complexity

● Networks have a small parameter solution space

http://www.learningmachines101.com/wp-content/uploads/2015/09/LM101-035image.jpg
