DEEP LEARNING
By Gargee Sanyal
CONTENTS
I. Introduction
II. History
III. Principle
IV. Technology
V. Working
VI. Formulations
VII. Advantages
VIII. Challenges
IX. Real Time Applications
X. Future Scope
XI. Conclusion
XII. References
INTRODUCTION
What is Deep Learning?
Deep learning is a branch of machine learning that uses multi-layered artificial neural networks to learn representations of data automatically.
HISTORY
1958: Frank Rosenblatt creates the perceptron, an algorithm for
pattern recognition.
PRINCIPLE
Deep learning is based on the concept of artificial neural
networks, or computational systems that mimic the way the
human brain functions.
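The brain analogy can be made concrete with a single artificial neuron: a weighted sum of inputs passed through a non-linear activation. A minimal sketch (all numbers are illustrative, not from any trained model):

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs passed
    through a non-linear activation (here a sigmoid)."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values only.
x = np.array([0.5, -1.0, 2.0])   # inputs ("dendrites")
w = np.array([0.4, 0.3, -0.2])   # connection weights ("synapses")
b = 0.1                          # bias
print(neuron(x, w, b))           # a value between 0 and 1
```

A deep network is nothing more than many such units stacked in layers, with each layer's outputs feeding the next.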
TECHNOLOGY
Deep learning is a fast-growing field, and new architectures and
variants appear every few weeks. We'll discuss the three major ones:
2. Recurrent Neural Network (RNN)
RNNs are called recurrent because they perform the same task
for every element of a sequence, with the output depending on
the previous computations. In other words, RNNs have a
“memory” that captures information about what has been
calculated so far.
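The "same task for every element" idea can be sketched as one recurrent step applied in a loop. The weights and inputs below are random placeholders, not a trained model:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state ("memory") mixes
    the current input with the previous hidden state, using the
    SAME weights at every time step."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 3)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden weights
b_h = np.zeros(4)

h = np.zeros(4)                        # initial empty "memory"
sequence = [rng.normal(size=3) for _ in range(5)]
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # same task per element
print(h.shape)
```

Because `h` is carried forward, each output depends on everything computed so far, which is exactly the "memory" described above.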
3. Long Short-Term Memory (LSTM)
LSTMs can learn "very deep learning" tasks that require memories
of events that happened thousands or even millions of discrete time
steps earlier. LSTMs work even when there are long delays, and they
can handle signals that mix low- and high-frequency components.
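A minimal sketch of one LSTM step, assuming the standard forget/input/output gating; the weights are random placeholders, and the point is the additive cell-state update that lets information survive long delays:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W maps [h_prev; x_t] to four stacked gate
    pre-activations; the cell state c carries long-range memory."""
    n = h_prev.size
    z = W @ np.concatenate([h_prev, x_t]) + b
    f = sigmoid(z[0:n])        # forget gate: what to keep from c_prev
    i = sigmoid(z[n:2*n])      # input gate: what new info to write
    o = sigmoid(z[2*n:3*n])    # output gate: what to expose as h
    g = np.tanh(z[3*n:4*n])    # candidate cell update
    c = f * c_prev + i * g     # additive update -> long-range memory
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
n, d = 4, 3                    # hidden and input sizes (illustrative)
W = rng.normal(size=(4 * n, n + d)) * 0.1
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x_t in [rng.normal(size=d) for _ in range(6)]:
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape, c.shape)
```

When the forget gate stays near 1, `c` is passed along almost unchanged, which is how an LSTM bridges thousands of time steps.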
WORKING
Consider the following handwritten sequence:
The idea of a neural network is to develop a system that can
learn from large numbers of training examples such as these.
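The learning-from-examples idea can be sketched with the simplest possible learner: one logistic neuron fit by gradient descent on a toy two-class dataset (a stand-in for real handwriting data, which needs a deep network and far more examples):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training examples: two well-separated 2-D classes.
X = np.vstack([rng.normal(-2, 1, size=(50, 2)),   # class 0
               rng.normal(+2, 1, size=(50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # current predictions
    grad_w = X.T @ (p - y) / len(y)      # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= lr * grad_w                     # adjust weights from errors
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The system is never told the rule; it infers one from examples, which is the same principle a deep network applies to handwritten digits.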
FORMULATIONS
The basis of deep learning is classification, which can be extended
to detection, ranking, regression, and other tasks.
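How one set of class scores supports both classification and ranking can be sketched with a softmax over illustrative network outputs:

```python
import numpy as np

def softmax(scores):
    """Turn raw class scores into probabilities."""
    e = np.exp(scores - scores.max())   # subtract max for stability
    return e / e.sum()

scores = np.array([2.0, 0.5, 1.0])      # illustrative network outputs
probs = softmax(scores)

print(np.argmax(probs))                  # classification: top class
print(np.argsort(probs)[::-1])           # ranking: classes by score
```

The same probabilities can also feed detection (is the top score above a threshold?) or be replaced by a linear output for regression.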
ADVANTAGES
1. Automatic feature extraction: no need to hand-engineer features
2. Better optimization
3. Better architectures
CHALLENGES
1. Requires very large datasets
REAL TIME APPLICATIONS
1. Automatic Colorization of Black and White Images
2. Automatic Text Generation
FUTURE SCOPE
1. Deep learning will speed the search for extraterrestrial life.
2. For astronauts, the next steps on the journey to space will be virtual.
CONCLUSION
Deep learning is still a maturing field: although large deep neural
networks achieve the best results on speech recognition, visual
object recognition, and several language-related tasks, its low
maturity warrants extensive future research. Nevertheless, the
future possibilities of deep learning are vast, ranging from
driverless cars to robots exploring the universe, limited only by
how creative the upcoming architectures are.
REFERENCES
[1] Bengio, Y. (2009). Learning Deep Architectures for AI.
Foundations and Trends in Machine Learning, 2(1), 1–127.
[2] Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast
learning algorithm for deep belief nets.
[3] Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A.,
& Bengio, Y. (2013). Maxout Networks.
[4] Agostinelli, F., Hoffman, M., Sadowski, P., & Baldi, P. (2015).
Learning activation functions to improve deep neural networks.
[5] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I.,
& Salakhutdinov, R. R. (2012). Improving neural networks by
preventing co-adaptation of feature detectors.