Raul Rojas (29 June 2013). Neural Networks: A Systematic Introduction. Springer Science &
Business Media. pp. 3–. ISBN 978-3-642-61068-4.

Kim, Jaebok & Englebienne, Gwenn & Truong, Khiet & Evers, Vanessa. (2017). Deep Temporal
Models using Identity Skip-Connections for Speech Emotion Recognition.
10.1145/3123266.3123353.

T. Wang, J. Huan and B. Li, "Data Dropout: Optimizing Training Data for Convolutional Neural
Networks," 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI),
Volos, 2018, pp. 39-46.
doi: 10.1109/ICTAI.2018.00017

Rumelhart, D. E., Hinton, G. E., Williams, R. J., “Learning internal representations by error
propagation,” Parallel Distributed Processing: Explorations in the Microstructure of Cognition
(1986).

Schmidhuber, Juergen. (2014). Deep Learning in Neural Networks: An Overview. Neural
Networks. 61. 10.1016/j.neunet.2014.09.003.

“Comparative Study of Back Propagation Learning Algorithms for Neural Networks,” Srinagar,
J&K, India, 2013. ISSN: 2277-128X.

Hearst, Marti & Dumais, S. T. & Osman, E. & Platt, John & Scholkopf, B. (1998). Support vector
machines. IEEE Intelligent Systems and their Applications. 13. 18–28. 10.1109/5254.708428.

Tian, Yingjie & Shi, Yong & Liu, Xiaohui. (2012). Recent advances on support vector machines
research. Technological and Economic Development of Economy. 18.
10.3846/20294913.2012.661205.

Rodan, Ali & Faris, Hossam & Al-sakran, Jamal & Al-Kadi, Omar. (2014). A Support Vector
Machine Approach for Churn Prediction in Telecom Industry. International Journal on Information.
17.

Rosasco, L.; De Vito, E. D.; Caponnetto, A.; Piana, M.; Verri, A. (2004). "Are Loss Functions All
the Same?" (PDF). Neural Computation. 16 (5): 1063–1076. CiteSeerX 10.1.1.109.6786.
doi:10.1162/089976604773135104. PMID 15070510.

Surakhi, Ola & Salameh, Walid A. (2014). Enhancing the Performance of the BackPropagation for
Deep Neural Network. International Journal of Computers & Technology. 13. 5274–5285.
10.24297/ijct.v13i12.5279.

Thakkar, Vignesh & Tewary, Suman & Chakraborty, Chandan. (2018). Batch Normalization in
Convolutional Neural Networks — A comparative study with CIFAR-10 data. 1-5.
10.1109/EAIT.2018.8470438.

Roberts, Eric. “Neural Networks History: The 1940’s to the 1970’s.” Stanford University,
Department of Computer Science. Accessed 09 October 2019.
https://cs.stanford.edu/people/eroberts/courses/soco/projects/neuralnetworks/History/history1.html

Tan, Hong & Lim, King Hann. (2019). Vanishing Gradient Mitigation with Deep Learning Neural
Network Optimization. 1-4. 10.1109/ICSCC.2019.8843652.

Roberts, Eric. “Neural Networks History: The 1980’s to the Present.” Stanford University,
Department of Computer Science. Accessed 09 July 2018.
https://cs.stanford.edu/people/eroberts/courses/soco/projects/neuralnetworks/History/history2.html

Hochreiter, Sepp, Schmidhuber, Jürgen. “Long Short-Term Memory.” Institute of
Bioinformatics, Johannes Kepler University. Neural Computation, volume 9, pp. 1735–1780.
Accessed 10 July 2018. http://www.bioinf.jku.at/publications/older/2604.pdf

“A Collection of Neural Network Use Cases and Applications.” Pressive. 06 Aug 2017. Accessed
09 July 2018. http://analyticscosm.com/a-collection-of-neural-network-use-cases-and-applications/

“CS231n Convolutional Neural Networks for Visual Recognition.” Stanford University. Accessed 09
July 2018. https://cs231n.github.io/convolutional-networks/#conv

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In Advances in neural information processing systems, pages 1097–
1105, 2012

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition,
pages 770–778, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by
reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines.
In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–
814, 2010.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network
learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing
neural networks. arXiv preprint arXiv:1706.02515, 2017.

P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, “Accurate, Large Minibatch SGD: Training
ImageNet in 1 Hour,” Facebook AI Research (FAIR), In CVPR, 2017.

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift,” In Proceedings of The 32nd International Conference on Machine
Learning, pp. 448–456, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural
information processing systems, pages 2672–2680, 2014.

Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions.
arXiv preprint arXiv:1703.04730, 2017.

R. J. Schalkoff, Artificial Neural Networks, McGraw-Hill, 1997

N. K. Bose, and P. Liang, Neural Network Fundamentals with Graph Algorithms and Applications,
McGraw-Hill, 1996.

S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Englewood Cliffs, NJ:
Prentice-Hall, 1999

Zweiri, Y. H., Seneviratne, L. D., Althoefer, K., “Stability analysis of three-term back propagation
algorithm,” Neural Networks 18, 1341–1347 (2005).

R. A. Jacobs, “Increased rates of convergence through learning rate adaptation,” Neural Networks,
Vol. 1, pp. 295–307, 1988.

Shao, H., and Zheng, H., “A new BP algorithm with adaptive momentum for FNNs training,” in:
GCIS 2009, Xiamen, China, pp. 16–20 (2009).

Rehman, M. Z., Nawi, N. M., Ghazali, M. I., “Noise-induced hearing loss (NIHL) prediction in
humans using a modified back propagation neural network,” in: 2nd International Conference on
Science Engineering and Technology, pp. 185–189 (2011).

Chien-Cheng Yu and Bin-Da Liu, “A backpropagation algorithm with adaptive learning rate and
momentum coefficient,” IEEE, 2002.

Swanston, D. J., Bishop, J. M., and Mitchell, R. J., “Simple adaptive momentum: new algorithm for
training multilayer perceptrons,” Electronics Letters 30, 1498–1500 (1994).

Mitchell, R. J., “On simple adaptive momentum,” in: CIS 2008, London, United Kingdom, pp. 01–06
(2008).

Nawi, N. M., Ransing, M. R. and Ransing, R. S., “An improved conjugate gradient based learning
algorithm for back propagation neural networks,” Computational Intelligence 4, 46–55 (2007).

M. Z. Rehman, N. M. Nawi, “Improving the Accuracy of Gradient Descent Back Propagation
Algorithm (GDAM) on Classification Problems,” IJNCAA 1(4): 838–847 (2011). ISSN: 2220-9085.

Maier, H. R. and Dandy, G. C., “The effect of internal parameters and geometry on the performance
of back propagation neural networks: an empirical study,” Environmental Modelling and Software
13(2): 193–209 (1998).

Chandra, P., Singh, Y., “An activation function adapting training algorithm for sigmoid feed forward
networks,” Neurocomputing 61: 429–437 (2004).

Pao YH, “Adaptive pattern recognition and neural networks,” 2nd edn. Addison-Wesley, New York,
1989.

Hartman E, Keeler JD, Kowalski, “Layered neural networks with Gaussian hidden units as
universal approximations,” Neural Comput Appl 2(2):210–215, 1990

Hornik K, “Approximation capabilities of multilayer feed forward networks,” Neural Net 4(2): 251–257,
1991.

Leung H, Haykin S, “Rational function neural network,” Neural Comput Appl 5(6): 928–938, 1993.

Giraud B, Lapedes A, Lon C, Lemm J, “Lorentzian neural nets,” Neural Net 8(5): 757–767, 1995.

Skoundrianos EN, Tzafestas SG, “Modelling and FDI of dynamic discrete time systems using an
MLP with a new sigmoidal activation function,” J Intell Robotics Syst 41(1): 19–36, 2004.

Ma L, Khorasani K (2005), “Constructive feed forward neural networks using Hermite polynomial
activation functions,” IEEE Trans Neural Net 16(4): 821–833.

Wen C, Ma X, “A max-piecewise-linear neural network for function approximation,”
Neurocomputing 71: 843–852, 2005.

Efe MO, “Novel neuronal activation functions for feed forward neural networks,” Neural Process
Lett 28: 63–79, 2008.

Gomes GSS, Ludermir TB, “Complementary log-log and probit: activation functions implemented
in artificial neural networks,” in: 8th International Conference on Hybrid Intelligent Systems, IEEE
Computer Society, pp 939–942, 2008.

Gwang-Hee, K., Jie-Eon, Y., Sung-Hoon, A., Hun-Hee, C. & Kyung-In, K., “Neural network
model incorporating a genetic algorithm in estimating construction costs,” Building and
Environment, Vol. 39(11), pp. 1333–1340, 2004.

Bazaraa, M. S., Sherali, H. D., & Shetty, C. M., Nonlinear Programming: Theory and Algorithms,
2nd ed., Wiley, India, 2004.

M. Riedmiller, H. Braun, “A direct adaptive method for faster back-propagation learning: the
RPROP algorithm,” in: Proc. of the International Conference on Neural Networks, 1993, pp. 586–
591.

Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by
reducing internal covariate shift. CoRR, abs/1502.03167.

Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual Losses Supplementary. arXiv,
pages 1–5.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S.,
Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks

Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised Representation Learning
with Deep Convolutional Generative Adversarial Networks. pages 1–16

Lei Ba, J., Kiros, J. R., and Hinton, G. E. (2016). Layer Normalization. ArXiv e-prints.

Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. (2017).
Improved training of Wasserstein GANs. CoRR, abs/1704.00028.

Arjovsky, M., Chintala, S., and Bottou, L. (2017). Wasserstein GAN.

Cortes, C.; Vapnik, V. 1995. Support-vector networks, Machine Learning 20:
273–297.

Peng, Y.; Kou, G.; Wang, G. X., et al. 2009. Empirical evaluation of classifiers for software risk
management, International Journal of Information Technology and Decision Making 8(4): 749–767.
http://dx.doi.org/10.1142/S0219622009003715

Cristianini, N.; Shawe-Taylor, J. 2000. An Introduction to Support Vector Machines and Other
Kernel-based Learning Methods. Cambridge University Press.

X. Liu, Z. Deng, and Y. Yang, “Recent progress in semantic image segmentation,” Artif.
Intell. Rev., vol. 52, no. 2, pp. 1089–1106, 2019

K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the best multi-stage
architecture for object recognition?,” in: 2009 IEEE 12th International Conference on Computer
Vision, pp. 2146–2153, 2009.
