
ARTIFICIAL NEURAL NETWORK AND ITS APPLICATION IN CHEMICAL ENGINEERING
PRESENTED BY: ATANU KUMAR PAUL 11/CHE/411
GUIDED BY: PROFESSOR K. C. GHANTA
What is ANN?
An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model inspired by the structure and functional aspects of biological neural networks [1].

A neural network consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation.
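As a purely illustrative sketch (not from the slides), a single artificial neuron can be written as a weighted sum of its inputs plus a bias, passed through a non-linear activation; the input values, weights and bias below are arbitrary.

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a logistic ('logsig') activation."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Arbitrary example: a neuron with two inputs
print(neuron(np.array([0.5, 0.8]), np.array([0.4, -0.7]), 0.1))
```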

In most cases an ANN is an adaptive system
that changes its structure based on external
or internal information that flows through
the network during the learning phase.

Modern neural networks are non-linear
statistical data modeling tools. They are
usually used to model complex
relationships between inputs and outputs or
to find patterns in data.

Background of ANN
The original inspiration for the term Artificial Neural Network came from examination of central nervous systems and their neurons, axons, dendrites, and synapses, which constitute the processing elements of the biological neural networks investigated by neuroscience.

In an artificial neural network, simple artificial nodes, variously called "neurons", "neurodes", "processing elements" (PEs) or "units", are connected together to form a network of nodes similar to biological neural networks, hence the term "artificial neural network".


Biological Neurons
The cell body includes a nucleus at its centre.
The dendrites provide input signals to the cell.
The axon sends output signals to neighbouring cells via the axon terminals.
These axon terminals merge with the dendrites of the receiving cells.

A GENERAL ANN Architecture
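The architecture diagram on this slide did not survive extraction. As an illustrative stand-in (my own sketch, not the original figure), a general feed-forward network can be expressed as layers of weight matrices with a non-linear hidden transfer function and a linear output transfer function, echoing the tansig/purelin terminology used later in these slides; all sizes and values are arbitrary.

```python
import numpy as np

# Illustrative forward pass of a 3-4-1 feed-forward network (arbitrary sizes)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden layer -> output layer

def tansig(z):
    """Hyperbolic-tangent sigmoid transfer function ('tansig')."""
    return np.tanh(z)

def forward(x):
    hidden = tansig(W1 @ x + b1)   # non-linear hidden layer
    return W2 @ hidden + b2        # linear ('purelin') output layer

print(forward(np.array([0.2, 0.5, 0.9])))
```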

History of Neural Networks
McCulloch and Pitts (1943): Created a model with two inputs and a single output, and noted that the neuron would not activate if only one of the inputs was active.
Rosenblatt (1958): Developed the perceptron.
Selfridge (1958): Brought the idea of the weight space to the perceptron.
Widrow and Hoff (1960): Developed a mathematical method for adapting the weights.
Werbos (1974): First developed the back-propagation algorithm.
Anderson and Rosenfeld (1987): Showed that the electrochemical process of a neuron works like a voltage-to-frequency translator.
Kaastra and Boyd (1996): Developed a neural network model for forecasting financial and economic time series.

Recent Developments in ANN
Dewolf et al. (2000): Demonstrated the applicability of neural network technology for plant disease forecasting.

Sanzogni et al. (2001): Developed models for predicting milk production from farm inputs using standard feed-forward ANNs.

Gaudart et al. (2004): Compared the performance of an MLP with that of linear regression.

CONTD......
Bhadeshia (2008): Applied neural networks in materials science.

Rivero et al. (2010): Generated and simplified ANNs using genetic programming.

Nikbakht et al. (2011): Modelled a double CSTR using Radial Basis Function (RBF) and Multilayer Perceptron (MLP) neural networks.

Somsong et al. (2011): Applied neural network modelling and optimization to a batch crystallizer producing purified terephthalic acid.
Properties of ANNs
Adaptivity: the connection strengths change as the network learns.

Non-linearity: the non-linear activation functions are essential.

Fault tolerance: if one of the neurons or connections is damaged, the whole network still works quite well.

They might be better alternatives to classical solutions for problems characterised by:

Nonlinearities
High dimensionality
Noisy, complex, imprecise, imperfect and error-prone sensor data
A lack of a clearly stated mathematical solution or algorithm

APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS
Artificial intelligence with neural networks is applied in areas such as:
Intelligent Control
Technical Diagnostics
Intelligent Data Analysis and Signal Processing
Advanced Robotics
Machine Vision
Image & Pattern Recognition
Intelligent Security Systems
Intelligent Medical Devices
Intelligent Expert Systems

DEVELOPMENT OF NEURAL NETWORK
Step 1: Data Collection
Step 2: Training and Testing Data Separation
Step 3: Network Architecture
Step 4: Parameter Tuning and Weight Initialization
Step 5: Data Transformation
Step 6: Training
Step 7: Testing
Step 8: Implementation
(A minimal code sketch of these steps is given below.)
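The following is a minimal sketch of the eight-step workflow (my own illustration, not taken from the slides), assuming scikit-learn is available; the data set here is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Step 1: data collection (here: a synthetic non-linear relationship)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

# Step 2: training and testing data separation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 5: data transformation (scaling the inputs)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Steps 3-4: network architecture, parameter tuning and weight initialization
model = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0)

# Step 6: training
model.fit(X_train_s, y_train)

# Step 7: testing (Step 8 would be deployment of the trained model)
print("Test MSE:", mean_squared_error(y_test, model.predict(X_test_s)))
```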


Different ANN Algorithms published in the literature [3]

Performance of Different ANN Algorithms [3]

Algorithm | Input Layer Transfer Function | Output Layer Transfer Function | No. of Nodes | Average Absolute Relative Error (AARE) | Standard Deviation | R
Levenberg-Marquardt | logsig | purelin | 3 | 0.127 | 0.164 | 0.997
BFGS Algorithm | tansig | purelin | 8 | 0.131 | 0.141 | 0.939
Fletcher-Reeves Update | tansig | purelin | 7 | 0.135 | 0.157 | 0.945
One Step Secant Algorithm | radbas | purelin | 6 | 0.136 | 0.138 | 0.940
Powell-Beale Restarts | tansig | purelin | 3 | 0.140 | 0.164 | 0.943
Polak-Ribière Update | tansig | purelin | 8 | 0.142 | 0.164 | 0.944
Batch Gradient Descent | logsig | purelin | 3 | 0.152 | 0.170 | 0.931
Resilient Back Propagation | tribas | purelin | 6 | 0.152 | 0.189 | 0.924
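For reference, the average absolute relative error reported in the table is commonly computed over the N test points as follows (a standard definition, not quoted from the slides):

```latex
% Average absolute relative error (standard definition, assumed here)
\mathrm{AARE} = \frac{1}{N} \sum_{i=1}^{N}
    \left| \frac{y_i^{\mathrm{pred}} - y_i^{\mathrm{exp}}}{y_i^{\mathrm{exp}}} \right|
```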


Advantages of slurry transport
Pipeline transport has been a progressive
technology for conveying a large quantity of
bulk materials.

The modern way of pipelining prefers the
concentrated slurries since hydraulic
transport of dense hydro-mixtures can bring
several advantages.

Compared to mechanical transport, the use of a pipeline ensures a dust-free environment, demands substantially less space, makes full automation possible and requires a minimum of operating staff.

On the other hand, it brings higher operational pressures and considerable demands on the quality of the pumping equipment and control system.

In solid-liquid multiphase flow, the separate phases move at different average velocities, and the in situ concentrations are not the same as the concentrations at which the phases are introduced into or removed from the system.
The variation of the in situ concentrations from the supply concentrations is referred to as the hold-up phenomenon.
The hold-up effect is measured by the hold-up ratio, given by the ratio of the average in situ concentration to the mean discharge concentration:
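The equation itself did not survive extraction; from the definition just stated, the hold-up ratio takes the following form (the symbols are assumed notation, not taken from the source):

```latex
% Hold-up ratio: average in situ concentration over mean discharge concentration
% (notation assumed, since the original equation was lost in extraction)
H_R = \frac{\bar{C}_{v,\mathrm{in\,situ}}}{C_{v,\mathrm{discharge}}}
```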

MSE for Different Numbers of Nodes Generated by the ANN

Training Function | Node-1 | Node-2 | Node-3 | Node-4 | Node-5 | Node-6 | Node-7 | Node-8 | Node-9 | Node-10
traingdm | 0.007927 | 0.006512 | 0.00863 | 0.007665 | 0.010365 | 0.010908 | 0.009283 | 0.011281 | 0.012347 | 0.014219
traingda | 0.002227 | 0.0019 | 5.56E-04 | 0.003659 | 0.001214 | 6.04E-04 | 0.003803 | 0.001032 | 0.002219 | 9.43E-04
trainbfg | 0.001225 | 0.011836 | 0.00239 | 0.001146 | 0.002847 | 0.003317 | 7.52E-04 | 0.001714 | 0.004518 | 6.56E-04
traincgp | 0.001603 | 0.001177 | 0.001428 | 5.93E-04 | 5.16E-04 | 0.003119 | 4.38E-04 | 0.001253 | 4.43E-04 | 6.07E-04
traingdx | 7.08E-04 | 0.003339 | 0.002992 | 0.004041 | 8.97E-04 | 0.001035 | 0.001412 | 0.001668 | 0.006168 | 0.001416
trainoss | 0.004139 | 0.00412 | 0.001275 | 0.008528 | 0.001165 | 0.001349 | 0.002171 | 0.001366 | 0.00205 | 0.001161
trainlm | 1.27E-04 | 3.68E-04 | 4.55E-04 | 8.22E-05 | 0.003833 | 2.61E-04 | 1.46E-04 | 6.54E-04 | 1.86E-04 | 2.57E-04
trainr | 0.002642 | 5.77E-04 | 5.69E-04 | 6.36E-04 | 4.39E-04 | 5.80E-04 | 0.00918 | 0.001271 | 0.001008 | 5.43E-04
traincgb | 3.55E-04 | 0.002754 | 0.001771 | 4.25E-04 | 1.34E-04 | 3.76E-04 | 4.31E-04 | 9.09E-04 | 5.52E-04 | 5.51E-04
trainrp | 0.004059 | 0.005213 | 0.003364 | 0.002913 | 0.001135 | 4.30E-04 | 7.05E-04 | 0.006367 | 0.001048 | 0.002184
trainc | 3.93E-04 | 0.00201 | 0.003713 | 6.67E-04 | 6.00E-04 | 7.27E-04 | 5.14E-04 | 3.50E-04 | 5.33E-04 | 5.62E-04
traincgf | 0.004139 | 0.002698 | 9.29E-04 | 0.001207 | 7.46E-04 | 4.45E-04 | 0.001863 | 0.001024 | 0.002795 | 0.002099
trainb | 0.004646 | 0.005271 | 0.003376 | 0.006689 | 0.009102 | 0.004574 | 0.005323 | 0.001448 | 0.002555 | 0.004726
trainru | 0.014015 | 0.023642 | 0.054308 | 0.012482 | 0.037854 | 0.042394 | 0.02013 | 0.020234 | 0.018043 | 0.013641
traingd | 0.010893 | 0.014854 | 0.008741 | 0.008333 | 0.005302 | 0.019992 | 0.00968 | 0.005564 | 0.022081 | 0.009663
trainscg | 5.75E-04 | 0.004944 | 0.001014 | 9.81E-04 | 5.00E-04 | 4.66E-04 | 0.0013 | 0.001291 | 0.001926 | 0.001628
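The mean squared error tabulated above follows the usual definition over N samples (a standard formula, not quoted from the slides):

```latex
% Mean squared error (standard definition, assumed here)
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( y_i^{\mathrm{pred}} - y_i^{\mathrm{exp}} \right)^2
```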


Regime identification
Regime identification is important for slurry pipeline design, as it is the prerequisite for applying different pressure-drop correlations in different regimes.

Four distinct regimes are found to exist in slurry flow in a pipeline, depending upon the average flow velocity:
Sliding bed
Saltation
Heterogeneous suspension
Homogeneous suspension

Regime identification can be treated as a classification problem over these four regimes (a minimal sketch follows below).
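As a purely illustrative sketch (not the method of the cited work), regime identification can be framed as a four-class classification problem; the features and labels below are synthetic stand-ins for measured data, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic operating points: hypothetical velocity (m/s) and solids fraction
rng = np.random.default_rng(0)
velocity = rng.uniform(0.5, 5.0, size=200)
concentration = rng.uniform(0.05, 0.40, size=200)
X = np.column_stack([velocity, concentration])

# Hypothetical labels by velocity bands, standing in for observed regimes:
# 0: sliding bed, 1: saltation, 2: heterogeneous, 3: homogeneous suspension
labels = np.digitize(velocity, bins=[1.5, 2.5, 3.5])

clf = MLPClassifier(hidden_layer_sizes=(6,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict([[3.0, 0.2]]))  # predicted regime index for a new point
```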

MSE for Different Numbers of Nodes in Regime Identification

Training Function | Node-1 | Node-2 | Node-3 | Node-4 | Node-5 | Node-6 | Node-7 | Node-8 | Node-9 | Node-10
traingdm | 1.01E+00 | 9.65E-01 | 9.09E-01 | 9.79E-01 | 1.05E+00 | 9.22E-01 | 1.45E+00 | 1.04E+00 | 1.87E+00 | 1.18E+00
traingda | 1.09E+00 | 1.01E+00 | 1.01E+00 | 0.804162 | 0.95323 | 1.02E+00 | 0.951856 | 1.45E+00 | 0.856645 | 8.34E-01
trainbfg | 8.88E-01 | 0.937544 | 0.889618 | 1.30E+00 | 0.982814 | 8.14E-01 | 8.89E-01 | 0.948371 | 1.265041 | 1.11E+00
traincgp | 0.920743 | 0.90874 | 0.897977 | 1.74E+00 | 7.97E-01 | 0.918543 | 1.23E+00 | 0.934043 | 7.46E-01 | 7.99E-01
traingdx | 1.08E+00 | 1.01E+00 | 1.05E+00 | 9.57E-01 | 8.69E-01 | 1.12E+00 | 1.49E+00 | 1.29E+00 | 8.86E-01 | 1.07E+00
trainoss | 0.939558 | 1.048902 | 1.52E+00 | 0.932179 | 0.830731 | 0.615084 | 0.840119 | 0.798551 | 0.928933 | 0.806577
trainlm | 8.97E-01 | 9.24E-01 | 9.19E-01 | 9.54E-01 | 0.966175 | 2.23E+00 | 9.12E-01 | 1.57E+00 | 1.31E+00 | 1.51E+00
trainr | 0.956817 | 1.49E+00 | 8.84E-01 | 9.92E-01 | 1.09E+00 | 6.32E-01 | 1.008749 | 1.385658 | 1.274897 | 1.92E+00
traincgb | 1.13E+00 | 1.052034 | 0.971989 | 1.01E+00 | 9.80E-01 | 8.37E-01 | 8.01E-01 | 9.67E-01 | 9.23E-01 | 8.43E-01
trainrp | 0.842406 | 1.01232 | 1.023502 | 1.009231 | 0.912786 | 7.35E-01 | 8.65E-01 | 0.865856 | 1.081044 | 0.850224
trainc | 7.68E-01 | 7.73E-01 | 7.98E-01 | 5.57E-01 | 1.33E+00 | 1.82E+00 | 9.26E-01 | 5.23E-01 | 1.40E+00 | 9.83E-01
traincgf | 9.82E-01 | 9.24E-01 | 9.22E-01 | 8.88E-01 | 1.02E+00 | 9.42E-01 | 9.65E-01 | 1.18E+00 | 6.83E-01 | 8.00E-01
trainb | 9.27E-01 | 1.02E+00 | 1.08E+00 | 9.72E-01 | 1.04E+00 | 1.11E+00 | 1.06E+00 | 9.27E-01 | 1.18E+00 | 2.62E+00
trainru | 1.201813 | 1.05E+00 | 1.248647 | 1.504672 | 1.941665 | 1.173197 | 6.465297 | 2.434561 | 2.619652 | 7.829315
traingd | 1.025887 | 0.971142 | 1.039606 | 0.961899 | 1.062214 | 0.990724 | 0.821197 | 1.309626 | 0.742879 | 1.050498
trainscg | 1.03E+00 | 0.948015 | 0.990479 | 9.85E-01 | 8.30E-01 | 7.74E-01 | 1.062524 | 0.82514 | 0.940529 | 1.305016


CONCLUSION
From this study it may be concluded that Artificial Neural Networks (ANNs) can be applied very effectively in chemical process engineering.

ANNs can calculate and predict process data very accurately.

Applications of ANNs in fields of engineering other than data prediction, fault diagnosis and process control can be explored.

References
1. Tambe, S. S., Kulkarni, B. D., Deshpande, P. B., Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical and Biological Sciences, Simulation & Advanced Controls, Inc., USA (1996).
2. Lahiri, S. K., Ghanta, K. C., Development of an artificial neural network correlation for prediction of hold-up of slurry transport in pipelines, Chemical Engineering Science, 63, 1497-1509 (2008).
3. Lahiri, S. K., Ghanta, K. C., Development of an Artificial Neural Network Correlation for Prediction of Pressure Drop of Slurry Transport in Pipelines, International Journal of Mathematics, Science & Engineering Applications (IJMESE), Vol. 2, No. 1, pp. 1-21 (2008).
4. Lahiri, S. K., Ghanta, K. C., Development of a hybrid support vector machine and genetic algorithm model for regime identification of slurry transport in pipelines, Asia-Pac. J. Chem. Eng., published online in Wiley InterScience (www.interscience.wiley.com), DOI: 10.1002/apj.410 (2009).
5. Agarwal, M., Jade, A. M., Jayaraman, V. K., Kulkarni, B. D., Support vector machines: A useful tool for process engineering applications, Chem. Engg. Progr., 57-62 (2003).
6. Vapnik, V., The Nature of Statistical Learning Theory, Springer Verlag, New York (1995).
7. Vapnik, V., Statistical Learning Theory, John Wiley, New York (1998).
Thank You
