

AIAM

Group 16
KEDAR MAHAMUNI | PIYUSH AGRAWAL | SOUMYA RANJAN BEHERA | VENKAT SAMALA
We consider RMSE to be the conclusive measure of this model's accuracy. The code produces a
matrix of RMSE values, from which we calculate the mean RMSE; this mean RMSE is then used to
assess the performance of the model.
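The code itself is not reproduced in this excerpt. As a rough illustration of the procedure just described (repeated training runs, a set of RMSE values, and their mean), a minimal Python sketch using scikit-learn is given below; the function name mean_rmse, the 80/20 split, the 10 repetitions and the use of MLPRegressor are illustrative assumptions, not the group's original code (which, judging by the logistic/tanh activations and the NULL learning rate, was more likely written with R's neuralnet package).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def mean_rmse(X, y, hidden=(3,), activation="logistic", lr=0.1, runs=10):
    # Train the network `runs` times on different random splits and reduce
    # the resulting vector of RMSE values to its mean (the "mean RMSE" above).
    rmse_values = []
    for seed in range(runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                   random_state=seed)
        net = MLPRegressor(hidden_layer_sizes=hidden, activation=activation,
                           learning_rate_init=lr, max_iter=2000,
                           random_state=seed)
        net.fit(X_tr, y_tr)
        pred = net.predict(X_te)
        rmse_values.append(np.sqrt(mean_squared_error(y_te, pred)))
    return float(np.mean(rmse_values))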

First we experimented with the normalized dataset, followed by the 0,1 coded data.
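The two data preparations are not spelled out in this excerpt; the sketch below shows one plausible reading of them, assuming min-max scaling of the numeric columns for the "normalized" dataset and one-hot (0/1) dummy coding of the categorical columns for the "0,1 coded" data. The file name data.csv and the column handling are placeholders.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("data.csv")  # placeholder file name

# "Normalized dataset": min-max scale the numeric columns into [0, 1].
numeric_cols = df.select_dtypes(include="number").columns
normalized = df.copy()
normalized[numeric_cols] = MinMaxScaler().fit_transform(df[numeric_cols])

# "0,1 coded data": dummy/one-hot code the categorical columns as 0/1 indicators.
coded = pd.get_dummies(df, dtype=int)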

Analysis of Neural Network with normalized dataset:

With Normalized dataset


Layers   Nodes    Activation Function   Learning Rate   Mean RMSE
1        3        logistic              NULL            1.236150144
2        3,3      logistic              NULL            1.213693011
3        3,3,3    logistic              NULL            1.175137275
1        8        logistic              NULL            1.262626843

Layers   Nodes    Activation Function   Learning Rate   Mean RMSE
1        3        tanh                  NULL            1.437800398
2        3,3      tanh                  NULL            1.388236546
3        3,3,3    tanh                  NULL            1.345666505
1        8        tanh                  NULL            1.561703982

1. Comparing RMSE: Using the loop, we train the neural network multiple times and find
   that the mean RMSE decreases as the network is trained repeatedly.
2. Change in Number of Layers: Referring to the tables above, we infer that the mean RMSE
   decreases as the number of layers increases, e.g. for a single layer the RMSE is 1.236,
   which drops to 1.175 with 3 layers. Hence the model becomes more accurate as we
   increase the number of hidden layers.
3. Change in number of nodes in one layer: When we increased the number of nodes in a
   single layer, the mean RMSE increased. This may be due to overfitting.
4. Change in Activation Function: Changing the activation function from ‘logistic’ to ‘tanh’
   increased the mean RMSE, as is clear from the results in the tables above.
5. Change in Learning Rate: We experimented with the logistic function, layers = 1,
   nodes = 3, and found no significant difference in the mean RMSE when the learning rate
   was changed from 0.1 to 0.05 (a sweep of this kind is sketched after this list).
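The comparisons in points 2 to 5 above can be reproduced by looping over the configurations from the tables and scoring each with the mean_rmse helper sketched earlier. The grid below is illustrative only, and X_norm, y stand for the already-prepared inputs and target.

# Illustrative sweep over the configurations listed in the tables above.
configs = [
    {"hidden": (3,),      "activation": "logistic"},
    {"hidden": (3, 3),    "activation": "logistic"},
    {"hidden": (3, 3, 3), "activation": "logistic"},
    {"hidden": (8,),      "activation": "logistic"},
    {"hidden": (3,),      "activation": "tanh"},
    {"hidden": (3, 3),    "activation": "tanh"},
    {"hidden": (3, 3, 3), "activation": "tanh"},
    {"hidden": (8,),      "activation": "tanh"},
]
for cfg in configs:
    print(cfg, "mean RMSE =", round(mean_rmse(X_norm, y, **cfg), 4))

# Learning-rate comparison from point 5 (logistic, 1 layer, 3 nodes).
for lr in (0.1, 0.05):
    score = mean_rmse(X_norm, y, hidden=(3,), activation="logistic", lr=lr)
    print("learning rate", lr, "-> mean RMSE =", round(score, 4))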

Analysis of Neural Network with 0,1 coded data:

1. Comparing RMSE: Again, using the loop, we train the neural network multiple times and
   find that the mean RMSE decreases as the network is trained repeatedly.
2. Change in Number of Layers: Referring to the tables below, we infer that the mean RMSE
   increases as the number of layers increases, e.g. for a single layer the RMSE is
   0.6635833, which rises to 0.6910081 with 3 layers.
Using 0,1 coded data
Layers   Nodes    Activation Function   Learning Rate   Mean RMSE
1        3        logistic              NULL            0.6635833
2        3,3      logistic              NULL            0.678151925
3        3,3,3    logistic              NULL            0.6910081
1        10       logistic              NULL            0.626001089

Layers   Nodes    Activation Function   Learning Rate   Mean RMSE
1        3        tanh                  NULL            0.7287653
2        3,3      tanh                  NULL            0.72923465
3        3,3,3    tanh                  NULL            0.74624985
1        10       tanh                  NULL            0.7026304

3. Change in number of nodes in one layer: When we increased the number of nodes in a
   single layer, the mean RMSE decreased, so the model becomes more accurate as the
   number of nodes in a single layer increases.
4. Change in Activation Function: Changing the activation function from ‘logistic’ to ‘tanh’
   increased the mean RMSE, as is clear from the results in the tables above.
5. Change in Learning Rate: We ran the same experiment as on the previous dataset (logistic
   function, layers = 1, nodes = 3) and again found no significant difference in the mean
   RMSE when the learning rate was changed from 0.1 to 0.05.
