
• Deep learning is a machine learning method that takes in an input X and uses it to predict an output Y.
• Given a large dataset of input and output pairs, a deep learning algorithm will
try to minimize the difference between its predictions and the expected outputs.
• By doing this, it tries to learn the association/pattern between given inputs and
outputs — this in turn allows a deep learning model to generalize to inputs that
it hasn’t seen before.
• Deep learning algorithms use a structure called a neural network to find
associations between a set of inputs and outputs.
• A neural network is composed of input, hidden, and output layers, all of
which are made up of “nodes”. Input layers take in a numerical
representation of the data (e.g. images encoded as pixel values), output layers
produce the predictions, and hidden layers perform most of the computation; a
minimal sketch of this layered structure follows below.
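
For illustration, here is a minimal sketch of such a layered network in NumPy. The layer sizes, the tanh activation, and the input values are assumptions chosen for the example, not details taken from the text.

import numpy as np

rng = np.random.default_rng(0)

# Each layer is connected to the next by a weight matrix and a bias vector
# (assumed sizes: 3 input nodes, 4 hidden nodes, 1 output node).
W_hidden = rng.normal(size=(3, 4))   # input layer -> hidden layer
b_hidden = np.zeros(4)
W_output = rng.normal(size=(4, 1))   # hidden layer -> output layer
b_output = np.zeros(1)

x = np.array([0.5, -1.2, 3.0])               # numerical representation of one input
hidden = np.tanh(x @ W_hidden + b_hidden)    # hidden layer does most of the computation
prediction = hidden @ W_output + b_output    # output layer produces the prediction
print(prediction)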
Step 1: Forward Propagation
All weights in the network are initially assigned at random. The network then
takes the first training example as input, and the output V from the node under
consideration can be calculated. The outputs from the other nodes in the hidden
layer are calculated in the same way. The outputs of the nodes in the hidden
layer then act as inputs to the nodes in the output layer.
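
As a rough sketch, the output V of a single node is just a weighted sum of its inputs passed through an activation function. The weights, bias, and input values below are made-up numbers for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.5])        # first training example (two input features)
w = np.array([0.15, 0.25])      # randomly assigned weights for this node
b = 0.35                        # bias term

V = sigmoid(np.dot(w, x) + b)   # weighted sum of inputs passed through an activation
print(V)                        # this output becomes an input to the output layer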

Step 2: Backpropagation and Weight Update


We calculate the total error at the output nodes and propagate these errors back through
the network using backpropagation to calculate the gradients. Then we use an
optimization method such as gradient descent to ‘adjust’ all weights in the network with
the aim of reducing the error at the output layer.

We repeat this process with all other training examples in our dataset. Then, our network is
said to have learnt those examples.
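
Below is a hedged end-to-end sketch of Steps 1 and 2: a forward pass, backpropagation of the error, and a gradient-descent weight update, repeated over the whole dataset. The network shape, learning rate, loss function, and toy data are illustrative assumptions, not details from the text.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                    # 8 training examples, 3 features each
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy targets (made up for the example)

W1, b1 = rng.normal(size=(3, 4)), np.zeros((1, 4))   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output weights and biases
lr = 0.1                                             # learning rate (an assumed value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):                  # repeat the process over the whole dataset
    # Step 1: forward propagation
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # total error at the output nodes (here: mean squared error)
    loss = np.mean((pred - y) ** 2)

    # Step 2: backpropagation of the error to get the gradients
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = d_pred @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # gradient descent: adjust all weights to reduce the error at the output layer
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final loss:", loss)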
Activation Function
An activation function determines the output of a node in the neural network, for example whether it should ‘fire’ (yes) or not (no).
It maps the resulting values into a range such as 0 to 1 or -1 to 1, depending on the function.
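
For example, the sigmoid and tanh functions below map any real value into the ranges (0, 1) and (-1, 1) respectively; the input values are arbitrary.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # maps any value into the range (0, 1)

def tanh(z):
    return np.tanh(z)                 # maps any value into the range (-1, 1)

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(z))   # close to 0 for large negative inputs, close to 1 for large positive ones
print(tanh(z))      # close to -1 and +1 at the extremes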

Gradient Descent
Gradient descent is an optimization algorithm used to minimize some function by
iteratively moving in the direction of steepest descent as defined by the negative of
the gradient. In machine learning, we use gradient descent to update the parameters of
our model.
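
A minimal sketch of gradient descent on the simple function f(w) = (w - 3)^2, whose minimum is at w = 3; the starting point and learning rate are assumptions.

def grad(w):
    return 2 * (w - 3)        # derivative of (w - 3)^2

w = 0.0                       # initial parameter value (an assumption)
lr = 0.1                      # learning rate (step size)

for step in range(50):
    w -= lr * grad(w)         # step in the direction of the negative gradient

print(w)                      # close to 3, the minimizer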

Loss Function
Our objective is to minimize the loss of a neural network by optimizing its
parameters (weights). The loss is calculated by a loss function that compares
the target (actual) value with the value predicted by the neural network. We
then use gradient descent to update the weights of the network so that the
loss is minimized.
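
For instance, a mean squared error loss compares the target values with the predictions; the numbers below are made up for illustration.

import numpy as np

def mse_loss(target, predicted):
    return np.mean((target - predicted) ** 2)

target = np.array([1.0, 0.0, 1.0])       # actual values
predicted = np.array([0.8, 0.2, 0.6])    # values predicted by the network

print(mse_loss(target, predicted))       # 0.08; weights are then updated to reduce this value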
