Derivation: Error Backpropagation & Gradient Descent for Neural Networks
Introduction
Artificial neural networks (ANNs) are a powerful class of models used for nonlinear regression and classification tasks, motivated by biological neural computation. The general idea behind ANNs is pretty straightforward: map some input onto a desired target value using a distributed cascade of nonlinear transformations (see Figure 1). However, for many, myself included, the learning algorithm used to train ANNs can be difficult to get your head around at first. In this post I give a step-by-step walk-through of the derivation of the gradient descent learning algorithm commonly used to train ANNs (aka the backpropagation algorithm) and try to provide some high-level insights into the computations being performed during learning.
Some Background and Notation
An ANN consists of an input layer, an output layer, and any number (including zero) of hidden layers situated between the input and output layers. Figure 1 diagrams an ANN with a single hidden layer. The feed-forward computations performed by the ANN are as follows: The signals from the input layer $a_i$ are multiplied by a set of fully-connected weights $w_{ij}$ connecting the input layer to the hidden layer. These weighted signals are then summed and combined with a bias $b_j$ (not displayed in the graphical model in Figure 1). This calculation forms the pre-activation signal $z_j = b_j + \sum_i a_i w_{ij}$ for the hidden layer. The pre-activation signal is then transformed by the hidden layer activation function $g_j$ to form the feed-forward activation signals $a_j = g_j(z_j)$ leaving the hidden layer. In a similar fashion, the hidden layer activation signals $a_j$ are multiplied by the weights connecting the hidden layer to the output layer, $w_{jk}$, a bias $b_k$ is added, and the resulting signal is transformed by the output activation function $g_k$ to form the network output $a_k$. The output is then compared to a desired target $t_k$ and the error between the two is calculated.
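To make these computations concrete, here is a minimal NumPy sketch of the forward pass for the network in Figure 1. The sigmoid activation and the variable names are illustrative assumptions on my part; the derivation below holds for any differentiable activation function.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid; one common (differentiable) choice for g."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(a_i, W_ij, b_j, W_jk, b_k):
    """Feed-forward pass for a single-hidden-layer network (Figure 1).

    a_i  : (n_in,)        input signals
    W_ij : (n_in, n_hid)  input-to-hidden weights
    b_j  : (n_hid,)       hidden biases
    W_jk : (n_hid, n_out) hidden-to-output weights
    b_k  : (n_out,)       output biases
    """
    z_j = a_i @ W_ij + b_j   # pre-activation signal for the hidden layer
    a_j = sigmoid(z_j)       # activation signal leaving the hidden layer
    z_k = a_j @ W_jk + b_k   # pre-activation signal for the output layer
    a_k = sigmoid(z_k)       # network output
    return z_j, a_j, z_k, a_k
```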
Training a neural network involves determining the set of parameters $\theta = \{\mathbf{W}, \mathbf{b}\}$ that minimize the errors that the network makes. Often the choice for the error function is the sum of the squared differences between the target values $t_k$ and the network output $a_k$:

$$E = \frac{1}{2}\sum_{k \in K}(a_k - t_k)^2 \tag{1}$$
This problem can be solved using gradient descent, which requires determining $\frac{\partial E}{\partial \theta}$ for all $\theta$ in the model. Note that, in general, there are two sets of parameters: those parameters that are associated with the output layer (i.e. $w_{jk}$, $b_k$), and thus directly affect the network output error; and the remaining parameters that are associated with the hidden layer(s) (i.e. $w_{ij}$, $b_j$), and thus affect the output error indirectly.
Before we begin, let's define the notation that will be used in the remainder of the derivation. Please refer to Figure 1 for any clarification.
- $z_j$: input to node $j$ for layer $l$
- $g_j$: activation function for node $j$ in layer $l$ (applied to $z_j$)
- $a_j = g_j(z_j)$: output/activation of node $j$ in layer $l$
- $w_{ij}$: weights connecting node $i$ in layer $l-1$ to node $j$ in layer $l$
- $b_j$: bias for unit $j$ in layer $l$
- $t_k$: target value for node $k$ in the output layer
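With this notation, the feed-forward computations described above can be summarized compactly. This is just a restatement of the text, under the convention that indices $i$, $j$, and $k$ range over the input, hidden, and output layer units, respectively:

$$z_j = b_j + \sum_i a_i w_{ij}, \qquad a_j = g_j(z_j)$$

$$z_k = b_k + \sum_j a_j w_{jk}, \qquad a_k = g_k(z_k)$$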
Gradients for Output Layer Weights
Output layer connection weights, $w_{jk}$
Since the output layer parameters directly affect the value of the error function, determining the gradients for those parameters is fairly straightforward:

$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial}{\partial w_{jk}}\,\frac{1}{2}\sum_{k \in K}(a_k - t_k)^2 = (a_k - t_k)\frac{\partial}{\partial w_{jk}}(a_k - t_k) \tag{2}$$
Here, we've used the Chain Rule. (Also notice that the summation disappears in the derivative. This is because when we take the partial derivative with respect to the $k$-th dimension/node, the only term that survives in the error gradient is the $k$-th, and thus we can ignore the remaining terms in the summation.) The derivative with respect to $t_k$ is zero because $t_k$ does not depend on $w_{jk}$. Also, we note that $a_k = g_k(z_k)$. Thus

$$\frac{\partial E}{\partial w_{jk}} = (a_k - t_k)\frac{\partial a_k}{\partial w_{jk}} = (a_k - t_k)\frac{\partial g_k(z_k)}{\partial w_{jk}} = (a_k - t_k)g_k'(z_k)\frac{\partial z_k}{\partial w_{jk}} \tag{3}$$

where, again, we use the Chain Rule. Now, recall that $z_k = b_k + \sum_j g_j(z_j)w_{jk}$ and thus $\frac{\partial z_k}{\partial w_{jk}} = g_j(z_j) = a_j$, giving:

$$\frac{\partial E}{\partial w_{jk}} = (a_k - t_k)\,g_k'(z_k)\,a_j \tag{4}$$
The gradient of the error function with respect to the output layer weights is a product of three terms. The first term is the difference between the network output $a_k$ and the target value $t_k$. The second term is the derivative of the output layer activation function, $g_k'(z_k)$. And the third term is the activation output of node $j$ in the hidden layer, $a_j$.
If we define $\delta_k$ to be all the terms that involve index $k$:

$$\delta_k = (a_k - t_k)\,g_k'(z_k)$$

we obtain the following expression for the derivative of the error with respect to the output weights $w_{jk}$:

$$\frac{\partial E}{\partial w_{jk}} = \delta_k\,a_j \tag{5}$$
Here the $\delta_k$ terms can be interpreted as the network output error after being back-propagated through the output activation function, thus creating an error "signal". Loosely speaking, Equation (5) can be interpreted as determining how much each $w_{jk}$ contributes to the error signal by weighting the error signal by the magnitude of the output activation from the previous (hidden) layer associated with each weight (see Figure 1). The gradients with respect to each parameter are thus considered to be the "contribution" of the parameter to the error signal and should be negated during learning. Thus the output weights are updated as $w_{jk} \leftarrow w_{jk} - \eta\,\delta_k a_j$, where $\eta$ is some step size ("learning rate") along the negative gradient.
As we'll see shortly, the process of backpropagating the error signal can iterate all the way back to the input layer by successively projecting $\delta_k$ back through $w_{jk}$, then through the activation function for the hidden layer via $g_j'(z_j)$ to give the error signal $\delta_j$, and so on. This backpropagation concept is central to training neural networks with more than one layer.
Output layer biases, $b_k$
For the gradient with respect to the output layer biases, we follow the same routine as above for $w_{jk}$. However, the third term in Equation (3) is $\frac{\partial z_k}{\partial b_k} = \frac{\partial}{\partial b_k}\left(b_k + \sum_j g_j(z_j)w_{jk}\right) = 1$, giving the following gradient for the output biases:

$$\frac{\partial E}{\partial b_k} = (a_k - t_k)\,g_k'(z_k)(1) = \delta_k \tag{6}$$
Thus the gradient for the biases is simply the back-propagated error from the output units. One interpretation of this is that the biases are weights on activations that are always equal to one, regardless of the feed-forward signal. Consequently, the bias gradients aren't affected by the feed-forward signal, only by the error.
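As a concrete illustration, here is how Equations (5) and (6) might look in NumPy, continuing the forward-pass sketch above. The sigmoid derivative $g'(z) = g(z)(1 - g(z)) = a(1 - a)$ is an assumption tied to that particular choice of activation:

```python
def output_layer_grads(a_j, a_k, t_k):
    """Output-layer gradients, per Equations (5) and (6)."""
    delta_k = (a_k - t_k) * a_k * (1.0 - a_k)  # error signal delta_k
    dW_jk = np.outer(a_j, delta_k)             # dE/dw_jk = delta_k * a_j  (Eq. 5)
    db_k = delta_k                             # dE/db_k  = delta_k        (Eq. 6)
    return delta_k, dW_jk, db_k
```

The parameters would then be stepped along the negative gradient, e.g. `W_jk -= eta * dW_jk`.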
Gradients for Hidden Layer Weights
Due to the indirect effect of the hidden layer on the output error, calculating the gradients for the hidden layer weights $w_{ij}$ is somewhat more involved. However, the process starts just the same:

$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial}{\partial w_{ij}}\,\frac{1}{2}\sum_{k \in K}(a_k - t_k)^2 = \sum_{k \in K}(a_k - t_k)\frac{\partial a_k}{\partial w_{ij}}$$

Notice here that the sum does not disappear because, due to the fact that the layers are fully connected, each of the hidden unit outputs affects the state of each output unit. Continuing on, noting that $a_k = g_k(z_k)$:

$$\frac{\partial E}{\partial w_{ij}} = \sum_{k \in K}(a_k - t_k)\,g_k'(z_k)\frac{\partial z_k}{\partial w_{ij}} \tag{7}$$
Here, again, we use the Chain Rule. Ok, now here's where things get "slightly more involved". Notice that the partial derivative in the third term in Equation (7) is with respect to $w_{ij}$, but the target $z_k$ is a function of index $k$. How the heck do we deal with that!? Well, if we expand $z_k$, we find that it is composed of other sub-functions (also see Figure 1):

$$z_k = b_k + \sum_j a_j w_{jk} = b_k + \sum_j g_j(z_j)\,w_{jk} = b_k + \sum_j g_j\!\left(b_j + \sum_i a_i w_{ij}\right)w_{jk} \tag{8}$$
From the last term in Equation (8) we see that $z_k$ is indirectly dependent on $w_{ij}$. Equation (8) also suggests that we can use the Chain Rule to calculate $\frac{\partial z_k}{\partial w_{ij}}$. This is probably the trickiest part of the derivation, and goes like…

$$\frac{\partial z_k}{\partial w_{ij}} = \frac{\partial z_k}{\partial a_j}\frac{\partial a_j}{\partial z_j}\frac{\partial z_j}{\partial w_{ij}} = \frac{\partial}{\partial a_j}\left(b_k + \sum_j a_j w_{jk}\right)\frac{\partial g_j(z_j)}{\partial z_j}\frac{\partial}{\partial w_{ij}}\left(b_j + \sum_i a_i w_{ij}\right) = w_{jk}\,g_j'(z_j)\,a_i \tag{9}$$
Now, plugging Equation (9) into the third term in Equation (7) gives the following for $\frac{\partial E}{\partial w_{ij}}$:

$$\frac{\partial E}{\partial w_{ij}} = \sum_{k \in K}(a_k - t_k)\,g_k'(z_k)\,w_{jk}\,g_j'(z_j)\,a_i \tag{10}$$
Notice that the gradient for the hidden layer weights has a similar form to that of the gradient for the output layer weights. Namely, the gradient is some term weighted by the output activations from the layer below ($a_i$). For the output weight gradients, the term that was weighted by $a_j$ was the back-propagated error signal $\delta_k$ (i.e. Equation (5)). Here, the weighted term includes $\delta_k$, but the error signal is further projected onto $w_{jk}$ and then weighted by the derivative of the hidden layer activation function, $g_j'(z_j)$. Thus, the gradient for the hidden layer weights is simply the output error signal backpropagated to the hidden layer, then weighted by the input to the hidden layer. To make this idea more explicit, we can define the resulting error signal backpropagated to layer $j$ as $\delta_j$, which includes all terms in Equation (10) that involve index $j$:

$$\delta_j = g_j'(z_j)\sum_{k \in K}\delta_k w_{jk}$$

This definition results in the following gradient for the hidden unit weights:

$$\frac{\partial E}{\partial w_{ij}} = \delta_j\,a_i \tag{11}$$
This suggests that in order to calculate the weight gradients at any layer $l$ in an arbitrarily-deep neural network, we simply need to calculate the backpropagated error signal that reaches that layer, $\delta_l$, and weight it by the feed-forward signal $a_{l-1}$ feeding into that layer! Analogously, the gradient for the hidden layer weights can be interpreted as a proxy for the "contribution" of the weights to the output error signal, which can only be observed, from the point of view of the weights, by backpropagating the error signal to the hidden layer.
Hidden layer biases, $b_j$
Calculating the gradients for the hidden layer biases follows a very similar procedure to that for the hidden layer weights where, as in Equation (9), we use the Chain Rule to calculate $\frac{\partial z_k}{\partial b_j}$. However, unlike Equation (9), the third term that results for the biases is slightly different: $\frac{\partial z_j}{\partial b_j} = 1$, giving

$$\frac{\partial E}{\partial b_j} = \sum_{k \in K}(a_k - t_k)\,g_k'(z_k)\,w_{jk}\,g_j'(z_j) = \delta_j \tag{12}$$

In a similar fashion to the calculation of the bias gradients for the output layer, the gradients for the hidden layer biases are simply the backpropagated error signal reaching that layer. This suggests that we can also calculate the bias gradients at any layer $l$ in an arbitrarily-deep network by simply calculating the backpropagated error signal reaching that layer, $\delta_l$!
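Continuing the NumPy sketch, the hidden-layer gradients of Equations (11) and (12) take an analogous form; the error signal is first projected back through `W_jk` and the (again, assumed sigmoid) activation derivative:

```python
def hidden_layer_grads(a_i, a_j, W_jk, delta_k):
    """Hidden-layer gradients, per Equations (11) and (12)."""
    delta_j = (W_jk @ delta_k) * a_j * (1.0 - a_j)  # backpropagated error signal delta_j
    dW_ij = np.outer(a_i, delta_j)                  # dE/dw_ij = delta_j * a_i  (Eq. 11)
    db_j = delta_j                                  # dE/db_j  = delta_j        (Eq. 12)
    return delta_j, dW_ij, db_j
```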
Wrapping up
In this post we went over some of the formal details of the backpropagation learning algorithm. The math covered in this post allows us to train arbitrarily deep neural networks by re-applying the same basic computations. Those computations are:
- Calculate the feed-forward signals from the input to the output.
- Calculate the output error $E$ based on the predictions $a_k$ and the target $t_k$.
- Backpropagate the error signals by weighting them by the weights in previous layers and the gradients of the associated activation functions.
- Calculate the gradients $\frac{\partial E}{\partial \theta}$ for the parameters based on the backpropagated error signals and the feed-forward signals from the inputs.
- Update the parameters using the calculated gradients.
The only real constraints on model construction are ensuring that the error function $E$ and the activation functions $g$ are differentiable. For more details on implementing ANNs and seeing them at work, stay tuned for the next post.
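To tie the five steps together, here is a minimal end-to-end sketch of one gradient descent step using the functions defined in the snippets above. The layer sizes, learning rate, and single input/target pair are illustrative placeholders, not recommendations:

```python
# One full gradient descent step for the single-hidden-layer network.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, eta = 4, 8, 2, 0.5

W_ij = rng.normal(scale=0.1, size=(n_in, n_hid))
b_j = np.zeros(n_hid)
W_jk = rng.normal(scale=0.1, size=(n_hid, n_out))
b_k = np.zeros(n_out)

a_i = rng.normal(size=n_in)   # a single input example
t_k = np.array([0.0, 1.0])    # its target

# 1-2. Feed-forward and output error
z_j, a_j, z_k, a_k = forward(a_i, W_ij, b_j, W_jk, b_k)
E = 0.5 * np.sum((a_k - t_k) ** 2)  # Equation (1)

# 3-4. Backpropagate error signals and compute gradients
delta_k, dW_jk, db_k = output_layer_grads(a_j, a_k, t_k)
delta_j, dW_ij, db_j = hidden_layer_grads(a_i, a_j, W_jk, delta_k)

# 5. Update the parameters along the negative gradient
W_jk -= eta * dW_jk; b_k -= eta * db_k
W_ij -= eta * dW_ij; b_j -= eta * db_j
```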