Category Archives: Theory

Derivation: Error Backpropagation & Gradient Descent for Neural Networks

Introduction

Artificial neural networks (ANNs) are a powerful class of models used for nonlinear regression and classification tasks that are motivated by biological neural computation. The general idea behind ANNs is pretty straightforward: map some input onto a desired target value using a distributed cascade of nonlinear transformations (see Figure 1). However, for many, myself included, the learning algorithm used to train ANNs can be difficult to get your head around at first. In this post I give a step-by-step walk-through of the derivation of the gradient descent learning algorithm commonly used to train ANNs (a.k.a. the backpropagation algorithm) and try to provide some high-level insights into the computations being performed during learning.


Figure 1: Diagram of an artificial neural network with one hidden layer

 

Some Background and Notation

An ANN consists of an input layer, an output layer, and any number (including zero) of hidden layers situated between the input and output layers. Figure 1 diagrams an ANN with a single hidden layer. The feed-forward computations performed by the ANN are as follows: The signals from the input layer a_i are multiplied by a set of fully-connected weights w_{ij} connecting the input layer to the hidden layer. These weighted signals are then summed and combined with a bias b_j (not displayed in the graphical model in Figure 1). This calculation forms the pre-activation signal z_j = b_j + \sum_i a_i w_{ij} for the hidden layer. The pre-activation signal is then transformed by the hidden layer activation function g_j to form the feed-forward activation signals leaving the hidden layer a_j. In a similar fashion, the hidden layer activation signals a_j are multiplied by the weights connecting the hidden layer to the output layer w_{jk}, a bias b_k is added, and the resulting signal is transformed by the output activation function g_k to form the network output a_k. The output is then compared to a desired target t_k and the error between the two is calculated.
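
To make these feed-forward computations concrete, here is a minimal MATLAB sketch for a network like the one in Figure 1. The layer sizes, the random parameter values, and the choice of a logistic sigmoid for both g_j and g_k are assumptions made purely for illustration:

% A MINIMAL FEED-FORWARD PASS (ILLUSTRATIVE SIZES, VALUES, AND ACTIVATIONS)
nInput = 3; nHidden = 4; nOutput = 2;  % hypothetical layer sizes
rng(0);
Wij = randn(nInput,nHidden);   % weights w_ij connecting input to hidden layer
bj  = randn(1,nHidden);        % hidden-layer biases b_j
Wjk = randn(nHidden,nOutput);  % weights w_jk connecting hidden to output layer
bk  = randn(1,nOutput);        % output-layer biases b_k
g   = @(z) 1./(1 + exp(-z));   % logistic activation used here for g_j and g_k

ai = randn(1,nInput);          % input activations a_i
zj = bj + ai*Wij;              % hidden pre-activations z_j
aj = g(zj);                    % hidden activations a_j = g_j(z_j)
zk = bk + aj*Wjk;              % output pre-activations z_k
ak = g(zk);                    % network outputs a_k = g_k(z_k)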

Training a neural network involves determining the set of parameters \theta = \{\mathbf{W},\mathbf{b}\} that minimize the errors that the network makes. Often the choice for the error function is the sum of the squared differences between the target values t_k and the network outputs a_k (for more detail on this choice of error function, see the post on loss functions):

\Large{\begin{array}{rcl} E &=& \frac{1}{2} \sum_{k \in K}(a_k - t_k)^2 \end{array}}

Equation (1)

This problem can be solved using gradient descent, which requires determining \frac{\partial E}{\partial \theta} for all \theta in the model. Note that, in general, there are two sets of parameters: those parameters that are associated with the output layer (i.e. \theta_k = \{w_{jk}, b_k\}), and thus directly affect the network output error; and the remaining parameters that are associated with the hidden layer(s), and thus affect the output error indirectly.

Before we begin, let’s define the notation that will be used in remainder of the derivation. Please refer to Figure 1 for any clarification.

  • {z_j}: input to node j for layer l
  • {g_j}: activation function for node j in layer l (applied to {z_j})
  • a_j=g_j(z_j): output/activation of node j in layer l
  • {w_{ij}}: weights connecting node i in layer (l-1) to node j in layer l
  • {b_{j}}: bias for unit j in layer l
  • {t_{k}}: target value for node k in the output layer

Gradients for Output Layer Weights

Output layer connection weights, w_{jk}

Since the output layer parameters directly affect the value of the error function, determining the gradients for those parameters is fairly straight-forward:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{jk}} &=& \frac{\partial}{\partial w_{jk}}\frac{1}{2} \sum_{k \in K}(a_k - t_k)^2 \\  &=& (a_k - t_k)\frac{\partial}{\partial w_{jk}}(a_k - t_k) \end{array}}

Equation (2)

Here, we’ve used the Chain Rule. (Also notice that the summation disappears in the derivative. This is because when we take the partial derivative with respect to w_{jk}, the only term in the sum over k that depends on w_{jk} is the k-th term, and thus we can ignore the remaining terms in the summation). The derivative with respect to t_k is zero because t_k does not depend on w_{jk}. Also, we note that a_k = g_k(z_k). Thus

\Large{\begin{array}{rcl}\frac{\partial E }{\partial w_{jk}} &=& (a_k - t_k)\frac{\partial}{\partial w_{jk}}a_k \\  &=& (a_k - t_k)\frac{\partial}{\partial w_{jk}}g_k(z_k) \\  &=& (a_k - t_k)g_k'(z_k)\frac{\partial}{\partial w_{jk}}z_k, \end{array}}

Equation (3)

where, again, we use the Chain Rule. Now, recall that z_k = b_k + \sum_j g_j(z_j)w_{jk} and thus \frac{\partial z_{k}}{\partial w_{jk}} = g_j(z_j) = a_j, giving:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{jk}} &=& (a_k - t_k)g_k'(z_k)a_j \end{array}}

Equation (4)

The gradient of the error function with respect to the output layer weights is a product of three terms. The first term is the difference between the network output a_k and the target value t_k. The second term is the derivative of the output layer activation function evaluated at z_k. And the third term is the activation output of node j in the hidden layer.

If we define \delta_k to be all the terms that involve index k:

\Large{\begin{array}{rcl} \delta_k &=& (a_k - t_k)g_k'(z_k)\end{array}}

we obtain the following expression for the derivative of the error with respect to the output weights w_{jk}:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{jk}} = \delta_k a_j \end{array}}

Equation (5)

Here the \delta_k terms can be interpreted as the network output error after being back-propagated through the output activation function, thus creating an error “signal”. Loosely speaking, Equation (5) can be interpreted as determining how much each w_{jk} contributes to the error signal by weighting the error signal by the magnitude of the output activation from the previous (hidden) layer associated with each weight (see Figure 1). The gradients with respect to each parameter are thus considered to be the “contribution” of the parameter to the error signal and should be negated during learning. Thus the output weights are updated as w_{jk}\leftarrow w_{jk} - \eta \frac{\partial E }{\partial w_{jk}}, where \eta is some step size (“learning rate”) along the negative gradient.

As we’ll see shortly, the process of backpropagating the error signal can iterate all the way back to the input layer by successively projecting \delta_k back through w_{jk}, then through the activation function for the hidden layer via g'_j to give the error signal \delta_j, and so on. This backpropagation concept is central to training neural networks with more than one layer.

Output layer biases, \Large{b_{k}}

As for the gradient with respect to the output layer biases, we follow the same routine as above for w_{jk}. However, the third term in Equation (3) is \frac{\partial}{\partial b_k} z_k = \frac{\partial}{\partial b_k} \left[ b_k + \sum_j g_j(z_j)w_{jk}\right] = 1, giving the following gradient for the output biases:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial b_k} &=& (a_k - t_k)g_k'(z_k)(1) \\  &=& \delta_k \end{array}}

Equation (6)

Thus the gradient for the biases is simply the back-propagated error from the output units. One interpretation of this is that the biases are weights on activations that are always equal to one, regardless of the feed-forward signal. Thus the bias gradients aren’t affected by the feed-forward signal, only by the error.
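
To see Equations (5) and (6) in code, here is a short sketch that continues the feed-forward example from earlier (it assumes the variables ai, aj, zj, zk, ak, and the logistic activation g defined there, plus a made-up target vector):

% OUTPUT-LAYER GRADIENTS (CONTINUES THE FEED-FORWARD SKETCH ABOVE)
tk      = [1 0];                   % hypothetical target values t_k
gPrime  = @(z) g(z).*(1 - g(z));   % derivative of the logistic activation
deltaK  = (ak - tk).*gPrime(zk);   % delta_k = (a_k - t_k)*g_k'(z_k)
dE_dWjk = aj'*deltaK;              % Equation (5): dE/dw_jk = delta_k*a_j
dE_dbk  = deltaK;                  % Equation (6): dE/db_k  = delta_k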

 

Gradients for Hidden Layer Weights

Due to the indirect effect of the hidden layer on the output error, calculating the gradients for the hidden layer weights w_{ij} is somewhat more involved. However, the process starts just the same:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{ij}}&=& \frac{\partial}{\partial w_{ij}}\frac{1}{2} \sum_{k \in K}(a_k - t_k)^2 \\  &=& \sum_{k \in K} (a_k - t_k) \frac{\partial}{\partial w_{ij}}a_k  \end{array}}

Notice here that the sum does not disappear because, due to the fact that the layers are fully connected, each of the hidden unit outputs affects the state of each output unit. Continuing on, noting that a_k = g_k(z_k)

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{ij}}&=& \sum_{k \in K} (a_k - t_k) \frac{\partial }{\partial w_{ij}}g_k(z_k) \\  &=& \sum_{k \in K} (a_k - t_k)g'_k(z_k)\frac{\partial }{\partial w_{ij}}z_k  \end{array}}

Equation (7)

Here, again, we use the Chain Rule. Ok, now here’s where things get “slightly more involved”. Notice that the partial derivative in the third term of Equation (7) is with respect to w_{ij}, but z_k is indexed by k, not by i or j. How the heck do we deal with that!? Well, if we expand z_k, we find that it is composed of other sub-functions (also see Figure 1):

\Large{\begin{array}{rcl} z_k &=& b_k + \sum_j a_jw_{jk} \\  &=& b_k + \sum_j g_j(z_j)w_{jk} \\  &=& b_k + \sum_j g_j(b_j + \sum_i a_i w_{ij})w_{jk}\end{array}}

Equation (8)

From the last term in Equation (8) we see that z_k is indirectly dependent on w_{ij}.  Equation (8) also suggests that we can use the Chain Rule to calculate \frac{\partial z_k }{\partial w_{ij}}. This is probably the trickiest part of the derivation, and goes like…

\Large{\begin{array}{rcl} \frac{\partial z_k }{\partial w_{ij}} &=& \frac{\partial z_k}{\partial a_j}\frac{\partial a_j}{\partial w_{ij}} \\  &=& \frac{\partial}{\partial a_j}a_jw_{jk}\frac{\partial a_j}{\partial w_{ij}} \\  &=& w_{jk}\frac{\partial a_j}{\partial w_{ij}} \\  &=& w_{jk}\frac{\partial g_j(z_j)}{\partial w_{ij}} \\  &=& w_{jk}g_j'(z_j)\frac{\partial z_j}{\partial w_{ij}} \\  &=& w_{jk}g_j'(z_j)\frac{\partial}{\partial w_{ij}}(b_j + \sum_i a_i w_{ij}) \\  &=& w_{jk}g_j'(z_j)a_i \end{array}}

Equation (9)

Now, plugging Equation (9) in for \frac{\partial z_k}{\partial w_{ij}} in Equation (7) gives the following for \frac{\partial E}{\partial w_{ij}}:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{ij}}&=& \sum_{k \in K} (a_k - t_k)g'_k(z_k)w_{jk} g'_j(z_j)a_i \\  &=& g'_j(z_j)a_i \sum_{k \in K} (a_k - t_k)g'_k(z_k)w_{jk} \\  &=& a_i g'_j(z_j) \sum_{k \in K} \delta_k w_{jk} \end{array}}

Equation (10)

Notice that the gradient for the hidden layer weights has a similar form to that of the gradient for the output layer weights. Namely, the gradient is some term weighted by the output activations from the layer below (a_i). For the output weight gradients, the term that was weighted by a_j was the back-propagated error signal \delta_k (i.e. Equation (5)). Here, the weighted term includes \delta_k, but the error signal is further projected onto w_{jk} and then weighted by the derivative of the hidden layer activation function g'_j. Thus, the gradient for the hidden layer weights is simply the output error signal backpropagated to the hidden layer, then weighted by the input to the hidden layer. To make this idea more explicit, we can define the resulting error signal backpropagated to layer j as \delta_j, which includes all terms in Equation (10) that involve index j. This definition results in the following gradient for the hidden unit weights:

\Large{\begin{array}{rcl} \frac{\partial E }{\partial w_{ij}}&=& a_i g'_j(z_j) \sum_{k \in K} \delta_k w_{jk} \\  &=& \delta_j a_i \\  \text{where} \\  \delta_j &=& g'_j(z_j) \sum_{k \in K} \delta_k w_{jk} \end{array}}

Equation (11)

This suggests that in order to calculate the weight gradients at any layer l in an arbitrarily-deep neural network, we simply need to calculate the backpropagated error signal that reaches that layer, \delta_l, and weight it by the feed-forward signal a_{l-1} feeding into that layer! Analogously, the gradient for the hidden layer weights can be interpreted as a proxy for the “contribution” of the weights to the output error signal, which can only be observed–from the point of view of the weights–by backpropagating the error signal to the hidden layer.

Hidden layer biases, \Large{b_{j}}

Calculating the gradients for the hidden layer biases follows a very similar procedure to that for the hidden layer weights where, as in Equation (9), we use the Chain Rule to calculate \frac{\partial z_k}{\partial b_j}. However, unlike Equation (9), the third term that results for the biases is slightly different:

\Large{\begin{array}{rcl} \frac{\partial z_k }{\partial b_j} &=& w_{jk}g_j'(z_j)\frac{\partial z_j}{\partial b_j} \\  &=& w_{jk}g_j'(z_j)\frac{\partial}{\partial b_j}(b_j + \sum_i a_i w_{ij}) \\  &=& w_{jk}g_j'(z_j)(1), \\  \text{giving} \\  \frac{\partial E }{\partial b_j}&=& g'_j(z_j) \sum_{k \in K} \delta_k w_{jk} \\  &=& \delta_j \end{array}}

Equation (12)

In a similar fashion to calculation of the bias gradients for the output layer, the gradients for the hidden layer biases are simply the backpropagated error signal reaching that layer. This suggests that we can also calculate the bias gradients at any layer l in an arbitrarily-deep network by simply calculating the backpropagated error signal reaching that layer \delta_l!
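
Continuing the same sketch from earlier, the backpropagated error signal \delta_j and the hidden-layer gradients of Equations (11) and (12) take only a few more lines (again, the variables come from the earlier snippets):

% HIDDEN-LAYER GRADIENTS (CONTINUES THE SKETCHES ABOVE)
deltaJ  = gPrime(zj).*(deltaK*Wjk');  % delta_j = g_j'(z_j)*sum_k delta_k*w_jk
dE_dWij = ai'*deltaJ;                 % Equation (11): dE/dw_ij = delta_j*a_i
dE_dbj  = deltaJ;                     % Equation (12): dE/db_j  = delta_j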

Wrapping up

In this post we went over some of the formal details of the backpropagation learning algorithm. The math covered in this post allows us to train arbitrarily deep neural networks by re-applying the same basic computations. Those computations are:

  1. Calculate the feed-forward signals from the input to the output.
  2. Calculate the output error E based on the predictions a_k and the targets t_k.
  3. Backpropagate the error signals by weighting them by the weights in the layers above and by the gradients of the associated activation functions.
  4. Calculate the gradients \frac{\partial E}{\partial \theta} for the parameters based on the backpropagated error signals and the feed-forward signals from the inputs.
  5. Update the parameters using the calculated gradients \theta \leftarrow \theta - \eta\frac{\partial E}{\partial \theta} (see the sketch below).
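
A minimal sketch of step 5, reusing the gradients computed in the snippets above (the value of the learning rate here is arbitrary):

% GRADIENT DESCENT UPDATES (CONTINUES THE SKETCHES ABOVE)
eta = 0.5;                 % learning rate (arbitrary value)
Wjk = Wjk - eta*dE_dWjk;   % output-layer weights
bk  = bk  - eta*dE_dbk;    % output-layer biases
Wij = Wij - eta*dE_dWij;   % hidden-layer weights
bj  = bj  - eta*dE_dbj;    % hidden-layer biases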

The only real constraint on model construction is ensuring that the error function E and the activation functions g_l are differentiable. For more details on implementing ANNs and seeing them at work, stay tuned for the next post.

Model Selection: Underfitting, Overfitting, and the Bias-Variance Tradeoff

In machine learning and pattern recognition, there are many ways (an infinite number, really) of solving any one problem. Thus it is important to have an objective criterion for assessing the accuracy of candidate approaches and for selecting the right model for a data set at hand. In this post we’ll discuss the concepts of under- and overfitting and how these phenomena are related to the statistical quantities bias and variance. Finally, we will discuss how these concepts can be applied to select a model that will accurately generalize to novel scenarios/data sets.

Models for Regression

When performing regression analyses we would like to characterize how the value of some dependent variable changes as some independent variable x is varied. For example, say we would like to characterize the firing rate of a neuron in visual cortex as we vary the orientation of a grating pattern presented to the eye. We assume that there is some true relationship function f(x) that maps the independent variable values (i.e. the angle of the grating pattern) onto the dependent variable values (i.e. firing rate). We would like to determine the form of the function f(x) from observations of independent-dependent value pairs (I may also refer to these as input-output pairs, as we can think of the function f(x) taking x as input and producing an output). However, in the real world, we don’t get to observe f(x) directly, but instead get noisy observations y, where

(1) y = f(x) + \epsilon

Here we will assume that \epsilon is a random variable distributed according to a zero-mean Gaussian with variance \sigma^2. Note that because \epsilon is a random variable, y is also a random variable (with a mean that is conditioned on both x and f(x), and a variance \sigma^2).

As an example, say that the true function f(x) we want to determine has the following form (though we don’t know it):

f(x) = \sin(\pi x)

Thus the observations y we get to see have the following distribution.

y = \sin(\pi x) + \mathcal N(0,\sigma^2)

Below we define the function f(x) and display it, then draw a few observation samples y, and display them as well:

% CREATE A RANDOM DATASET
rng(10)
nData = 10; % N
x = 2*(rand(1,nData)-.5);
xGrid = linspace(-1,1,100);

% DEFINE AND DISPLAY THE TARGET FUNCTION f(x)
f = @(x) sin(pi*x);

h = [];
h(1) = plot(xGrid,f(xGrid),'k','Linewidth',2);
xlabel('x')
hold on

% DEFINE AND DISPLAY y
noiseSTD = .1;
y = f(x) + noiseSTD*randn(size(x));
h(2) = scatter(x,y,'ko');
legend(h,{'f(x)','y'},'Location','Northwest');

Figure: the target function f(x) and the sampled observations y

Again, the goal here is to characterize the function f(x). However, since we don’t know the functional form of f(x), we must instead estimate some other function g(x) that we think will be a good approximation to f(x). Thus we call g(x) an estimator of f(x). In general, an estimator is some parameterized model that can capture a wide range of functional forms. One such class of estimators is the weighted combination of ordered polynomials:

g_N(x) = \theta_0 + \theta_1x + \theta_2x^2 + \dots \theta_N x^N

As the polynomial order N increases, the functions g_N(x) are able to capture increasingly complex behavior. For example, g_0(x) describes a horizontal line with an adjustable vertical offset, g_1(x) describes a line with adjustable vertical offset and adjustable slope, and g_2(x) describes a function that also includes a quadratic term. We thus try to fit the values of the parameters for a given estimator g_N(x) to best account for observed data in the hopes that we will also accurately approximate f(x).

Below we fit three polynomial model functions of increasing complexity (using Matlab’s polyfit.m) to the sampled data displayed above. Specifically, we estimate the functions g_1(x), g_3(x) and g_{10}(x).

% FIT POLYNOMIAL MODELS & DISPLAY
% (ASSUMING PREVIOUS PLOT ABOVE STILL AVAILABLE)
degree = [1,3,10];
theta = {};
cols = [.8 .05 0.05; 0.05 .6 0.05; 0.05 0.05 .6];
for iD = 1:numel(degree)
	figure(1)
	theta{iD} = polyfit(x,y,degree(iD));
	fit{iD} = polyval(theta{iD},xGrid);
	h(end+1) = plot(xGrid,fit{iD},'color',cols(iD,:),'Linewidth',2);
	xlim([-1 1])
	ylim([-1 1])
end
legend(h,'f(x)','y','g_1(x)','g_3(x)','g_{10}(x)','Location','Northwest')

Figure: polynomial fits g_1(x), g_3(x), and g_{10}(x) to the sampled data

Qualitatively, we see that the estimator g_1(x) (red line) provides a poor fit to the observed data, as well as a poor approximation to the function f(x) (black curve). We see that the estimator g_{10}(x) (blue curve) provides a very accurate fit to the data points, but varies wildly to do so, and therefore provides an inaccurate approximation of f(x). Finally, we see that the estimator g_3(x) (green curve) provides a fairly good fit to the observed data, and a much better job at approximating f(x).

Our original goal was to approximate f(x), not the data points per se. Therefore g_3(x), at least qualitatively, provides a more desirable estimate of f(x) than the other two estimators. The fits for g_1(x) and g_{10}(x) are examples of “underfitting” and “overfitting” to the observed data. Underfitting occurs when an estimator g(x) is not flexible enough to capture the underlying trends in the observed data. Overfitting occurs when an estimator is too flexible, allowing it to capture illusory trends in the data. These illusory trends are often the result of the noise in the observations y.

Variance and Bias of an Estimator

The model fits for g_N(x) discussed above were based on a single, randomly-sampled data set of observations y. However, there are, in principle, a potentially infinite number of data sets that could be observed (drawn from y). In order to determine a good model of f(x), it would be helpful to have an idea of how an estimator will perform on any or all of these potential datasets. To get an idea of how each of the estimators discussed above performs in general we can repeat the model fitting procedure for many data sets.

Here we perform such an analysis, sampling 50 independent data sets according to Equation (1) (with \sigma=0.1), then fitting the parameters for the polynomial functions of model order N = [1,3,10] to each dataset.

% FIT MODELS TO K INDEPENDENT DATASETS
K = 50;
for iS = 1:K
	ySim = f(x) + noiseSTD*randn(size(x));
	for jD = 1:numel(degree)
		% FIT THE MODEL USING polyfit.m
		thetaTmp = polyfit(x,ySim,degree(jD));
		% EVALUATE THE MODEL FIT USING polyval.m
		simFit{jD}(iS,:) = polyval(thetaTmp,xGrid);
	end
end

% DISPLAY ALL THE MODEL FITS
h = [];
for iD = 1:numel(degree)
	figure(iD+1)
	hold on
	% PLOT THE FUNCTION FIT TO EACH DATASET
	for iS = 1:K
		h(1) = plot(xGrid,simFit{iD}(iS,:),'color',brighten(cols(iD,:),.6));
	end
	% PLOT THE AVERAGE FUNCTION ACROSS ALL FITS
	h(2) = plot(xGrid,mean(simFit{iD}),'color',cols(iD,:),'Linewidth',5);
	% PLOT THE UNDERLYING FUNCTION f(x)
	h(3) = plot(xGrid,f(xGrid),'color','k','Linewidth',3);
	% CALCULATE THE SQUARED ERROR AT EACH POINT, AVERAGED ACROSS ALL DATASETS
	squaredError = (mean(simFit{iD})-f(xGrid)).^2;
	% PLOT THE SQUARED ERROR
	h(4) = plot(xGrid,squaredError,'k--','Linewidth',3);
	uistack(h(2),'top')
	hold off
	axis square
	xlim([-1 1])
	ylim([-1 1])
	legend(h,{sprintf('Individual g_{%d}(x)',degree(iD)),'Mean of All Fits','f(x)','Squared Error'},'Location','WestOutside')
	title(sprintf('Model Order=%d',degree(iD)))
end

Figure: simulation results for the estimator g_1(x)

Figure: simulation results for the estimator g_3(x)

Figure: simulation results for the estimator g_{10}(x)

The lightly-colored curves in each of the three plots above are an individual polynomial model fit to one of the 50 sampled data sets. The darkly-colored curve in each plot is the average over the 50 individual fits. The dark black curve is the true, underlying function f(x).

We see that for the estimator g_1(x) (red curves), model fits do not vary much from data set to data set. Thus the expected (mean) estimator fit over all the data sets, formally written as \mathbb E[g(x)], is very similar to all the individual fits.

A common formal definition used in statistics to describe how much a single estimator deviates from the average estimator over datasets is the variance of the estimator. Formally defined as

variance=\mathbb E[(g(x)-\mathbb E[g(x)])^2]

the variance is the average squared difference between any single data-set-dependent estimate of g(x) and the average value of g(x) estimated over all datasets.

According to the definition of variance, we can say that the estimator g_1(x) exhibits low variance.

A commonly-used metric in statistics for assessing how an estimator g(x) approximates a target function f(x), based on behavior over many observed data sets is what is called the bias of the estimator. Formally defined as:

bias = \mathbb E[g(x)] - f(x)

The bias describes how much the average estimator fit over datasets \mathbb E[g(x)] deviates from the value of the underlying target function f(x).

We can see from the plot for g_1(x) that \mathbb E[g_1(x)] deviates significantly from f(x). Thus we can say that the estimator g_1(x) exhibits large bias when approximating the function f(x).

Investigating the results for the estimator g_{10}(x) (blue curves), we see that each individual model fit varies dramatically from one data set to another. Thus we can say that this estimator exhibits large variance. Looking at \mathbb E[g_{10}(x)] (the dark blue curve), we see that on average the estimator g_{10}(x) provides a better approximation to f(x) than the estimator g_1(x). Thus we can say that g_{10}(x) exhibits a lower bias than the estimator g_1(x).

Investigating the fits for the estimator g_3(x) (green curves), we find that this estimator has low variance. Furthermore, the average estimator \mathbb E[g_3(x)] (dark green curve) accurately approximates the true function f(x), telling us that the estimator g_3(x) also has low bias.

We established earlier that the estimator g_3(x) provided a qualitatively better fit to the function f(x) than the other two polynomial estimators for a single dataset. It appears that this is also the case over many datasets. We also find that estimator g_3(x) exhibits low bias and low variance, whereas the other two, less-desirable estimators, have either high bias or high variance. Thus it would appear that having both low bias and low variance is a reasonable criterion for selecting an accurate model of f(x).
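
As a rough numerical check of these qualitative observations, we can estimate the squared bias and the variance of each estimator directly from the simFit arrays generated in the simulation above (a quick sketch, assuming those variables are still in the workspace):

% ESTIMATE SQUARED BIAS AND VARIANCE OF EACH ESTIMATOR, AVERAGED OVER x
for iD = 1:numel(degree)
	meanFit = mean(simFit{iD});                   % E[g(x)] across the K datasets
	biasSq  = mean((meanFit - f(xGrid)).^2);      % average squared bias
	varEst  = mean(var(simFit{iD},1));            % average variance across datasets
	fprintf('g_%d(x): bias^2 = %.4f, variance = %.4f\n',degree(iD),biasSq,varEst);
end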

Included in each of the three plots above is a dashed black line representing the squared difference between the average estimator \mathbb E[g_N(x)] and the true function f(x). Calculating squared model errors is a common practice for quantifying the goodness of a model fit. If we calculate the expected value of each of the dashed black curves (and assuming that all values of x are equally likely to occur), we would obtain a single value for each estimator that is the mean squared error (MSE) between the expected estimator and the true function.

\mathbb E[(\mathbb E[g(x)]-f(x))^2] = \frac{1}{N}\sum_{i=1}^N (\mathbb E[g(x)]-f(x))^2

For the estimator g_3(x), the MSE will be very small, as the dashed black curve for this estimator is near zero for all values of x. The estimators g_1(x) and g_{10}(x) would have significantly larger MSE values. Now, because exhibiting a low MSE, as well as having both low bias and low variance, is indicative of a good estimator, it would be reasonable to assume that squared model error is directly related to bias and variance. The next section provides some formal evidence for this notion.

Expected Prediction Error and the Bias-variance Tradeoff

For a given estimator g(x) fit to a data set of xy pairs, we would like to know, given all the possible datasets out there, what is the expected prediction error we will observe for a new data point x^*, y^* = f(x^*) + \epsilon. If we define prediction error to be the squared difference in model prediction g(x^*) and observations y^*, the expected prediction error is then:

\mathbb E[(g(x^*) - y^*)^2]

If we expand this a little and use a few identities, something interesting happens:

(2) \mathbb E[(g(x^*) - y^*)^2] = \mathbb E[g(x^*)^2-2g(x^*)y^*+y^{*2}]=

(3) \mathbb E[g(x^*)^2]-2\mathbb E[g(x^*)y^*]+\mathbb E[y^{*2}]=

(4) \mathbb E[(g(x^*)-\mathbb E[g(x^*)])^2] + \mathbb E[g(x^*)]^2-2 \mathbb E[g(x^*)]f(x^*) + \mathbb E[(y^*-f(x^*))^2] + f(x^*)^2

where we have applied the identity \mathbb E[a^2] = \mathbb E[(a - \mathbb E[a])^2] + \mathbb E[a]^2 to the first and third terms of Equation (3), and used the fact that \mathbb E[y^*] = f(x^*) (think of averaging over an infinite number of datasets sampled from y; all the noise averages out, leaving f(x)). Rearranging Equation (4), we obtain

(5) \mathbb E[(g(x^*)-\mathbb E[g(x^*)])^2] + \mathbb E[g(x^*)]^2 - 2 \mathbb E[g(x^*)]f(x^*) + f(x^*)^2 + \mathbb E[(y^*-f(x^*))^2]

which can be further simplified and grouped into three terms

(6) \mathbb E[(g(x^*) - \mathbb E[g(x^*)])^2] + (\mathbb E[g(x^*)]-f(x^*))^2 + \mathbb E[(y^*-f(x^*))^2]

  1. The first term is the variance of the estimator introduced above.
  2. The second term is the square of the bias of the estimator, also introduced above.
  3. The third term is the variance of the observation noise and describes how much the observations y vary from the true function f(x). Notice that the noise term does not depend on the estimator g(x). Thus the noise term is a constant that places a lower bound on expected prediction error.

Here we find that the expected prediction error on new data (x^*,y^*) (in the squared differences sense) is the combination of three terms:

Expected prediction error = estimator variance + squared estimator bias + noise

Thus the expected prediction error on new data can be used as a quantitative criterion for selecting the best model from a candidate set of estimators! It turns out that, given N new data points (\bold x^*,\bold y^*), the expected prediction error can be easily approximated as the mean squared error over data pairs:

\mathbb E[(g(\bold x^*) - \bold y^*)^2] \approx \frac{1}{N}\sum_{i=1}^N(g(x_i^*)-y_i^*)^2

Below we demonstrate these findings with another set of simulations. We simulate 100 independent datasets, each with 25 xy pairs. We then partition each dataset into two non-overlapping sets: a training set used for fitting model parameters, and a testing set used to calculate prediction error. We then fit the parameters for estimators of varying complexity. Complexity is varied by using polynomial functions that range in model order from 1 (least complex) to 12 (most complex). We then calculate and display the squared bias, variance, and the error on the testing set for each of the estimators:

N = 25; % # OF OBSERVATIONS PER DATASET
K = 100;% # OF DATASETS
noiseSTD = .5; % NOISE STANDARD DEV.
nTrain = ceil(N*.9); % # OF TRAINING POINTS
nPolyMax = 12; % MAXIMUM MODEL COMPLEXITY

% INITIALIZE SOME VARIABLES
xGrid = linspace(-1,1,N);
meanPrediction = zeros(K,N);
thetaHat = {};
x = linspace(-1,1,N);
x = x(randperm(N));
for iS = 1:K % LOOP OVER DATASETS
	% CREATE OBSERVED DATA, y
	y = f(x) + noiseSTD*randn(size(x));

	% CREATE TRAINING SET
	xTrain = x(1:nTrain);
	yTrain = y(1:nTrain);

	% CREATE TESTING SET
	xTest = x(nTrain+1:end);
	yTest = y(nTrain+1:end);

	% FIT MODELS
	for jD = 1:nPolyMax

		% MODEL PARAMETER ESTIMATES
		thetaHat{jD}(iS,:) = polyfit(xTrain,yTrain,jD);

		% PREDICTIONS
		yHatTrain{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTrain); % TRAINING SET
		yHatTest{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTest); % TESTING SET

		% MEAN SQUARED ERROR
		trainErrors{jD}(iS) = mean((yHatTrain{jD}(iS,:) - yTrain).^2); % TRAINING
		testErrors{jD}(iS) = mean((yHatTest{jD}(iS,:) - yTest).^2); % TESTING
	end
end

% CALCULATE AVERAGE PREDICTION ERROR, BIAS, AND VARIANCE
for iD = 1:nPolyMax
	trainError(iD) = mean(trainErrors{iD});
	testError(iD) = mean(testErrors{iD});
	biasSquared(iD) = mean((mean(yHatTest{iD})-f(xTest)).^2);
	variance(iD) = mean(var(yHatTest{iD},1));
end
[~,bestModel] = min(testError);

% DISPLAY
figure;
hold on;
plot(testError,'k','Linewidth',2);
plot(biasSquared,'r','Linewidth',2);
plot(variance,'b','Linewidth',2);
plot(biasSquared + variance,'m-.','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Test Error','Bias^2','Variance','Bias^2+Var.','Best Model')
hold off;

Figure: test error, squared bias, variance, and their sum as a function of model complexity

Here we see how, as the model complexity increases, the estimator variance (blue curve) gradually increases. Additionally, as model complexity increases, the squared bias (red curve) decreases. Thus there is a tradeoff between bias and variance that comes with model complexity: models that are too complex will have high variance and low bias; models that are too simple will have high bias and low variance. The best model will have both low bias and low variance. In this example, we highlight the best estimator in terms of prediction error on the testing set (black curve) with a dashed black vertical line. The best estimator corresponds to a polynomial model of order N=3. Notice that the vertical black line is located where the function defined by the sum of the squared bias and variance (dashed magenta curve) is also at a minimum.

Notice also how the sum of the squared bias and variance has the same shape as the curve defined by the prediction error on the testing set. This exemplifies how the error on novel data can be used as a proxy for determining the best estimator from a candidate set based on squared bias and variance. The noise term in Equation (6) is also represented in the figure by the vertical displacement between the black curve and the dashed magenta curve.

It is very important to point out that all of these results are based on evaluating prediction error on novel data, not on the data used to estimate the model parameters. In fact, assessing model performance based on prediction error calculated on the same data used to estimate the model parameters is highly problematic, as it will always favor overfit models. In plain terms, this means that we will always favor a more complex model if we assess goodness of model fits on the training data, as a more complex model will be better able to capture small, random trends in the data due to noise.

This overfitting phenomenon is demonstrated below. We plot the error calculated on the training set (Train Error, green curve) along with the error calculated on the testing set (Test Error, black curve) for the above simulation. We also identify the best estimator as we did above.

% DISPLAY
figure, hold on;
plot(trainError,'g','Linewidth',2);
plot(testError,'k','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Train Error','Test Error','Best Model');
hold off;

Figure: training and testing error as a function of model complexity

We see here that as model complexity increases, the error calculated on the training set continues to decrease, whereas the error on the testing set increases past the optimal polynomial order N=3. We showed above that the error calculated on the testing set is the true indicator of how well an estimator will generalize to new data points. The error calculated on the training set strongly disagrees with the error calculated on the testing set after the optimal model complexity has been reached. Since, in general, the whole point of modeling a data set is to generalize to novel data, assessing model predictions on the training set data should be avoided.

Wrapping Up

Here we discussed how the bias and variance of an estimator are related to squared prediction error. Though we focused on regression, these concepts can also be applied to classification problems. We found that an optimal estimator will have both low variance and low bias. We further found that information about squared bias and variance is contained in expected prediction error calculated on a testing set of data not used to fit a model’s parameters.

The concepts of estimator bias and variance are generally only clear in the context of an ensemble of datasets. However, in real-world applications, there is generally only a single observed dataset. In such cases the roles of bias and variance are less obvious (though it is possible to calculate estimates of variance and bias using resampling methods such as bootstrapping). However, the direct connection we made between bias and variance and the mean-squared error calculated on a testing set gives us a direct means for assessing a group of candidate estimators in light of a single data set. We only need to partition the available data set into separate portions: one used to fit model parameters (a training set), and another used to assess prediction accuracy (a testing set). Comparing prediction accuracy across potential estimators is equivalent to assessing the biases and variances of the estimators across many datasets. Note that resampling methods such as cross-validation can prove helpful here, particularly when the amount of observed data is small.

Note: all the code for this post, contained in a single script, is here:

clear
clc
close all

% CREATE A RANDOM DATASET
rng(10)
nData = 10; % N
x = 2*(rand(1,nData)-.5);

xGrid = linspace(-1,1,100);

% DEFINE AND DISPLAY THE TARGET FUNCTION f(x)
f = @(x) sin(pi*x);

h = [];
h(1) = plot(xGrid,f(xGrid),'k','Linewidth',2);
xlabel('x')
hold on

% DEFINE AND DISPLAY y
noiseSTD = .1;
y = f(x) + noiseSTD*randn(size(x));
h(2) = scatter(x,y,'ko');
legend(h,{'f(x)','y'},'Location','Northwest');

% FIT POLYNOMIAL MODELS & DISPLAY
% (ASSUMING PREVIOUS PLOT ABOVE STILL AVAILABLE)
degree = [1,3,10];
theta = {};
cols = [.8 .05 0.05; 0.05 .6 0.05; 0.05 0.05 .6];
for iD = 1:numel(degree)
	figure(1)
	theta{iD} = polyfit(x,y,degree(iD));
	fit{iD} = polyval(theta{iD},xGrid);
	h(end+1) = plot(xGrid,fit{iD},'color',cols(iD,:),'Linewidth',2);
	xlim([-1 1])
	ylim([-1 1])
end
legend(h,'f(x)','y','g_1(x)','g_3(x)','g_{10}(x)','Location','Northwest')

% FIT MODELS TO K INDEPENDENT DATASETS
K = 50;
for iS = 1:K
	ySim = f(x) + noiseSTD*randn(size(x));
	for jD = 1:numel(degree)
		% FIT THE MODEL USING polyfit.m
		thetaTmp = polyfit(x,ySim,degree(jD));
		% EVALUATE THE MODEL FIT USING polyval.m
		simFit{jD}(iS,:) = polyval(thetaTmp,xGrid);
	end
end

% DISPLAY ALL THE MODEL FITS
h = [];
for iD = 1:numel(degree)
	figure(iD+1)
	hold on
	% PLOT THE FUNCTION FIT TO EACH DATASET
	for iS = 1:K
		h(1) = plot(xGrid,simFit{iD}(iS,:),'color',brighten(cols(iD,:),.6));
	end
	% PLOT THE AVERAGE FUNCTION ACROSS ALL FITS
	h(2) = plot(xGrid,mean(simFit{iD}),'color',cols(iD,:),'Linewidth',5);
	% PLOT THE UNDERLYING FUNCTION f(x)
	h(3) = plot(xGrid,f(xGrid),'color','k','Linewidth',3);
	% CALCULATE THE SQUARED ERROR AT EACH POINT, AVERAGED ACROSS ALL DATASETS
	squaredError = (mean(simFit{iD})-f(xGrid)).^2;
	% PLOT THE SQUARED ERROR
	h(4) = plot(xGrid,squaredError,'k--','Linewidth',3);
	uistack(h(2),'top')
	hold off
	axis square
	xlim([-1 1])
	ylim([-1 1])
	legend(h,{sprintf('Individual g_{%d}(x)',degree(iD)),'Mean of All Fits','f(x)','Squared Error'},'Location','WestOutside')
	title(sprintf('Model Order=%d',degree(iD)))
end

N = 25; % # OF OBSERVATIONS PER DATASET
K = 100;% # OF DATASETS
noiseSTD = .5; % NOISE STANDARD DEV.
nTrain = ceil(N*.9); % # OF TRAINING POINTS
nPolyMax = 12; % MAXIMUM MODEL COMPLEXITY

% INITIALIZE SOME VARIABLES
xGrid = linspace(-1,1,N);
meanPrediction = zeros(K,N);
thetaHat = {};
x = linspace(-1,1,N);
x = x(randperm(N));
for iS = 1:K % LOOP OVER DATASETS

	% CREATE OBSERVED DATA, y
	y = f(x) + noiseSTD*randn(size(x));

	% CREATE TRAINING SET
	xTrain = x(1:nTrain);
	yTrain = y(1:nTrain);

	% CREATE TESTING SET
	xTest = x(nTrain+1:end);
	yTest = y(nTrain+1:end);

	% FIT MODELS
	for jD = 1:nPolyMax

		% MODEL PARAMETER ESTIMATES
		thetaHat{jD}(iS,:) = polyfit(xTrain,yTrain,jD);

		% PREDICTIONS
		yHatTrain{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTrain); % TRAINING SET
		yHatTest{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTest);% TESTING SET

		% MEAN SQUARED ERROR
		trainErrors{jD}(iS) = mean((yHatTrain{jD}(iS,:) - yTrain).^2); % TRAINING
		testErrors{jD}(iS) = mean((yHatTest{jD}(iS,:) - yTest).^2); % TESTING
	end
end

% CALCULATE AVERAGE PREDICTION ERROR, BIAS, AND VARIANCE
for iD = 1:nPolyMax
	trainError(iD) = mean(trainErrors{iD});
	testError(iD) = mean(testErrors{iD});
	biasSquared(iD) = mean((mean(yHatTest{iD})-f(xTest)).^2);
	variance(iD) = mean(var(yHatTest{iD},1));
end
[~,bestModel] = min(testError);

% DISPLAY
figure;
hold on;
plot(testError,'k','Linewidth',2);
plot(biasSquared,'r','Linewidth',2);
plot(variance,'b','Linewidth',2);
plot(biasSquared + variance,'m-.','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Test Error','Bias^2','Variance','Bias^2+Var.','Best Model')
hold off;

% DISPLAY
figure, hold on;
plot(trainError,'g','Linewidth',2);
plot(testError,'k','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Train Error','Test Error','Best Model');
hold off;

Derivation: Ordinary Least Squares Solution and Normal Equations

In a linear regression framework, we assume some output variable y is a linear combination of some independent input variables X plus some independent noise \epsilon. The way the independent variables are combined is defined by a parameter vector \beta:

\Large{\begin{array}{rcl} y &=& X \beta + \epsilon \end{array}}

We also assume that the noise term \epsilon is drawn from a standard Normal distribution:

\Large{ \begin{array}{rcl}\epsilon &\sim& N(0,I)\end{array}}

For some estimate of the model parameters \hat \beta, the model’s prediction errors/residuals e are the difference between the model prediction and the observed output values

\Large{\begin{array}{rcl} e = y - X\hat \beta \end{array}}

The Ordinary Least Squares (OLS) solution to the problem (i.e. determining an optimal solution for \hat \beta) involves minimizing the sum of the squared errors with respect to the model parameters, \hat \beta. The sum of squared errors is equal to the inner product of the residuals vector with itself \sum e_i^2 = e^Te :

\Large{\begin{array}{rcl} e^T e &=& (y - X \hat \beta)^T (y - X \hat \beta) \\  &=& y^Ty - y^T (X \hat \beta) - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - (X \hat \beta)^T y - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - 2(X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - 2\hat \beta^T X^T y + \hat \beta^T X^T X \hat \beta \\  \end{array}}

To determine the parameters \hat \beta, we take the derivative of the sum of squared residuals with respect to the parameters and set it equal to zero:

\Large{\begin{array}{rcl} \frac{\partial}{\partial \hat \beta} \left[ e^T e \right] &=& -2X^Ty + 2X^TX \hat \beta = 0 \text{, and thus} \\  X^Ty &=& X^TX \hat \beta  \end{array}}

due to the identities \frac{\partial \mathbf{a}^T \mathbf{b}}{\partial \mathbf{a}} = \mathbf{b} for vectors \mathbf{a} and \mathbf{b}, and \frac{\partial \mathbf{a}^T M \mathbf{a}}{\partial \mathbf{a}} = 2M\mathbf{a} for a symmetric matrix M. This relationship is the matrix form of the Normal Equations. Solving for \hat \beta gives the analytical solution to the Ordinary Least Squares problem:

\Large{\begin{array}{rcl} \hat \beta &=& (X^TX)^{-1}X^Ty \end{array}}

Boom.
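
As a quick numerical sketch (with made-up data), we can check the Normal Equations solution against MATLAB's built-in least-squares solve:

% NUMERICAL CHECK OF THE NORMAL EQUATIONS (MADE-UP DATA)
rng(0);
n = 100;                        % number of observations
X = [ones(n,1), randn(n,2)];    % design matrix with an intercept column
betaTrue = [1; 2; -3];          % hypothetical true parameters
y = X*betaTrue + randn(n,1);    % noisy observations
betaHat   = (X'*X)\(X'*y);      % solution to the Normal Equations
betaCheck = X\y;                % MATLAB's least-squares solve (should match)
disp([betaHat, betaCheck]);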

Cutting Your Losses: Loss Functions & the Sum of Squares Loss

Oftentimes, particularly in a regression framework, we are given a set of inputs (independent variables) \bold{x} and a set of outputs (dependent variables) \bold{y}, and we want to devise a model function

f(\bold{x})=\bold{y}

that predicts the outputs given some inputs as best as possible. But what does it mean for a model to predict “as best as possible” exactly? In order to make the notion of how good a model is explicit, it is common to adopt a loss function

J(f(\bold{x});\bold{y})

The loss function is some function of the model’s prediction errors (a.k.a. residuals) \bold{e} = \bold{y} - f(\bold{x}) at predicting the outputs \bold y given the inputs \bold x (the loss function is also often referred to as the cost function, as it makes explicit the “cost” of incorrect prediction). “Good” models of a dataset will have small prediction errors, and therefore produce small loss function values. Determining the “best” model is equivalent to finding the model function that minimizes the loss function. A common choice for this loss function is the sum of squared errors (SSE) loss. If there are M input-output pairs, the SSE loss function is formally:

J(f(\bold{x});\bold{y}) =\sum_{i=1}^M (y_i - f(x_i))^2

This formula states that, for each output predicted by the model, we determine how far the prediction is from the actual value y_i (i.e. by subtraction). Each of the M individual distances is then squared, and these are added up to give a single number indicating how well (or how badly) the model function captures the structure of the data across all the datapoints. The “best” model under this loss is called the least sum of squares (LSS) solution.
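
As a concrete example, here is what the SSE loss looks like in MATLAB for a hypothetical candidate model f(x) = 2x + 1 and a handful of made-up input-output pairs:

% SSE LOSS FOR A HYPOTHETICAL MODEL ON MADE-UP DATA
x = [0 1 2 3 4];               % inputs
y = [1.2 2.8 5.1 6.9 9.2];     % observed outputs
f = @(x) 2*x + 1;              % candidate model function f(x)
e = y - f(x);                  % prediction errors / residuals
SSE = sum(e.^2);               % sum of squared errors loss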

But why square the errors before summing them? At first, this seems somewhat unintuitive (or even ad hoc!). Surely there are other, more straight-forward loss functions we can devise. An initial notion of just adding up the errors leads to a dead end, because adding many positive and negative errors (i.e. resulting from data located below and above the model function) just cancels them out; we want our measure of errors to be all positive (or all negative). Therefore, another idea would be to just take the absolute value of the errors |\bold{e}| before summing. It turns out this is a known loss function, called the sum of absolute errors (SAE) or sum of absolute deviations (SAD) loss function. Finding the “best” SAE/SAD model is called the least absolute error (LAE) or least absolute deviation (LAD) solution, and such a solution was actually proposed decades before LSS. Though LAE is indeed used in contemporary methods (we’ll talk more about LAE later), the sum of squares loss function is far more popular in practice. Why does sum of squares always make the cut?

Useful Interpretations of the Sum of Squares Loss for Linear Regression

Areas of squares

Figure 1 demonstrates a set of 2D data (blue dots) and the LSS linear function (black line) of the form

\bold{\hat y} = f(\bold x) = \beta_0 + \beta_1 \bold x,

where the parameters \beta_0 (the offset of the line from y = 0) and \beta_1 (the slope) have been estimated (LSS) to “best” fit the data.

Figure 1 – Linear model fit to some data points

One helpful interpretation of the SSE loss function is demonstrated in Figure 2. Each red square is a literal interpretation of the squared error for the linear function fit in Figure 1. Here we see that no matter if the errors occur from predictions being greater than or less than the actual output values, the error term is always positive.

Figure 2 — Least squares loss function represented as areas

In this interpretation, the goal of finding the LSS solution is to find the line that results in the smallest red area. This interpretation is also useful for understanding the important regression metric known as the coefficient of determination R^2, which is an indicator of how well a linear model function explains or predicts a dataset. Imagine that, instead of the line fit in Figures 1-2, we fit a simpler model that has no slope parameter, only a bias/offset parameter (Figure 3). In this case the simpler model only captures the mean value of the data along the y-dimension.

Figure 3 — Least squares loss for a linear function that only captures the mean of the dependent variable

The squared error in this model corresponds to the (unscaled) variance of the data. Let’s denote the total area of the green squares in Figure 3 as the total sum of squares (TSS) error, and the area of the red squares in Figure 2 as the residual sum of squares (RSS) of the linear model fit. The metric R^2 is related to the red and green areas as follows

R^2 = 1 - \frac{RSS}{TSS}

If the linear model is doing a good job of fitting the data, then the variance of the model errors/residuals (the RSS term) will be small compared to the variance of the dataset (the TSS term), and the R^2 metric will be close to one. If the model is doing a poor job of fitting the data, then the variance of the residuals will approach that of the data itself, and the metric will be close to zero. (Note, too, that R^2 can take negative values, in the case when the RSS is larger than the TSS, indicating a very poor model.)
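
The R^2 calculation is easy to sketch in MATLAB; the data and the use of polyfit below are made up for illustration, but the RSS and TSS terms correspond directly to the red and green areas described above:

% R^2 FROM RSS AND TSS (MADE-UP DATA)
rng(0);
x = -5:5;
y = x + .5*randn(size(x));     % noisy, roughly linear data
beta = polyfit(x,y,1);         % fit a line (slope and intercept)
yHat = polyval(beta,x);        % model predictions
RSS = sum((y - yHat).^2);      % residual sum of squares (red area)
TSS = sum((y - mean(y)).^2);   % total sum of squares (green area)
R2  = 1 - RSS/TSS;             % coefficient of determination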

A bar suspended by springs

We can gain some useful insight into the least squares loss by developing these concepts within the framework of a physical system (Figure 4). In this formulation, a set of springs (red dashed lines, our errors e) attach a bar (solid black line, our linear function f(\bold{x})) to a set of anchors (blue datapoints, our outputs \bold{y}). Note that in this formulation, the springs are constrained to operate only along the vertical direction (the y-dimension). This constraint is equivalent to saying that there is only error in our measurement of the dependent variables, and is often an assumption made in regression frameworks.

Figure 4 — Least squares interpreted in terms of a physical system of a bar suspended by springs

From Hooke’s Law, the force created by each spring on the bar is proportional to the distance (error) from the bar (linear function) to its corresponding anchor (datapoint):

F_i = -ke_i

Further, there is a potential energy U_i associated with each spring (datapoint). The total potential energy for the entire system is as follows:

\sum_i U_i = \sum_i \int_0^{e_i} k e \, de \\ = \sum_i \frac{1}{2} k e_i^2 \\ = \sum_i (y_i - f(x_i))^2

(assuming a spring constant of k=2). This demonstrates that the equilibrium state of this system (i.e. the arrangement of the bar that minimizes the potential energy of the system) is analogous to the state that minimizes the sum of the squared error (distance) between the bar (line function) and the anchors (datapoints).
The physical interpretation given above can also be used to derive how linear regression solutions are related to the variance of the independent variables \bold x and the covariance between \bold x and \bold y. When the bar is in the equilibrium position (optimal solution), the net force exerted on the bar is zero. Because \bold{\hat y} = \beta_0 + \beta_1 \bold x, this first zero-net-force condition is formally described as:

\sum_i^M (y_i - \beta_0 - \beta_1x_i) = 0

The second condition that is fulfilled at equilibrium is that there are no torquing forces on the bar (i.e. the bar is not rotating about an axis). Because the torque created about an axis is force times the distance from that axis (here, the moments are taken about x = 0), this second zero-net-torque condition is formally described by:

\sum_i^M x_i(y_i - \beta_0 - \beta_1x_i) = 0

From the equation corresponding to the first zero-net-force condition, we can solve for the bias parameter \beta_0 of the linear function that describes the orientation of the bar:

\beta_0=\frac{1}{M}\sum_i (y_i - \beta_1 x_i)

\beta_0= \bar y - \beta_1 \bar x

Here the \bar \cdot (pronounced “bar”) means the average value. Plugging this expression into the second zero-net-torque condition equation, we discover that the slope of the line has an interesting interpretation related to the variances of the data:

\sum_i x_i(y_i - \beta_0 - \beta_1x_i) = 0

\sum_i x_i(y_i - (\bar y - \beta_1 \bar x) - \beta_1x_i) = 0

\sum_i x_i(y_i - \bar y) = \beta_1 \sum_i x_i(x_i - \bar x)

\sum_i (x_i - \bar x)(y_i - \bar y) = \beta_1 \sum_i (x_i -\bar x)^2

\beta_1 = \frac{\sum_i (x_i - \bar x)(y_i - \bar y)}{\sum_i (x_i -\bar x)^2} = \frac{\text{cov}(x,y)}{\text{var}(x)}

The expressions for the parameters \beta_0 and \beta_1 tell us that, under the least squares linear regression framework, the average of the dependent variable is equal to a scaled version of the average of the independent variable plus an offset:

\bar y = \beta_0 + \beta_1 \bar x

Further, the scaling factor (the slope) is equal to the ratio of the covariance between the dependent and independent variables to the variance of the independent variable. Therefore if x and y are positively correlated, the slope will be positive, if they are negatively correlated, the slope will be negative.
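
These closed-form expressions are easy to check numerically; the sketch below uses made-up data and compares the result against polyfit:

% CHECK slope = cov(x,y)/var(x) AND intercept = ybar - slope*xbar (MADE-UP DATA)
rng(0);
x = randn(1,50);
y = 2*x + 1 + .3*randn(size(x));     % noisy linear data
slope = sum((x - mean(x)).*(y - mean(y)))/sum((x - mean(x)).^2);
intercept = mean(y) - slope*mean(x);
betaPolyfit = polyfit(x,y,1);        % polyfit returns [slope, intercept] for a line
disp([slope, intercept; betaPolyfit]);  % the two rows should agree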

Because of these relationships, the LSS solution has a number of useful properties:

  1. The sum of the residuals under the LSS solution is zero (this is equivalent to the first zero-net-force condition above).
  2. Because of 1., the average residual of the LSS solution is zero
  3. The covariance between the independent variables and the residuals is zero.
  4. The LSS solution always passes through the mean (center of mass) of the sample.
  5. The LSS solution minimizes the variance of the residuals/model errors.

Therefore, the least squares loss function directly relates model residuals to how the independent and dependent variables co-vary. These relationships are not available with other loss functions such as the least absolute deviation.

What’s interesting is that the two physical constraint equations derived from the physical system above are also obtained through other analytic treatments of linear regression, including defining the LSS problem using maximum likelihood estimation or the method of moments.

Wrapping up

There are a number of other reasons, some more suggestive than rigorous, as to why squared errors are often preferred to other rectifying functions of the errors (i.e. functions that make all errors positive or zero):

  1. The Least Squares solution can be derived in closed form, allowing simple analytic implementations and fast computation of model parameters.
  2. Unlike the LAE loss, the SSE loss is differentiable (i.e. is smooth) everywhere, which allows model parameters to be estimated using straight-forward, gradient-based optimizations
  3. Squared errors have deep ties in statistics and maximum likelihood estimation methods (as mentioned above), particularly when the errors are distributed according to the Golden Boy of statistical distributions, the Normal distribution.
  4. There are a number of geometric and linear algebra theorems that support using least squares. For instance, the Gauss-Markov theorem states that if the errors of a linear model have zero mean, constant variance, and are uncorrelated, then the LSS solution gives the best linear unbiased estimator for the parameters \bold \beta.
  5. Squared functions have a long history of  facilitating calculus calculations used throughout the physical sciences.

The SSE loss does have a number of drawbacks as well. For instance, because each error is squared, any outliers in the dataset can dominate the parameter estimation process. For this reason, the LSS loss is said to lack robustness. Therefore preprocessing of the dataset (i.e. removing or thresholding outlier values) may be necessary when using the LSS loss.

MATLAB Code

The code to produce the figures in this post is below.

close all; clear
randn('state',12345);
x = -5:5; % INPUTS
y = x + .5*randn(size(x)); % OUTPUTS

% PART 1 - LINEAR MODEL

% PLOT DATA
figure(1);
h1 = scatter(x,y,'filled');
xlim([min(x)-1 max(x)+1]);
ylim([min(y)-1 max(y)+1]);
axis square
xlabel('x');
ylabel('y');

fprintf('\nHere are a set of 2D data pairs...');
pause
clc

% FIND LSS SOLUTION
bias = ones(size(x));
beta=regress(y',[bias;x]');
intercept = beta(1);
slope = beta(2);
yHat = x*slope + intercept;

% PLOT MODEL FIT
hold on;
h2 = plot(x,yHat,'k-','Linewidth',2);

fprintf('\n...and the LSS linear function determined for the points.\n');
pause
clc

% CALCULATE PREDICTION ERROR
e = y - yHat;
posErrors = find(e>=0);
negErrors = setdiff(1:numel(x),posErrors);

% PLOT "SQUARED" ERRORS
cnt = 1;
for iP = 1:numel(posErrors);
 xs = [x(posErrors(iP))-e(posErrors(iP)), ...
 x(posErrors(iP)), ...
 x(posErrors(iP)), ...
 x(posErrors(iP))-e(posErrors(iP))];

 ys = [y(posErrors(iP))-e(posErrors(iP)), ...
 y(posErrors(iP))-e(posErrors(iP)), ...
 y(posErrors(iP)), ...
 y(posErrors(iP))];

 hS(cnt)=patch(xs,ys,'r');
 set(hS(cnt),'EdgeColor','r');
 set(hS(cnt),'FaceAlpha',.5);
 cnt = cnt+1;
end
for iN = 1:numel(negErrors);
 xs = [x(negErrors(iN))-e(negErrors(iN)), ...
 x(negErrors(iN)), ...
 x(negErrors(iN)), ...
 x(negErrors(iN))-e(negErrors(iN))];

 ys = [y(negErrors(iN)), ...
 y(negErrors(iN)), ...
 y(negErrors(iN))-e(negErrors(iN)), ...
 y(negErrors(iN))-e(negErrors(iN))];

 hS(cnt)= patch(xs,ys,'r');
 set(hS(cnt),'EdgeColor','r');
 set(hS(cnt),'FaceAlpha',.5);
 cnt = cnt+1;
end

uistack(h2,'top');
uistack(h1,'top');

fprintf('\nOne helpful interpretation is to represent the')
fprintf('\nsquared errors literally as the area spanned')
fprintf('\nin the space (red squares).\n')
fprintf('\nFinding the LSS solution is equivalent to minimizing')
fprintf('\nthe sum of the area of these squares.\n')
pause
clc

% PART 2 - TRIVIAL LINEAR MODEL
figure(2);
h1 = scatter(x,y,'filled');
xlim([min(x)-1 max(x)+1]);
ylim([min(y)-1 max(y)+1]);
axis square
xlabel('x');
ylabel('y');

yHat0 = mean(y).*ones(size(x));
e0 = y - yHat0;
posErrors0 = find(e0>=0);
negErrors0 = setdiff(1:numel(x),posErrors0);

% PLOT TRIVIAL MODEL FIT
hold on;
h2 = plot(x,yHat0,'k-','Linewidth',2);

fprintf('\nNow, imagine that we fit a simpler model to the datapoints')
fprintf('\nthat is a line with no slope parameter and an offset parameter.')
fprintf('\nIn this case we''re essentially fitting the mean of the data.')
pause
clc

% PLOT TRIVIAL MODEL "SQUARED" ERRORS
cnt = 1;
for iP = 1:numel(posErrors0);
 xs = [x(posErrors0(iP))-e0(posErrors0(iP)), ...
 x(posErrors0(iP)), ...
 x(posErrors0(iP)), ...
 x(posErrors0(iP))-e0(posErrors0(iP))];

 ys = [y(posErrors0(iP))-e0(posErrors0(iP)), ...
 y(posErrors0(iP))-e0(posErrors0(iP)), ...
 y(posErrors0(iP)), ...
 y(posErrors0(iP))];

 hS(cnt)=patch(xs,ys,'g');
 set(hS(cnt),'EdgeColor','g');
 set(hS(cnt),'FaceAlpha',.5);
 cnt = cnt+1;
end
for iN = 1:numel(negErrors0);
 xs = [x(negErrors0(iN))-e0(negErrors0(iN)), ...
 x(negErrors0(iN)), ...
 x(negErrors0(iN)), ...
 x(negErrors0(iN))-e0(negErrors0(iN))];

 ys = [y(negErrors0(iN)), ...
 y(negErrors0(iN)), ...
 y(negErrors0(iN))-e0(negErrors0(iN)), ...
 y(negErrors0(iN))-e0(negErrors0(iN))];

 hS(cnt)= patch(xs,ys,'g');
 set(hS(cnt),'EdgeColor','g');
 set(hS(cnt),'FaceAlpha',.5);
 cnt = cnt+1;
end

uistack(h2,'top');
uistack(h1,'top');

fprintf('\nTherefore the sum of squared residuals for this model is equal to the')
fprintf('\n(unscaled) variance of the data.\n')
pause
fprintf('\nThe ratio of the area of the red boxes in Figure 1 to the area of')
fprintf('\nthe green boxes in Figure 2 is related to the important metric known')
fprintf('\nas the coefficient of determination, R^2. Specifically:\n')
fprintf('\nR^2 = 1 - Red/Green\n')
fprintf('\nNote that as the linear model fit improves, the area of the red')
fprintf('\nboxes decreases and the value of R^2 approaches one.')
pause
clc

% PART 3- "SPRINGS" INTERPRETATION
figure(3)
h1 = scatter(x,y,'filled');
xlim([min(x)-1 max(x)+1]);
ylim([min(y)-1 max(y)+1]); hold on
h2 = plot(x,yHat,'k-','Linewidth',2);
axis square
xlabel('x');
ylabel('y');

h3 = line([x;x],[y;yHat],'color','r','linestyle','--','Linewidth',2);
uistack(h1,'top');

fprintf('\nIt can also be helpful to think of errors corresponding to')
fprintf('\nindividual datapoints as springs (Figure 3, red dashes)')
fprintf('\nattached to a suspended bar (black line)\n')
fprintf('\nIf the springs are limited to only operate in the y-direction,')
fprintf('\nthen sum of the potential energies stored in the springs when')
fprintf('\nthe bar has reached its equilibrium position directly corresponds')
fprintf('\nto the sum of squares error function:\n')
fprintf('\ne = y - yHat;\nU = integral(ke)de = 1/2ke^2\n')
fprintf('\nif k = 1, then:\n')
fprintf('\nU = 1/2(y - yHat)^2, \nthe least squares error function\n')
pause
clc

close all; clear all; clc