
Model Selection: Underfitting, Overfitting, and the Bias-Variance Tradeoff

In machine learning and pattern recognition, there are many ways (an infinite number, really) of solving any one problem. Thus it is important to have an objective criterion both for assessing the accuracy of candidate approaches and for selecting the right model for the data set at hand. In this post we'll discuss the concepts of under- and overfitting and how these phenomena are related to the statistical quantities bias and variance. Finally, we will discuss how these concepts can be applied to select a model that will accurately generalize to novel scenarios/data sets.

Models for Regression

When performing regression analyses we would like to characterize how the value of some dependent variable changes as some independent variable x is varied. For example, say we would like to characterize the firing rate of a neuron in visual cortex as we vary the orientation of a grating pattern presented to the eye. We assume that there is some true relationship function f(x) that maps the independent variable values (i.e. the angle of the grating pattern) onto the dependent variable values (i.e. firing rate). We would like to determine the form of the function f(x) from observations of independent-dependent value pairs (I may also refer to these as input-output pairs, as we can think of the function f(x) taking x as input and producing an output). However, in the real world, we don’t get to observe f(x) directly, but instead get noisy observations y, where

(1) y = f(x) + \epsilon

Here we will assume that \epsilon is a random variable distributed according to a zero-mean Gaussian with variance \sigma^2. Note that because \epsilon is a random variable, y is also a random variable (with a mean that is conditioned on both x and f(x), and a variance \sigma^2).

As an example, say that the true function f(x) we want to determine has the following form (though we don't know it):

f(x) = \sin(\pi x)

Thus the observations y we get to see have the following distribution.

y = \sin(\pi x) + \mathcal N(0,\sigma^2)

Below we define the function f(x) and display it, then draw a few observation samples y, and display them as well:

% CREATE A RANDOM DATASET
rng(10)
nData = 10; % N
x = 2*(rand(1,nData)-.5);
xGrid = linspace(-1,1,100);

% DEFINE THE TARGET FUNCTION f(x)
f = @(x) sin(pi*x);

h = [];
h(1) = plot(xGrid,f(xGrid),'k','Linewidth',2);
xlabel('x')
hold on

% DEFINE AND DISPLAY y
noiseSTD = .1;
y = f(x) + noiseSTD*randn(size(x));
h(2) = scatter(x,y,'ko');
legend(h,{'f(x)','y'},'Location','Northwest');

[Figure: the target function f(x) and noisy observations y]

Again, the goal here is to characterize the function f(x). However, since we don't know the functional form of f(x), we must instead estimate some other function g(x) that we think will be a good approximation to f(x). Thus we call g(x) an estimator of f(x). In general, an estimator is some parameterized model that can capture a wide range of functional forms. One such class of estimators is the weighted combination of ordered polynomials:

g_N(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \dots + \theta_N x^N

As the polynomial order N increases, the functions g_N(x) are able to capture increasingly complex behavior. For example, g_0(x) describes a horizontal line with an adjustable vertical offset, g_1(x) describes a line with an adjustable vertical offset and an adjustable slope, and g_2(x) describes a function that also includes a quadratic term. We thus try to fit the values of the parameters for a given estimator g_N(x) to best account for the observed data in the hopes that we will also accurately approximate f(x).
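To connect this formula to the Matlab code used below, here is a minimal sketch with hypothetical coefficient values chosen purely for illustration; it shows that evaluating g_2(x) directly agrees with polyval, which (like the output of polyfit) orders coefficients from the highest power down:

% EVALUATE g_2(x) = theta_0 + theta_1*x + theta_2*x^2 (EXAMPLE COEFFICIENTS, NOT FIT TO ANY DATA)
thetaExample = [0.5, -1.2, 2.0]; % [theta_0, theta_1, theta_2]
xEval = linspace(-1,1,5);
gHand = thetaExample(1) + thetaExample(2)*xEval + thetaExample(3)*xEval.^2; % DIRECT EVALUATION OF THE FORMULA
gPoly = polyval(fliplr(thetaExample),xEval); % polyval EXPECTS DESCENDING POWERS, SO FLIP THE ORDER
% gHand AND gPoly AGREE TO WITHIN FLOATING-POINT PRECISION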

Below we fit three polynomial model functions of increasing complexity (using Matlab's \texttt{polyfit.m}) to the sampled data displayed above. Specifically, we estimate the functions g_1(x), g_3(x) and g_{10}(x).

% FIT POLYNOMIAL MODELS & DISPLAY
% (ASSUMING PREVIOUS PLOT ABOVE STILL AVAILABLE)
degree = [1,3,10];
theta = {};
cols = [.8 .05 0.05; 0.05 .6 0.05; 0.05 0.05 .6];
for iD = 1:numel(degree)
	figure(1)
	theta{iD} = polyfit(x,y,degree(iD));
	fit{iD} = polyval(theta{iD},xGrid);
	h(end+1) = plot(xGrid,fit{iD},'color',cols(iD,:),'Linewidth',2);
	xlim([-1 1])
	ylim([-1 1])
end
legend(h,'f(x)','y','g_1(x)','g_3(x)','g_{10}(x)','Location','Northwest')

[Figure: f(x), the observations y, and the three polynomial fits g_1(x), g_3(x), and g_{10}(x)]

Qualitatively, we see that the estimator g_1(x) (red line) provides a poor fit to the observed data, as well as a poor approximation to the function f(x) (black curve). The estimator g_{10}(x) (blue curve) provides a very accurate fit to the data points, but varies wildly in order to do so, and therefore provides an inaccurate approximation of f(x). Finally, the estimator g_3(x) (green curve) provides a fairly good fit to the observed data and does a much better job of approximating f(x).

Our original goal was to approximate f(x), not the data points per se. Therefore g_3(x), at least qualitatively, provides a more desirable estimate of f(x) than the other two estimators. The fits for g_1(x) and g_{10}(x) are examples of “underfitting” and “overfitting” to the observed data. Underfitting occurs when an estimator g(x) is not flexible enough to capture the underlying trends in the observed data. Overfitting occurs when an estimator is too flexible, allowing it to capture illusory trends in the data. These illusory trends are often the result of the noise in the observations y.

Variance and Bias of an Estimator

The model fits for g_N(x) discussed above were based on a single, randomly-sampled data set of observations y. However, there are, in principle, a potentially infinite number of data sets that could be observed (drawn from y). In order to determine a good model of f(x), it would be helpful to have an idea of how an estimator will perform on any or all of these potential datasets. To get an idea of how each of the estimators discussed above performs in general, we can repeat the model fitting procedure for many data sets.

Here we perform such an analysis, sampling 50 independent data sets according to Equation (1) (with \sigma=0.1), then fitting the parameters of the polynomial functions of model order N = [1,3,10] to each dataset.

% FIT MODELS TO K INDEPENDENT DATASETS
K = 50;
for iS = 1:K
	ySim = f(x) + noiseSTD*randn(size(x));
	for jD = 1:numel(degree)
		% FIT THE MODEL USING polyfit.m
		thetaTmp = polyfit(x,ySim,degree(jD));
		% EVALUATE THE MODEL FIT USING polyval.m
		simFit{jD}(iS,:) = polyval(thetaTmp,xGrid);
	end
end

% DISPLAY ALL THE MODEL FITS
h = [];
for iD = 1:numel(degree)
	figure(iD+1)
	hold on
	% PLOT THE FUNCTION FIT TO EACH DATASET
	for iS = 1:K
		h(1) = plot(xGrid,simFit{iD}(iS,:),'color',brighten(cols(iD,:),.6));
	end
	% PLOT THE AVERAGE FUNCTION ACROSS ALL FITS
	h(2) = plot(xGrid,mean(simFit{iD}),'color',cols(iD,:),'Linewidth',5);
	% PLOT THE UNDERLYING FUNCTION f(x)
	h(3) = plot(xGrid,f(xGrid),'color','k','Linewidth',3);
	% CALCULATE THE SQUARED ERROR AT EACH POINT, AVERAGED ACROSS ALL DATASETS
	squaredError = (mean(simFit{iD})-f(xGrid)).^2;
	% PLOT THE SQUARED ERROR
	h(4) = plot(xGrid,squaredError,'k--','Linewidth',3);
	uistack(h(2),'top')
	hold off
	axis square
	xlim([-1 1])
	ylim([-1 1])
	legend(h,{sprintf('Individual g_{%d}(x)',degree(iD)),'Mean of All Fits','f(x)','Squared Error'},'Location','WestOutside')
	title(sprintf('Model Order=%d',degree(iD)))
end

[Figure: individual and average fits of g_1(x) across the 50 simulated datasets]

Here are the results for the estimator g_1(x)

[Figure: individual and average fits of g_3(x) across the 50 simulated datasets]

…and for the estimator g_3(x), …

[Figure: individual and average fits of g_{10}(x) across the 50 simulated datasets]

… and for g_{10}(x).

The lightly-colored curves in each of the three plots above are an individual polynomial model fit to one of the 50 sampled data sets. The darkly-colored curve in each plot is the average over the 50 individual fits. The dark black curve is the true, underlying function f(x).

We see that for the estimator g_1(x) (red curves), model fits do not vary much from data set to data set. Thus the expected (mean) estimator fit over all the data sets, formally written as \mathbb E[g(x)], is very similar to all the individual fits.

A common quantity used in statistics to describe how much a single estimator deviates from the average estimator over datasets is the variance of the estimator, formally defined as

variance=\mathbb E[(g(x)-\mathbb E[g(x)])^2]

the variance is the average squared difference between any single data-set-dependent estimate of g(x) and the average value of g(x) estimated over all datasets.

According to the definition of variance, we can say that the estimator g_1(x) exhibits low variance.

A commonly-used metric in statistics for assessing how well an estimator g(x) approximates a target function f(x), based on its behavior over many observed data sets, is the bias of the estimator, formally defined as:

bias = \mathbb E[g(x)] - f(x)

The bias describes how much the average estimator fit over datasets \mathbb E[g(x)] deviates from the value of the underlying target function f(x).
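To make these two definitions concrete, here is a minimal sketch that approximates the bias and variance of each estimator empirically from the K = 50 simulated fits above. It assumes simFit, xGrid, degree, and f are still in the workspace; the variable names gBar, varianceCurve, and biasCurve are my own:

% EMPIRICAL BIAS AND VARIANCE OF EACH ESTIMATOR, APPROXIMATED FROM THE K SIMULATED FITS
for iD = 1:numel(degree)
	gBar = mean(simFit{iD}); % APPROXIMATES E[g_N(x)] BY AVERAGING OVER DATASETS
	varianceCurve{iD} = mean(bsxfun(@minus,simFit{iD},gBar).^2); % VARIANCE OF THE ESTIMATOR AT EACH x
	biasCurve{iD} = gBar - f(xGrid); % BIAS OF THE ESTIMATOR AT EACH x
end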

We can see from the plot for g_1(x) that \mathbb E[g_1(x)] deviates significantly from f(x). Thus we can say that the estimator g_1(x) exhibits large bias when approximating the function f(x).

Investigating the results for the estimator g_{10}(x) (blue curves), we see that each individual model fit varies dramatically from one data set to another. Thus we can say that this estimator exhibits large variance. Looking at \mathbb E[g_{10}(x)] (the dark blue curve), we see that on average the estimator g_{10}(x) provides a better approximation to f(x) than the estimator g_1(x). Thus we can say that g_{10}(x) exhibits a lower bias than the estimator g_1(x).

Investigating the fits for the estimator g_3(x) (green curves), we find that this estimator has low variance: the individual fits do not deviate much from one another. Furthermore, the average estimator \mathbb E[g_3(x)] (dark green curve) accurately approximates the true function f(x), telling us that the estimator g_3(x) also has low bias.

We established earlier that the estimator g_3(x) provided a qualitatively better fit to the function f(x) than the other two polynomial estimators for a single dataset. It appears that this is also the case over many datasets. We also find that estimator g_3(x) exhibits low bias and low variance, whereas the other two, less-desirable estimators, have either high bias or high variance. Thus it would appear that having both low bias and low variance is a reasonable criterion for selecting an accurate model of f(x).

Included in each of the three plots above is a dashed black line representing the squared difference between the average estimator \mathbb E[g_N(x)] and the true function f(x). Calculating squared model errors is a common practice for quantifying the goodness of a model fit. If we calculate the expected value of each of the dashed black curves (and assuming that all values of x are equally likely to occur), we would obtain a single value for each estimator that is the mean squared error (MSE) between the expected estimator and the true function.

\mathbb E[(\mathbb E[g(x)]-f(x))^2] = \frac{1}{N}\sum_{i=1}^N (\mathbb E[g(x_i)]-f(x_i))^2

For the estimator g_3(x), the MSE will be very small, as the dashed black curve for this estimator is near zero for all values of x. The estimators g_1(x) and g_{10}(x) would have significantly larger values. Now, because a low MSE, together with low bias and low variance, is indicative of a good estimator, it would be reasonable to assume that squared model error is directly related to bias and variance. The next section provides some formal evidence for this notion.
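Under the same assumptions as the sketch above (simFit, xGrid, degree, and f in the workspace; mseExpected is my own variable name), this MSE can be computed directly from the simulations:

% MSE BETWEEN THE AVERAGE FIT E[g_N(x)] AND f(x), I.E. THE AVERAGE OF EACH DASHED BLACK CURVE
for iD = 1:numel(degree)
	mseExpected(iD) = mean((mean(simFit{iD}) - f(xGrid)).^2);
end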

Expected Prediction Error and the Bias-variance Tradeoff

For a given estimator g(x) fit to a data set of xy pairs, we would like to know, given all the possible datasets out there, what is the expected prediction error we will observe for a new data point x^*, y^* = f(x^*) + \epsilon. If we define prediction error to be the squared difference between the model prediction g(x^*) and the observation y^*, the expected prediction error is then:

\mathbb E[(g(x^*) - y^*)^2]

If we expand this a little and use a few identities, something interesting happens:

(2) \mathbb E[(g(x^*) - y^*)^2] = \mathbb E[g(x^*)^2-2g(x^*)y^*+y^{*2}]=

(3) \mathbb E[g(x^*)^2]-2\mathbb E[g(x^*)y^*]+\mathbb E[y^{*2}]=

(4) \mathbb E[(g(x^*)-\mathbb E[g(x^*)])^2] + \mathbb E[g(x^*)]^2-2 \mathbb E[g(x^*)]f(x^*) + \mathbb E[(y^*-f(x^*))^2] + f(x^*)^2

where we have applied the lemma \mathbb E[z^2] = \mathbb E[(z - \mathbb E[z])^2] + \mathbb E[z]^2 to the first and third terms of Equation (3), used the independence of the new observation y^* from the fitted estimator g(x^*) to factor the middle term, and used the fact that \mathbb E[y^*] = f(x^*) (think of averaging over an infinite number of datasets sampled from y; all the noise will average out, leaving f(x)). Rearranging Equation (4), we obtain

(5) \mathbb E[(g(x^*)-\mathbb E[g(x^*)])^2] + \mathbb E[g(x^*)]^2 - 2 \mathbb E[g(x^*)]f(x^*) + f(x^*)^2 + \mathbb E[(y^*-f(x^*))^2]

which can be further simplified and grouped into three terms

(6) \mathbb E[(g(x^*) - \mathbb E[g(x^*)])^2] + (\mathbb E[g(x^*)]-f(x^*))^2 + \mathbb E[(y^*-f(x^*))^2]

  1. The first term is the variance of the estimator introduced above.
  2. The second term is the square of the bias of the estimator, also introduced above.
  3. The third term is the variance of the observation noise and describes how much the observations y vary from the true function f(x). Notice that the noise term does not depend on the estimator g(x). Thus the noise term is a constant that places a lower bound on expected prediction error.

Here we find that the expected prediction error on new data (x^*,y^*) (in the squared differences sense) is the combination of three terms:

Expected prediction error = estimator variance + squared estimator bias + noise

Thus the expected prediction error on new data can be used as a quantitative criterion for selecting the best model from a candidate set of estimators! It turns out that, given N new data points (\bold x^*,\bold y^*), the expected prediction error can be easily approximated as the mean squared error over data pairs:

\mathbb E[(g(\bold x^*) - \bold y^*)^2] \approx \frac{1}{N}\sum_{i=1}^N(g(x_i^*)-y_i^*)^2
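Before turning to the full demonstration below, here is a minimal, self-contained numerical check of the decomposition in Equation (6) at a single test point. This sketch is not part of the original demonstration; it assumes only the function f defined earlier, and all other variable names and values are mine:

% NUMERICALLY CHECK: EXPECTED PREDICTION ERROR = VARIANCE + BIAS^2 + NOISE (AT A SINGLE x*)
rng(1)
nDataCheck = 10; noiseSTDCheck = .1; KCheck = 5000; xStar = 0.5; order = 3;
xCheck = 2*(rand(1,nDataCheck)-.5); % A FIXED SET OF INPUT LOCATIONS
gStar = zeros(1,KCheck); predErr = zeros(1,KCheck);
for iS = 1:KCheck
	yCheck = f(xCheck) + noiseSTDCheck*randn(size(xCheck)); % A NEW TRAINING DATASET
	thetaStar = polyfit(xCheck,yCheck,order); % FIT THE ESTIMATOR TO THIS DATASET
	gStar(iS) = polyval(thetaStar,xStar); % MODEL PREDICTION AT x*
	yStar = f(xStar) + noiseSTDCheck*randn; % A NEW, INDEPENDENT OBSERVATION AT x*
	predErr(iS) = (gStar(iS) - yStar)^2;
end
lhs = mean(predErr); % EXPECTED PREDICTION ERROR
rhs = var(gStar,1) + (mean(gStar)-f(xStar))^2 + noiseSTDCheck^2; % VARIANCE + BIAS^2 + NOISE
% lhs AND rhs SHOULD AGREE TO WITHIN SAMPLING ERROR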

Below we demonstrate these findings with another set of simulations. We simulate 100 independent datasets, each with 25 xy pairs. We then partition each dataset into two non-overlapping sets: a training set used for fitting model parameters, and a testing set used to calculate prediction error. We then fit the parameters for estimators of varying complexity. Complexity is varied by using polynomial functions that range in model order from 1 (least complex) to 12 (most complex). We then calculate and display the squared bias, variance, and error on the testing set for each of the estimators:

N = 25; % # OF OBSERVATIONS PER DATASET
K = 100;% # OF DATASETS
noiseSTD = .5; % NOISE STANDARD DEV.
nTrain = ceil(N*.9); % # OF TRAINING POINTS
nPolyMax = 12; % MAXIMUM MODEL COMPLEXITY

% INITIALIZE SOME VARIABLES
xGrid = linspace(-1,1,N);
meanPrediction = zeros(K,N);
thetaHat = {};
x = linspace(-1,1,N);
x = x(randperm(N));
for iS = 1:K % LOOP OVER DATASETS
	% CREATE OBSERVED DATA, y
	y = f(x) + noiseSTD*randn(size(x));

	% CREATE TRAINING SET
	xTrain = x(1:nTrain);
	yTrain = y(1:nTrain);

	% CREATE TESTING SET
	xTest = x(nTrain+1:end);
	yTest = y(nTrain+1:end);

	% FIT MODELS
	for jD = 1:nPolyMax

		% MODEL PARAMETER ESTIMATES
		thetaHat{jD}(iS,:) = polyfit(xTrain,yTrain,jD);

		% PREDICTIONS
	yHatTrain{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTrain); % TRAINING SET
		yHatTest{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTest);% TESTING SET

		% MEAN SQUARED ERROR
		trainErrors{jD}(iS) = mean((yHatTrain{jD}(iS,:) - yTrain).^2); % TRAINING
		testErrors{jD}(iS) = mean((yHatTest{jD}(iS,:) - yTest).^2); % TESTING
	end
end

% CALCULATE AVERAGE PREDICTION ERROR, BIAS, AND VARIANCE
for iD = 1:nPolyMax
	trainError(iD) = mean(trainErrors{iD});
	testError(iD) = mean(testErrors{iD});
	biasSquared(iD) = mean((mean(yHatTest{iD})-f(xTest)).^2);
	variance(iD) = mean(var(yHatTest{iD},1));
end
[~,bestModel] = min(testError);

% DISPLAY
figure;
hold on;
plot(testError,'k','Linewidth',2);
plot(biasSquared,'r','Linewidth',2);
plot(variance,'b','Linewidth',2);
plot(biasSquared + variance,'m-.','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Test Error','Bias^2','Variance','Bias^2+Var.','Best Model')
hold off;

[Figure: test error, squared bias, variance, and bias^2 + variance as a function of model complexity]

Here we see how, as the model complexity increases, the estimator variance (blue curve) gradually increases. Additionally, as model complexity increases, the squared bias (red curve) decreases. Thus there is a tradeoff between bias and variance that comes with model complexity: models that are too complex will have high variance and low bias; models that are too simple will have high bias and low variance. The best model will have both low bias and low variance. In this example, we highlight the best estimator in terms of prediction error on the testing set (black curve) with a dashed black vertical line. The best estimator corresponds to a polynomial model of order N=3. Notice that the vertical black line is located where the function defined by the sum of the squared bias and variance (dashed magenta curve) is also at a minimum.

Notice also that the sum of the squared bias and variance has the same shape as the curve defined by the prediction error on the testing set. This exemplifies how the error on novel data can be used as a proxy for determining the best estimator from a candidate set based on squared bias and variance. The noise term in Equation (6) is also represented in the figure by the vertical displacement between the black curve and the dashed magenta curve.

It is very important to point out that all of these results are based on evaluating prediction error on novel data that were not used to estimate the model parameters. In fact, assessing a model's performance based on prediction error calculated on the same data used to estimate its parameters is highly problematic, as it causes models to always overfit. In plain terms, this means that we will always favor a more complex model if we assess goodness of fit on the training data, as a more complex model will be better able to capture small, random trends in the data due to noise.

This overfitting phenomenon is demonstrated below. We plot the error calculated on the training set (Train Error, green curve) along with the error calculated on the testing set (Test Error, black curve) for the above simulation. We also identify the best estimator as we did above.

% DISPLAY
figure, hold on;
plot(trainError,'g','Linewidth',2);
plot(testError,'k','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Train Error','Test Error','Best Model');
hold off;

[Figure: training error and testing error as a function of model complexity]

We see here that as model complexity increases, the error calculated on the training set continues to decrease, whereas the error on the testing set increases past the optimal polynomial order N=3. We showed above that error calculated on the testing set is the true indicator of how well an estimator will generalize to new data points. The error calculated on the training set strongly disagrees with the error calculated on the testing set after the optimal model complexity has been reached. Since, in general, the whole point of modeling a data set is to generalize to novel data, assessing model predictions on the training set data should be avoided.

Wrapping Up

Here we discussed how the bias and variance of an estimator are related to squared prediction error. Though we focused on regression, these concepts can also be applied to classification problems. We found that an optimal estimator will have both low variance and low bias. We further found that information about squared bias and variance is contained in expected prediction error calculated on a testing set of data not used to fit a model’s parameters.

The concepts of estimator bias and variance are generally only clear in the context of an ensemble of datasets. However, in real-world applications, there is generally only a single observed dataset. In such cases the roles of bias and variance are less obvious (though it is possible to calculate estimates of variance and bias using resampling methods such as bootstrapping). However, the direct connection we made between bias and variance and the mean-squared error calculated on a testing set gives us a direct means for assessing a group of candidate estimators in light of a single data set. We only need to partition the available data set into separate portions: one used to fit model parameters (a training set), and another used to assess prediction accuracy (a testing set). Comparing prediction accuracy across potential estimators is equivalent to assessing their biases and variances across many datasets. Note that resampling methods such as cross-validation can prove helpful here, particularly when the amount of observed data is small.
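As a closing illustration, here is a minimal sketch of k-fold cross-validation for choosing the polynomial order from a single dataset. It is not part of the original script; it assumes f and noiseSTD as defined in the simulations above, and the remaining variable names are my own:

% A MINIMAL K-FOLD CROSS-VALIDATION SKETCH FOR SELECTING THE POLYNOMIAL ORDER
nFolds = 5;
nObs = 25;
nPolyMaxCV = 12;
xCV = linspace(-1,1,nObs); xCV = xCV(randperm(nObs));
yCV = f(xCV) + noiseSTD*randn(size(xCV));
foldId = mod(0:nObs-1,nFolds) + 1; % ASSIGN EACH OBSERVATION TO ONE OF nFolds FOLDS
cvError = zeros(1,nPolyMaxCV);
for iD = 1:nPolyMaxCV
	foldErr = zeros(1,nFolds);
	for iF = 1:nFolds
		trainIdx = foldId ~= iF; % FIT ON ALL FOLDS EXCEPT THE iF-TH...
		testIdx = foldId == iF; % ...AND EVALUATE ON THE HELD-OUT FOLD
		thetaCV = polyfit(xCV(trainIdx),yCV(trainIdx),iD); % HIGH ORDERS MAY WARN ABOUT CONDITIONING
		yHatCV = polyval(thetaCV,xCV(testIdx));
		foldErr(iF) = mean((yHatCV - yCV(testIdx)).^2);
	end
	cvError(iD) = mean(foldErr); % AVERAGE HELD-OUT ERROR FOR THIS MODEL ORDER
end
[~,bestModelCV] = min(cvError);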

Note: all the code for this post, contained in a single script, is here:

clear
clc
close all

% CREATE A RANDOM DATASET
rng(10)
nData = 10; % N
x = 2*(rand(1,nData)-.5);

xGrid = linspace(-1,1,100);

% DEFINE THE TARGET FUNCTION f(x)
f = @(x) sin(pi*x);

h = [];
h(1) = plot(xGrid,f(xGrid),'k','Linewidth',2);
xlabel('x')
hold on

% DEFINE AND DISPLAY y
noiseSTD = .1;
y = f(x) + noiseSTD*randn(size(x));
h(2) = scatter(x,y,'ko');
legend(h,{'f(x)','y'},'Location','Northwest');

% FIT POLYNOMIAL MODELS & DISPLAY
% (ASSUMING PREVIOUS PLOT ABOVE STILL AVAILABLE)
degree = [1,3,10];
theta = {};
cols = [.8 .05 0.05; 0.05 .6 0.05; 0.05 0.05 .6];
for iD = 1:numel(degree)
	figure(1)
	theta{iD} = polyfit(x,y,degree(iD));
	fit{iD} = polyval(theta{iD},xGrid);
	h(end+1) = plot(xGrid,fit{iD},'color',cols(iD,:),'Linewidth',2);
	xlim([-1 1])
	ylim([-1 1])
end
legend(h,'f(x)','y','g_1(x)','g_3(x)','g_{10}(x)','Location','Northwest')

% FIT MODELS TO K INDEPENDENT DATASETS
K = 50;
for iS = 1:K
	ySim = f(x) + noiseSTD*randn(size(x));
	for jD = 1:numel(degree)
		% FIT THE MODEL USING polyfit.m
		thetaTmp = polyfit(x,ySim,degree(jD));
		% EVALUATE THE MODEL FIT USING polyval.m
		simFit{jD}(iS,:) = polyval(thetaTmp,xGrid);
	end
end

% DISPLAY ALL THE MODEL FITS
h = [];
for iD = 1:numel(degree)
	figure(iD+1)
	hold on
	% PLOT THE FUNCTION FIT TO EACH DATASET
	for iS = 1:K
		h(1) = plot(xGrid,simFit{iD}(iS,:),'color',brighten(cols(iD,:),.6));
	end
	% PLOT THE AVERAGE FUNCTION ACROSS ALL FITS
	h(2) = plot(xGrid,mean(simFit{iD}),'color',cols(iD,:),'Linewidth',5);
	% PLOT THE UNDERLYING FUNCTION f(x)
	h(3) = plot(xGrid,f(xGrid),'color','k','Linewidth',3);
	% CALCULATE THE SQUARED ERROR AT EACH POINT, AVERAGED ACROSS ALL DATASETS
	squaredError = (mean(simFit{iD})-f(xGrid)).^2;
	% PLOT THE SQUARED ERROR
	h(4) = plot(xGrid,squaredError,'k--','Linewidth',3);
	uistack(h(2),'top')
	hold off
	axis square
	xlim([-1 1])
	ylim([-1 1])
	legend(h,{sprintf('Individual g_{%d}(x)',degree(iD)),'Mean of All Fits','f(x)','Squared Error'},'Location','WestOutside')
	title(sprintf('Model Order=%d',degree(iD)))
end

N = 25; % # OF OBSERVATIONS PER DATASET
K = 100;% # OF DATASETS
noiseSTD = .5; % NOISE STANDARD DEV.
nTrain = ceil(N*.9); % # OF TRAINING POINTS
nPolyMax = 12; % MAXIMUM MODEL COMPLEXITY

% INITIALIZE SOME VARIABLES
xGrid = linspace(-1,1,N);
meanPrediction = zeros(K,N);
thetaHat = {};
x = linspace(-1,1,N);
x = x(randperm(N));
for iS = 1:K % LOOP OVER DATASETS

	% CREATE OBSERVED DATA, y
	y = f(x) + noiseSTD*randn(size(x));

	% CREATE TRAINING SET
	xTrain = x(1:nTrain);
	yTrain = y(1:nTrain);

	% CREATE TESTING SET
	xTest = x(nTrain+1:end);
	yTest = y(nTrain+1:end);

	% FIT MODELS
	for jD = 1:nPolyMax

		% MODEL PARAMETER ESTIMATES
		thetaHat{jD}(iS,:) = polyfit(xTrain,yTrain,jD);

		% PREDICTIONS
		yHatTrain{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTrain); % TRAINING SET
		yHatTest{jD}(iS,:) = polyval([thetaHat{jD}(iS,:)],xTest);% TESTING SET

		% MEAN SQUARED ERROR
		trainErrors{jD}(iS) = mean((yHatTrain{jD}(iS,:) - yTrain).^2); % TRAINING
		testErrors{jD}(iS) = mean((yHatTest{jD}(iS,:) - yTest).^2); % TESTING
	end
end

% CALCULATE AVERAGE PREDICTION ERROR, BIAS, AND VARIANCE
for iD = 1:nPolyMax
	trainError(iD) = mean(trainErrors{iD});
	testError(iD) = mean(testErrors{iD});
	biasSquared(iD) = mean((mean(yHatTest{iD})-f(xTest)).^2);
	variance(iD) = mean(var(yHatTest{iD},1));
end
[~,bestModel] = min(testError);

% DISPLAY
figure;
hold on;
plot(testError,'k','Linewidth',2);
plot(biasSquared,'r','Linewidth',2);
plot(variance,'b','Linewidth',2);
plot(biasSquared + variance,'m-.','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Test Error','Bias^2','Variance','Bias^2+Var.','Best Model')
hold off;

% DISPLAY
figure, hold on;
plot(trainError,'g','Linewidth',2);
plot(testError,'k','Linewidth',2);
yl = ylim;
plot([bestModel,bestModel],[yl(1),yl(2)],'k--');
xlim([1,nPolyMax]);
xlabel('Model Complexity (Polynomial Order)')
legend('Train Error','Test Error','Best Model');
hold off;