# Blog Archives

## Derivation: The Covariance Matrix of an OLS Estimator (and applications to GLS)

We showed in an earlier post that for the linear regression model

$y = X\beta + \epsilon$,

the optimal Ordinary Least Squares (OLS) estimator for model parameters $\beta$ is

$\hat \beta = (X^TX)^{-1}X^Ty$

However, because the responses $y$ (and, in general, the independent variables $X$) are random variables, and because $\hat \beta$ is a linear combination of $X$ and $y$, the estimator $\hat \beta$ is itself a random variable, and therefore has a covariance. The covariance matrix $C_{\hat \beta}$ of the OLS estimator is defined as:

$C_{\hat \beta} = E[(\hat \beta - \beta)(\hat \beta - \beta)^T]$

where $E[\cdot]$ denotes the expected value operator. In order to find an expression for $C_{\hat \beta}$, we first need an expression for $(\hat \beta - \beta)$. The following derives this expression:

$\hat \beta = (X^TX)^{-1}X^T(X\beta + \epsilon)$,

where we use the fact that

$y = X\beta + \epsilon$.

It follows that

$\hat \beta = (X^TX)^{-1}X^TX \beta + (X^TX)^{-1}X^T\epsilon$

$\hat \beta = \beta + (X^TX)^{-1}X^T \epsilon$

and therefore

$(\hat \beta - \beta) = (X^TX)^{-1}X^T \epsilon$

Now following the original definition for $C_{\hat \beta}$

$C_{\hat \beta} = E[(\hat \beta - \beta)(\hat \beta - \beta)^T]$

$= E[(X^TX)^{-1}X^T\epsilon((X^TX)^{-1}X^T \epsilon)^T]$

$= E[(X^TX)^{-1}X^T\epsilon \epsilon^T X(X^TX)^{-1}]$

where we take advantage of the identity $(AB)^T = B^T A^T$ to rewrite the second term in the product inside the expectation. If we take $X$ to be fixed for a given estimate $\hat \beta$ (in other words, we don't randomly resample the independent variables), then the expectation depends only on the remaining stochastic variable, namely $\epsilon$. Therefore the above expression can be written as

$C_{\hat \beta} = (X^TX)^{-1}X^T E[\epsilon \epsilon^T] X(X^TX)^{-1}$.

where $E[\epsilon \epsilon^T]$ is the covariance of the noise term in the model. Because OLS assumes uncorrelated noise, the noise covariance is equal to $\sigma^2 I$, where $\sigma^2$ is the variance along each dimension, and $I$ is an identity matrix of size equal to the number of dimensions. The expression for the estimator covariance is now:

$C_{\hat \beta} = (X^TX)^{-1}X^T (\sigma^2 I) X(X^TX)^{-1}$,

$= \sigma^2 (X^TX)^{-1} X^T X(X^TX)^{-1}$

which simplifies to

$C_{\hat \beta} = \sigma^2 (X^T X)^{-1}$

A further simplifying assumption often made under OLS is that $\epsilon$ is drawn from a zero-mean multivariate Gaussian distribution with unit variance (i.e. $\sigma^2 = 1$), resulting in a noise covariance equal to the identity. Thus

$C_{\hat \beta} = (X^TX)^{-1}$
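
As a quick sanity check, we can verify this result numerically. Below is a minimal Matlab sketch (the problem size and true parameters are made up for illustration, not from an earlier post) that holds $X$ fixed, refits OLS many times with fresh unit-variance Gaussian noise, and compares the empirical covariance of $\hat \beta$ to $(X^TX)^{-1}$:

%% NUMERICALLY VERIFY THE OLS ESTIMATOR COVARIANCE (ASSUMES sigma^2 = 1)
n = 100; p = 3;                      % ILLUSTRATIVE PROBLEM SIZE
X = randn(n,p);                      % FIXED INDEPENDENT VARIABLES
beta = [1; -2; .5];                  % ILLUSTRATIVE TRUE PARAMETERS
nSim = 5000;                         % NUMBER OF MONTE CARLO REFITS
betaHats = zeros(p,nSim);
for iS = 1:nSim
    epsilon = randn(n,1);            % FRESH UNIT-VARIANCE GAUSSIAN NOISE
    y = X*beta + epsilon;
    betaHats(:,iS) = (X'*X)\(X'*y);  % OLS ESTIMATE
end
empiricalCov = cov(betaHats')        % EMPIRICAL COVARIANCE OF betaHat
theoreticalCov = inv(X'*X)           % THE DERIVED COVARIANCE

The two matrices should agree up to Monte Carlo error.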

## Applying the derivation results to Generalized Least Squares

Notice that the expression for the OLS estimator covariance is equal to the inverse term $(X^TX)^{-1}$ that forms the first factor in the expression for the OLS estimator. Identifying the covariance of the OLS estimator in this way gives a helpful heuristic for identifying the covariance of related estimators that do not make the simplifying assumptions about the noise covariance that OLS makes. For instance, in Generalized Least Squares (GLS) the noise terms are allowed to co-vary; their covariance is represented by a noise covariance matrix $C_{\epsilon}$. This gives the model form

$y = X \beta + \epsilon$,

where $E[\epsilon | X] = 0$ and $\text{Var}[\epsilon | X] = C_{\epsilon}$.

In other words, under GLS the noise terms have zero mean and covariance $C_{\epsilon}$. It turns out that the estimator for the GLS model parameters is

$\hat \beta_{GLS} = (X^T C_{\epsilon}^{-1} X)^{-1} X^T C_{\epsilon}^{-1}y$.

Notice the similarity between the GLS and OLS estimators: the only difference is that in GLS, the solution for the parameters is scaled by the inverse of the noise covariance. And, in a similar fashion to the OLS estimator, the covariance of the GLS estimator is the first term in the product that defines it:

$C_{\hat \beta, GLS} = (X^T C_{\epsilon}^{-1}X)^{-1}$
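
To see the two estimators side by side, here's a minimal sketch with made-up values (the AR(1)-style noise covariance, sample size, and parameters are assumptions for illustration) that simulates correlated noise and computes the GLS estimate and its covariance:

%% GLS UNDER CORRELATED NOISE (ILLUSTRATIVE VALUES)
n = 50;
X = [ones(n,1), (1:n)'];       % DESIGN: INTERCEPT PLUS LINEAR TREND
beta = [1; .1];                % ILLUSTRATIVE TRUE PARAMETERS

% AR(1)-STYLE NOISE COVARIANCE WITH CORRELATION rho = .8 (AN ASSUMPTION)
rho = .8;
C_eps = rho.^abs(bsxfun(@minus,(1:n)',1:n));
epsilon = chol(C_eps,'lower')*randn(n,1);  % SAMPLE OF CORRELATED NOISE
y = X*beta + epsilon;

betaOLS = (X'*X)\(X'*y)                    % OLS (IGNORES CORRELATION)
betaGLS = (X'*(C_eps\X))\(X'*(C_eps\y))    % GLS ESTIMATE
C_betaGLS = inv(X'*(C_eps\X))              % GLS ESTIMATOR COVARIANCE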

## fMRI in Neuroscience: Modeling the HRF With FIR Basis Functions

In the previous post on fMRI methods, we discussed how to model the selectivity of a voxel using the General Linear Model (GLM). One of the basic assumptions that we must make in order to use the GLM is that we also have an accurate model of the Hemodynamic Response Function (HRF) for the voxel. A common practice is to use a canonical HRF model established from previous empirical studies of fMRI timeseries. However, voxels throughout the brain and across subjects exhibit HRFs with a variety of shapes, so the canonical model is often incorrect. It therefore becomes necessary to estimate the shape of the HRF for each voxel.

A number of methods have been developed for estimating HRFs, most of which are based on temporal basis function models. (For details on basis function models, see this previous post.) There are a number of basis function sets available, but in this post we'll discuss modeling the HRF using a flexible basis set composed of a series of delayed impulses, called the Finite Impulse Response (FIR) basis.

## Modeling HRFs With a Set of Time-delayed Impulses

Let’s say that we have an HRF with the following shape.

A Model HRF.

We would like to be able to model the HRF as a weighted combination of simple basis functions. The simplest set of basis functions is the FIR basis, which is a series of $H$ distinct unit-magnitude (i.e. equal to one) impulses, each of which is delayed in time by $t = 1 \dots H$ TRs. An example of  modeling the HRF above using FIR basis functions is below:

%% REPRESENTING AN HRF WITH FIR BASIS FUNCTIONS
% CREATE ACTUAL HRF (AS MEASURED BY MRI SCANNER)
rand('seed',12345)
TR = 1;                             % REPETITION TIME
t = 1:TR:20;                        % MEASUREMENT TIMES
h = gampdf(t,6) + -.5*gampdf(t,10); % ACTUAL HRF
h = h/max(h);                       % SCALE TO MAX AMPLITUDE OF 1

% DISPLAY THE HRF
figure;
stem(t,h,'k','Linewidth',2)
axis square
xlabel('Time (TR)')
ylabel(sprintf('Basis Function Contribution\nTo HRF'))
title(sprintf('HRF as a Series of \nWeighted FIR Basis Functions'))

% CREATE/DISPLAY FIR REPRESENTATION
figure; hold on
cnt = 1;

% COLOR BASIS FUNCTIONS ACCORDING TO HRF WEIGHT
map = jet(64);
cRange = linspace(min(h),max(h),64);

for iT = numel(h):-1:1
    % EACH BASIS IS A UNIT IMPULSE AT TIME cnt (DRAWN ON A BASELINE
    % OF 1 AND OFFSET VERTICALLY FOR DISPLAY)
    firSignal = ones(size(h));
    firSignal(cnt) = 2;
    [~,cIdx] = min(abs(cRange-h(cnt))); % COLOR INDEX FOR THIS WEIGHT
    color = map(cIdx,:);
    plot(1:numel(h),firSignal + 2*(iT-1),'Color',color,'Linewidth',2)
    cnt = cnt+1;
end
colormap(map); colorbar; caxis([min(h) max(h)]);

% DISPLAY
axis square;
ylabel('Basis Function')
xlabel('Time (TR)')
set(gca,'YTick',0:2:39,'YTickLabel',20:-1:1)
title(sprintf('Weighted FIR Basis\n Set (20 Functions)'));


Representing the HRF above as a weighted set of FIR basis functions.
The color of each of the 20 basis functions corresponds to its weight.

Each basis function $b_t$ contains a unit impulse at time $t = 1 \dots 20$ and is zero otherwise. Weighting each basis function $b_t$ by the corresponding value of the HRF at time $t$, followed by summing across all the functions, gives the target HRF in the first plot above. The FIR basis model makes no assumptions about the shape of the HRF (the weight applied to each basis function can take any value), which allows the model to capture a wide range of HRF profiles.
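
To make this concrete, note that the FIR basis set, with each basis function along a column, is just the identity matrix, so a couple of lines of Matlab (reusing the $h$ defined in the code above) verify that the weighted sum of the bases reproduces the HRF exactly:

% VERIFY THE WEIGHTED SUM OF FIR BASES REPRODUCES THE HRF
% (ASSUMES h FROM THE CODE ABOVE IS STILL IN THE WORKSPACE)
B = eye(numel(h));        % EACH COLUMN IS ONE UNIT-IMPULSE BASIS FUNCTION
hRecon = B*h';            % WEIGHT EACH BASIS BY h(t) AND SUM
max(abs(hRecon - h'))     % SHOULD BE EXACTLY ZERO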

Given an experiment where various stimuli are presented to a subject and BOLD responses are evoked within the subject's brain, the goal is to determine the HRF to each of the stimuli within each voxel. Let's take a look at a concrete example of how we can use the FIR basis to simultaneously estimate HRFs to many stimuli for multiple voxels with distinct tuning properties.

## Estimating the HRF of Simulated Voxels Using the FIR Basis

For this example we revisit a simulation of voxels with 4 different types of tuning (for details, see the previous post on fMRI in Neuroscience). One voxel is strongly tuned for visual stimuli (such as a light), the second voxel is weakly tuned for auditory stimuli (such as a tone), the third is moderately tuned for somatosensory stimuli (such as warmth applied to the palm), and the final voxel is unselective (i.e. weakly and equally selective for all three types of stimuli). We simulate an experiment where the blood-oxygen-level-dependent (BOLD) signals evoked in each voxel by a series of non-overlapping lights, tones, and applications of warmth to the palm are measured over $T = 320$ fMRI measurements (TRs). Below is the simulation of the experiment and the resulting simulated BOLD signals:

%% SIMULATE AN EXPERIMENT
% SOME CONSTANTS
trPerStim = 30;
nRepeat = 10;
nTRs = trPerStim*nRepeat + length(h);
nCond = 3;
nVox = 4;
impulseTrain0 = zeros(1,nTRs);

% RANDOM ONSET TIMES (TRs)
onsetIdx = randperm(nTRs-length(h));

% VISUAL STIMULUS
impulseTrainLight = impulseTrain0;
impulseTrainLight(onsetIdx(1:nRepeat)) = 1;
onsetIdx(1:nRepeat) = [];

% AUDITORY STIMULUS
impulseTrainTone = impulseTrain0;
impulseTrainTone(onsetIdx(1:nRepeat)) = 1;
onsetIdx(1:nRepeat) = [];

% SOMATOSENSORY STIMULUS
impulseTrainHeat = impulseTrain0;
impulseTrainHeat(onsetIdx(1:nRepeat)) = 1;

% EXPERIMENT DESIGN / STIMULUS SEQUENCE
D = [impulseTrainLight',impulseTrainTone',impulseTrainHeat'];
X = conv2(D,h');
X = X(1:nTRs,:);

%% SIMULATE RESPONSES OF VOXELS WITH VARIOUS SELECTIVITIES
visualTuning =   [4 0 0]; % VISUAL VOXEL TUNING
auditoryTuning = [0 2 0]; % AUDITORY VOXEL TUNING
somatoTuning =   [0 0 3]; % SOMATOSENSORY VOXEL TUNING
noTuning =       [1 1 1]; % NON-SELECTIVE

beta = [visualTuning', ...
        auditoryTuning', ...
        somatoTuning', ...
        noTuning'];

y0 = X*beta;
SNR = 5;
noiseSTD = max(y0)/SNR;
noise = bsxfun(@times,randn(size(y0)),noiseSTD);
y = y0 + noise; % VOXEL RESPONSES

% DISPLAY VOXEL TIMECOURSES
voxNames = {'Visual','Auditory','Somat.','Unselective'};
cols = lines(4);
figure;
for iV = 1:4
    subplot(4,1,iV)
    plot(y(:,iV),'Color',cols(iV,:),'Linewidth',2); xlim([0,nTRs]);
    ylabel('BOLD Signal')
    legend(sprintf('%s Voxel',voxNames{iV}))
end
xlabel('Time (TR)')
set(gcf,'Position',[100,100,880,500])


Four simulated voxels: one with strong visual tuning, one with weak auditory tuning,
one with moderate somatosensory tuning, and one unselective.

Now let's estimate the HRF of each voxel to each of the $C = 3$ stimulus conditions using an FIR basis function model. To do so, we create a design matrix composed of successive sets of delayed impulses, where each set of impulses begins at the onsets of the corresponding stimulus condition. From the $[T \times C]$-sized stimulus onset matrix $D$, we calculate a $[T \times HC]$-sized FIR design matrix $X_{FIR}$, where $H$ is the assumed length of the HRF we are trying to estimate. The code for creating and displaying the design matrix for an assumed HRF length $H = 16$ is below:

%% ESTIMATE HRF USING FIR BASIS SET
% CREATE FIR DESIGN MATRIX
hrfLen = 16;  % WE ASSUME THE HRF IS 16 TRs LONG

% BASIS SET FOR EACH CONDITION IS A TRAIN OF IMPULSES
X_FIR = zeros(nTRs,hrfLen*nCond);

for iC = 1:nCond
    onsets = find(D(:,iC));
    idxCols = (iC-1)*hrfLen+1:iC*hrfLen;
    for jO = 1:numel(onsets)
        idxRows = onsets(jO):onsets(jO)+hrfLen-1;
        for kR = 1:numel(idxRows)
            X_FIR(idxRows(kR),idxCols(kR)) = 1;
        end
    end
end

% DISPLAY
figure;
subplot(121);
imagesc(D);
colormap gray;
set(gca,'XTickLabel',{'Light','Tone','Som.'})
title('Stimulus Train');

subplot(122);
imagesc(X_FIR);
colormap gray;
title('FIR Design Matrix');
set(gca,'XTick',[8,24,40])
set(gca,'XTickLabel',{'Light','Tone','Som.'})
set(gcf,'Position',[100,100,550,400])


Left: the stimulus onset matrix (size = [T x 3]).
Right: the corresponding FIR design matrix (size = [T x 3*H]).

In the right panel of the plot above, we see the form of the FIR design matrix $X_{FIR}$ that corresponds to the stimulus onset matrix on the left. For each voxel, we want to determine the weight on each column of $X_{FIR}$ that best explains the BOLD signals $y$ measured from that voxel. We can frame this problem in terms of a General Linear Model:

$y = X_{FIR}\beta_{FIR}$

where $\beta_{FIR}$ are the weights on each column of the FIR design matrix. If we set the values of $\beta_{FIR}$ so as to minimize the sum of the squared errors (SSE) between the model above and the actual measured responses

$SSE = \sum_{i=1}^{T}(y^{(i)} - X_{FIR}^{(i)}\beta_{FIR})^2$,

then we can use the Ordinary Least Squares (OLS) solution discussed earlier to solve for $\beta_{FIR}$. Specifically, we solve for the weights as:

$\hat \beta_{FIR} = (X_{FIR}^T X_{FIR})^{-1} X_{FIR}^T y$

Once determined, the resulting $[CH \times V]$ matrix of weights $\hat \beta_{FIR}$ holds the HRF of each of the $V = 4$ voxels to each stimulus condition along its columns. The first $H$ weights (1-16) along a column define the HRF to the first stimulus (the light), the second $H$ weights (17-32) define the HRF to the second stimulus (the tone), and so on. Below we parse out these weights and display the resulting HRFs for each voxel:

% ESTIMATE HRF FOR EACH CONDITION AND VOXEL
betaHatFIR = pinv(X_FIR'*X_FIR)*X_FIR'*y;

% RESHAPE HRFS
hHatFIR = reshape(betaHatFIR,hrfLen,nCond,nVox);

% DISPLAY
figure
cols = lines(4);
names = {'Visual','Auditory','Somat.','Unselective'};
for iV = 1:nVox
    subplot(2,2,iV)
    hold on;
    for jC = 1:nCond
        hl = plot(1:hrfLen,hHatFIR(:,jC,iV),'Linewidth',2);
        set(hl,'Color',cols(jC,:))
    end
    hl = plot(1:numel(h),h,'Linewidth',2);
    xlabel('TR')
    legend({'Light','Tone','Heat','True HRF'})
    set(hl,'Color','k')
    xlim([0 hrfLen])
    grid on
    axis tight
    title(sprintf('%s Voxel',names{iV}));
end
set(gcf,'Position',[100,100,880,500])


HRF estimates for each voxel to each of the 3 stimuli. Black plots show the shape of the true HRF.

Here we see that the estimated HRFs accurately capture both the shape of the HRF and the selectivity of each of the voxels. For instance, the HRFs estimated from the responses of the first voxel indicate strong tuning for the light stimulus: the HRF estimated for the light stimulus has an amplitude approximately 4 times that of the true HRF, which corresponds with the actual tuning of the voxel (compare this to the value of $\beta(1,1)$). Additionally, the time delay to the maximum value (time-to-peak) of the HRF to the light is the same as that of the true HRF. The first voxel's HRFs estimated for the other stimuli are essentially noise around baseline, which (correctly) indicates that the first voxel has no selectivity for those stimuli. Further inspection of the remaining estimated HRFs indicates that accurate tuning and HRF shape are recovered for the other three voxels as well.

## Wrapping Up

In this post we discussed how to apply a simple basis function model (the FIR basis) to estimate the HRF profile and get an idea of the tuning of individual voxels. Though the FIR basis model can accurately model any HRF shape, it is oftentimes too flexible: in scenarios where voxel signals are very noisy, the FIR basis model will tend to model the noise.

Additionally, the FIR basis set needs to incorporate a basis function for each time measurement. For the example above, we assumed the HRF had a length of 16 TRs, so the FIR basis had 16 tunable weights for each condition. This leads to a GLM with 48 ($C \times H = 3 \times 16$) tunable parameters. For experiments with many different stimulus conditions, the number of parameters can grow quickly (as $HC$). If the number of parameters is comparable to (or larger than) the number of BOLD signal measurements, it will be difficult to accurately estimate $\hat \beta_{FIR}$. As we'll see in later posts, we can often improve upon the FIR basis set by using more clever basis functions.

Another important but indirect issue that affects estimating the HRF is the experimental design, or rather the schedule used to present the stimuli. In the example above, the stimuli were presented in a random, non-overlapping order. What if the stimuli were presented in the same order every time, with some set frequency? We'll discuss in a later post the concept of design efficiency and how it affects our ability to characterize the shape of the HRF and, consequently, voxel selectivity.

## Basis Function Models

Oftentimes we want to model data $y$ that emerge from some underlying function $f(x)$ of independent variables $x$, such that for some future input we'll be able to accurately predict the corresponding output values. There are various methods for devising such a model, all of which make particular assumptions about the types of functions the model can emulate. In this post we'll focus on one set of methods called Basis Function Models (BFMs).

## Basis Sets and Linear Independence

The idea behind BFMs is to model the complex target function $f(x)$ as a linear combination of a set of simpler functions for which we have closed-form expressions. This set of simpler functions is called a basis set, and it works in a similar manner to the bases that compose vector spaces in linear algebra. For instance, any vector in the 2D spatial coordinate system (which is a vector space in $\mathbb R^2$) can be composed of linear combinations of the $x$ and $y$ directions. This is demonstrated in the figures below:

Illustration of basis vectors along the x (blue) and y(red) directions, along with a target vector (black)

Above we see a target vector in black pointing from the origin (at xy coordinates (0,0)) to the xy coordinates (2,3), along with the coordinate basis vectors $b^{(x)}$ and $b^{(y)}$, each of which points one unit along the x- (in blue) and y- (in red) directions.

We can compose the target vector as a linear combination of the x- and y- basis vectors. Namely, the target vector can be composed by adding (in the vector sense) 2 times the basis $b^{(x)}$ to 3 times the basis $b^{(y)}$:

Composing the target vector as a linear combination of the basis vectors

One thing that is important to note about the bases $b^{(x)}$ and $b^{(y)}$ is that they are linearly independent: no matter how hard you try, you can't compose the basis vector $b^{(x)}$ as a linear combination of the other basis vector $b^{(y)}$, and vice versa. In the 2D vector space we can easily see this because the red and blue lines are perpendicular to one another (a condition called orthogonality). But we can also formally determine whether two (column) vectors are independent by calculating the (column) rank of a matrix $A$ formed by concatenating the two vectors:

$A = [b^{(x)},b^{(y)}]$

$= \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}$

The rank of a matrix is the number of linearly independent columns in the matrix. If the rank of $A$ has the same value as the number of columns in the matrix, then the columns of $A$ form a linearly independent set of vectors. The rank of $A$ above is 2, as is the number of columns, so the basis vectors $b^{(x)}$ and $b^{(y)}$ are indeed linearly independent. We can use this same rank-based test to verify whether vectors of much higher dimension than two are independent. Linear independence of the basis set is important if we want to be able to define a unique model.

%% EXAMPLE OF COMPOSING A VECTOR OF BASIS VECTORS
figure;
targetVector = [0 0; 2 3];
basisX = [0 0; 1 0];
basisY = [0 0; 0 1];
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(basisX(:,1),basisX(:,2),'b','Linewidth',2);
hy = plot(basisY(:,1),basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction')
axis square
grid
legend([hv,hx,hy],{'Target','b^{(x)}','b^{(y)}'},'Location','bestoutside');

figure
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(2*basisX(:,1),2*basisX(:,2),'b','Linewidth',2);
hy = plot(3*basisY(:,1),3*basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction');
axis square
grid
legend([hv,hx,hy],{'Target','2b^{(x)}','3b^{(y)}'},'Location','bestoutside')

A = [1 0;
     0 1];

% TEST TO SEE IF basisX AND basisY ARE
% LINEARLY INDEPENDENT
isIndependent = rank(A) == size(A,2)


## Modeling Functions with Linear Basis Sets

In a similar fashion to creating arbitrary vectors with vector bases, we can compose arbitrary functions in “function space” as a linear combination of simpler basis functions  (note that basis functions are also sometimes called kernels). One such set of basis functions is the set of polynomials:

$b^{(i)} = x^i$

Here each basis function is a polynomial of order $i$. We can then create a basis set of $D$ such functions, where the $D$-th function is $b^{(D)}$, and model the function $f(x)$ as a linear combination of these $D$ polynomial bases:

$f(x) = \beta_0 b^{(0)} + \beta_1 b^{(1)} + \dots + \beta_D b^{(D)}$

where $\beta_i$ is the weight on the $i$-th basis function. In matrix format this model takes the form

$f(x) = A \beta$

Here, again, the matrix $A$ is the concatenation of each of the polynomial bases along its columns. What we then want to do is determine the weights $\beta$ such that $A\beta$ is as close to $f(x)$ as possible. We can do this using Ordinary Least Squares (OLS) regression, which was discussed in earlier posts. The optimal solution for the weights under OLS is:

$\hat \beta = (A^T A)^{-1}A^T y$

Let’s take a look at a concrete example, where we use a set of  polynomial basis functions to model a complex data trend.

## Example: Modeling $f(x)$ with Polynomial Basis Functions

In this example we model a set of data $y$ whose underlying function $f(x)$ is:

$f(x) = \cos(x/2) + \sin(x)$

In particular we'll create a set of 10 polynomial basis functions and fit the $\beta$ weights using OLS. The Matlab code for this example, and the resulting graphical output, are below:

Left: Basis set of 10 (scaled) polynomial functions. Center: estimated model weights for basis set. Right: Underlying model f(x) (blue), data sampled from the model (black circles), and the linear basis model fit (red).

%% EXAMPLE: MODELING A TARGET FUNCTION
x = [0:.1:20]';
f = @(x) cos(.5*x) + sin(x);  % TARGET FUNCTION (inline() IS DEPRECATED)

% CREATE A POLYNOMIAL BASIS SET
polyBasis = [];
nPoly = 10;
px = linspace(-10,10,numel(x))';
for iP = 1:nPoly
    polyParams = zeros(1,nPoly);
    polyParams(iP) = 1;
    polyBasis = [polyBasis,polyval(polyParams,px)];
end

% SCALE THE BASIS SET TO HAVE MAX AMPLITUDE OF 1
polyBasis = fliplr(bsxfun(@rdivide,polyBasis,max(polyBasis)));

% CHECK LINEAR INDEPENDENCE
isIndependent = rank(polyBasis) == size(polyBasis,2)

% SAMPLE SOME DATA FROM THE TARGET FUNCTION
randIdx = randperm(numel(x));
xx = x(randIdx(1:30));
y = f(xx) + randn(size(xx))*.2;

% FIT THE POLYNOMIAL BASIS MODEL TO THE DATA(USING polyfit.m)
basisWeights = polyfit(xx,y,nPoly);

% MODEL OF TARGET FUNCTION
yHat = polyval(basisWeights,x);

% DISPLAY BASIS SET AND MODEL
subplot(131)
plot(polyBasis,'Linewidth',2)
axis square
xlim([0,numel(px)])
ylim([-1.2 1.2])
title(sprintf('Polynomial Basis Set\n(%d Functions)',nPoly))

subplot(132)
bar(fliplr(basisWeights));
axis square
xlim([0 nPoly + 1]); colormap hot
xlabel('Basis Function')
ylabel('Estimated Weight')
title('Model Weights on Basis Functions')

subplot(133);
hy = plot(x,f(x),'b','Linewidth',2); hold on
hd = scatter(xx,y,'ko');
hh = plot(x,yHat,'r','Linewidth',2);
xlim([0,max(x)])
axis square
legend([hy,hd,hh],{'f(x)','y','Model'},'Location','Best')
title('Model Fit')
hold off;


First off, let's make sure that the polynomial basis is indeed linearly independent. As above, we compute the rank of the matrix composed of the basis functions along its columns. The rank of the basis matrix is 10, which is also the number of columns of the matrix (see the isIndependent check in the code above). This confirms that the basis functions are linearly independent.

We fit the model using Matlab's built-in function $\text{polyfit.m}$, which performs OLS on a polynomial basis set. We see that the polynomial basis model (including the zeroth-order bias term) does a pretty good job of modeling a very complex function $f(x)$. We essentially get to model a highly nonlinear function using simple linear regression (i.e. OLS).
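
If we wanted to skip $\text{polyfit.m}$ and apply the OLS solution from above directly, a minimal sketch (reusing the variables xx, y, and nPoly from the code above) would build the polynomial basis evaluated at the sampled points and solve the least-squares problem; up to numerical precision this recovers the same coefficients as $\text{polyfit.m}$:

% DIRECT OLS ON THE POLYNOMIAL BASIS (ASSUMES xx, y, nPoly IN WORKSPACE)
A = bsxfun(@power,xx,0:nPoly);   % COLUMNS ARE xx.^0, xx.^1, ..., xx.^nPoly
betaDirect = A\y;                % LEAST-SQUARES SOLVE OF y = A*beta
betaPolyfit = fliplr(polyfit(xx,y,nPoly))'; % polyfit RETURNS HIGHEST ORDER FIRST
% THE TWO SHOULD AGREE CLOSELY (THE BASIS MATRIX IS ILL-CONDITIONED AT
% THIS DEGREE, SO BOTH SOLVES MAY TRIGGER CONDITIONING WARNINGS)
max(abs(betaDirect - betaPolyfit))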

## Wrapping up

Though the polynomial basis set works well in many modeling problems, it may be a poor fit for some applications. Luckily we aren't limited to using only polynomial basis functions. Other basis sets include Gaussian basis functions, sigmoid basis functions, and finite impulse response (FIR) basis functions, to name a few. (In a future post, we'll demonstrate how the FIR basis set can be used to model the hemodynamic response function (HRF) of an fMRI voxel measured from the brain.)

## fMRI in Neuroscience: Estimating Voxel Selectivity & the General Linear Model (GLM)

In a typical fMRI experiment a series of stimuli are presented to an observer and the evoked brain activity–in the form of blood-oxygen-level-dependent (BOLD) signals–is measured from tiny chunks of the brain called voxels. The task of the researcher is then to infer the tuning of the voxels to features of the presented stimuli based on the evoked BOLD signals. In order to make this inference quantitatively, it is necessary to have a model of how BOLD signals are evoked in the presence of stimuli. In this post we'll develop such a model of evoked BOLD signals, and use it to recover the tuning of individual voxels measured during an fMRI experiment.

## Modeling the Evoked BOLD Signals — The Stimulus and Design Matrices

Suppose we are running an event-related fMRI experiment where we present $C$ different stimulus conditions to an observer while recording the BOLD signals evoked in their brain over a series of $T$ consecutive fMRI measurements (TRs). We can represent the stimulus presentation quantitatively with a $T \times C$ binary Stimulus Matrix, $D$, whose entries indicate the onset of each stimulus condition (columns) at each point in time (rows). Now let's assume that we have an accurate model of how a voxel is activated by a single, very short stimulus. This activation model is called the hemodynamic response function (HRF), $h$, for the voxel and, as we'll discuss in a later post, can be estimated from the measured BOLD signals. Let's assume for now that the voxel is activated to an equal degree by all stimuli. In this scenario we can represent the BOLD signal evoked over the entire experiment with another $T \times C$ matrix $X$, called the Design Matrix, which is the convolution of the stimulus matrix $D$ with the voxel's HRF $h$:

$X = D * h$

Note that this model of the BOLD signal is an example of the Finite Impulse Response (FIR) model that was introduced in the previous post on fMRI Basics.

To make the concepts of $D$ and $X$ more concrete, let’s say our experiment consists of $C = 3$ different stimulus conditions: a light, a tone, and heat applied to the palm. Each stimulus condition is presented twice in a staggered manner during 80 TRs of fMRI measurements. The stimulus matrix and the design matrix are simulated here in Matlab:

TR = 1; % REPETITION TIME
t = 1:TR:20; % MEASUREMENTS
h = gampdf(t,6) + -.5*gampdf(t,10); % HRF MODEL
h = h/max(h); % SCALE HRF TO HAVE MAX AMPLITUDE OF 1

trPerStim = 30; % # TR PER STIMULUS
nRepeat = 2; % # OF STIMULUS REPEATS
nTRs = trPerStim*nRepeat + length(h);
impulseTrain0 = zeros(1,nTRs);

% VISUAL STIMULUS
impulseTrainLight = impulseTrain0;
impulseTrainLight(1:trPerStim:trPerStim*nRepeat) = 1;

% AUDITORY STIMULUS
impulseTrainTone = impulseTrain0;
impulseTrainTone(5:trPerStim:trPerStim*nRepeat) = 1;

% SOMATOSENSORY STIMULUS
impulseTrainHeat = impulseTrain0;
impulseTrainHeat(9:trPerStim:trPerStim*nRepeat) = 1;

% COMBINATION OF ALL STIMULI
impulseTrainAll = impulseTrainLight + impulseTrainTone + impulseTrainHeat;

% SIMULATE VOXELS WITH VARIOUS SELECTIVITIES
visualTuning = [4 0 0]; % VISUAL VOXEL TUNING
auditoryTuning = [0 2 0]; % AUDITORY VOXEL TUNING
somatoTuning = [0 0 3]; % SOMATOSENSORY VOXEL TUNING
noTuning = [1 1 1]; % NON-SELECTIVE

beta = [visualTuning', ...
        auditoryTuning', ...
        somatoTuning', ...
        noTuning'];

% EXPERIMENT DESIGN / STIMULUS SEQUENCE
D = [impulseTrainLight',impulseTrainTone',impulseTrainHeat'];

% CREATE DESIGN MATRIX FOR THE THREE STIMULI
X = conv2(D,h'); % X = D * h
X(nTRs+1:end,:) = []; % REMOVE EXCESS FROM CONVOLUTION

% DISPLAY STIMULUS AND DESIGN MATRICES
subplot(121); imagesc(D); colormap gray;
xlabel('Stimulus Condition')
ylabel('Time (TRs)');
title('Stimulus Train, D');
set(gca,'XTick',1:3); set(gca,'XTickLabel',{'Light','Tone','Heat'});

subplot(122);
imagesc(X);
xlabel('Stimulus Condition')
ylabel('Time (TRs)');
title('Design Matrix, X = D * h')
set(gca,'XTick',1:3); set(gca,'XTickLabel',{'Light','Tone','Heat'});



Stimulus presentation matrix, D (left) and the Design Matrix X for an experiment with three stimulus conditions: a light, a tone, and heat applied to the palm

Each column of the design matrix above (the right subpanel of the figure) is essentially a model of the BOLD signal evoked independently by each stimulus condition, and the total signal is simply the sum of these independent signals.
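
The code above defines (but never uses) impulseTrainAll, which gives us a quick way to check this linearity claim: convolving the combined impulse train with the HRF should produce the same signal as summing the columns of $X$.

% LINEARITY CHECK: CONVOLVING THE COMBINED STIMULUS TRAIN WITH THE HRF
% EQUALS THE SUM OF THE PER-CONDITION DESIGN MATRIX COLUMNS
xAll = conv(impulseTrainAll,h);
xAll = xAll(1:nTRs)';       % TRIM CONVOLUTION EXCESS, MAKE COLUMN
max(abs(xAll - sum(X,2)))   % SHOULD BE ZERO (UP TO FLOATING POINT)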

## Modeling Voxel Tuning — The Selectivity Matrix

In order to develop the concept of the design matrix, we assumed that our theoretical voxel is equally tuned to all stimuli. However, few voxels in the brain exhibit such non-selective tuning. For instance, a voxel located in visual cortex will be more selective for the light than for the tone or the heat stimulus. A voxel in auditory cortex will be more selective for the tone than for the other two stimuli. A voxel in somatosensory cortex will likely be more selective for the heat than for the visual or auditory stimuli. How can we represent the tuning of these different voxels?

A simple way to model tuning to the stimulus conditions in an experiment is to multiply each column of the design matrix by a weight that modulates the BOLD signal according to the presence of the corresponding stimulus condition. For example, we could model a visual cortex voxel by weighting the first column of $X$ with a positive value and the remaining two columns with much smaller values (or even negative values to model suppression). It turns out that we can model the selectivity of $V$ individual voxels simultaneously through a $C \times V$ Selectivity Matrix $\beta$. Each entry in $\beta$ is the amount that the $v$-th voxel (columns) is tuned to the $c$-th stimulus condition (rows). Given the design matrix and the selectivity matrix, we can then predict the BOLD signals $y$ of selectively-tuned voxels with a simple matrix multiplication:

$y = X\beta$

Keeping with our example experiment, let's assume that we are modeling the selectivity of four different voxels: a strongly-tuned visual voxel, a weakly-tuned auditory voxel, a moderately-tuned somatosensory voxel, and an unselective voxel that is very weakly tuned to all three stimulus conditions. We can represent the tuning of these four voxels with a $3 \times 4$ selectivity matrix. Below we define a selectivity matrix representing the tuning of these 4 theoretical voxels and simulate the BOLD signals evoked in our 3-stimulus experiment.

% SIMULATE NOISELESS VOXELS' BOLD SIGNAL
% (ASSUMING VARIABLES FROM ABOVE STILL IN WORKSPACE)
y0 = X*beta;

figure;
subplot(211);
imagesc(beta); colormap hot;
axis tight
ylabel('Condition')
set(gca,'YTickLabel',{'Visual','Auditory','Somato.'})
xlabel('Voxel');
set(gca,'XTick',1:4)
title('Voxel Selectivity, \beta')

subplot(212);
plot(y0,'Linewidth',2);
legend({'Visual Voxel','Auditory Voxel','Somato. Voxel','Unselective'});
xlabel('Time (TRs)'); ylabel('BOLD Signal');
title('Activity for Voxels with Different Stimulus Tuning')
set(gcf,'Position',[100 100 750 540])
subplot(211); colorbar


Selectivity matrix (top) for four theoretical voxels and GLM BOLD signals (bottom) for a simple experiment

The top subpanel in the simulation output visualizes the selectivity matrix defined for the four theoretical voxels. The bottom subpanel plots the columns of the $T \times V$ matrix of voxel responses $y$. We see that the maximum response of the strongly-tuned visual voxel (plotted in blue) is larger than that of the other voxels, corresponding to the large weight in the upper left of the selectivity matrix. Also note that the response of the unselective voxel (plotted in cyan) demonstrates the linearity property of the FIR model: the attenuated but complex BOLD signal from the unselective voxel results from the sum of small independent signals evoked by each stimulus.

## Modeling Voxel Noise

The example above demonstrates how we can model BOLD signals evoked in noiseless theoretical voxels. Though this noiseless scenario is helpful for developing a modeling framework, real-world voxels exhibit variable amounts of noise (noise being any signal that cannot be accounted for by the FIR model). Therefore we need to incorporate a noise term into our BOLD signal model.

The noise in a voxel is often modeled as a random variable $\epsilon$. A common choice for the noise model is a zero-mean Normal/Gaussian distribution with some variance $\sigma^2$:

$\epsilon \sim \mathcal N(0,\sigma^2)$

Though the variance of the noise model may not be known a priori, there are methods for estimating it from data. We'll get to estimating noise variance in a later post when we discuss various sources of noise and how to account for them using more advanced techniques. For simplicity, let's just assume that the noise variance is 1 as we proceed.
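
(Though we won't need it here, one standard way to estimate the noise variance is from the residuals of a fitted model, dividing the residual sum of squares by the degrees of freedom. The following is a sketch under the GLM assumptions, not code from this post; it assumes a design matrix X and a single voxel's measured signal y are in the workspace:

% ESTIMATE NOISE VARIANCE FROM GLM RESIDUALS (SKETCH)
betaHat = (X'*X)\(X'*y);                  % OLS ESTIMATE
e = y - X*betaHat;                        % RESIDUALS
sigma2Hat = (e'*e)/(size(X,1) - rank(X))  % UNBIASED VARIANCE ESTIMATE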

## Putting It All Together — The General Linear Model (GLM)

So far we have introduced the concepts of the stimulus matrix, the HRF, the design matrix, the selectivity matrix, and the noise model. We can combine all of these to compose a comprehensive quantitative model of the BOLD signals measured from a set of voxels during an experiment:

$y = X\beta + \epsilon \\ = (D * h)\beta + \epsilon$

This is referred to as the General Linear Model (GLM).

In a typical fMRI experiment the researcher controls the stimulus presentation $D$ and measures the evoked BOLD responses $y$ from a set of voxels. The problem then is to estimate the selectivities of the voxels based on these measurements. Specifically, we want to determine the parameters $\hat \beta$ that best explain the measured BOLD signals during our experiment. The most common way to do this is a method known as Ordinary Least Squares (OLS) regression. The idea behind OLS is to adjust the values of $\hat \beta$ such that the predicted model BOLD signals are as similar to the measured signals as possible; in other words, the goal is to infer the selectivity each voxel would have to exhibit in order to produce the measured BOLD signals. I showed in an earlier post that the optimal OLS solution for the selectivities $\hat \beta$ is given by:

$\hat \beta = (X^T X)^{-1} X^T y$

Therefore, given a design matrix $X$ and a set of voxel responses $y$ associated with the design matrix, we can calculate the selectivities of voxels to the stimulus conditions represented by the columns of the design matrix. This works even when the BOLD signals are noisy. To get a better idea of this process at work let’s look at a quick example based on our toy fMRI experiment.

## Example: Recovering Voxel Selectivity Using OLS

Here the goal is to recover the selectivities of the four voxels in our toy experiment after their responses have been corrupted with noise. First, we add noise to the voxel responses. In this example the level of the added noise is based on a concept known as signal-to-noise ratio, or SNR. As the name suggests, SNR is the ratio of the underlying signal to the noise "on top of" the signal. SNR is a very important concept when interpreting fMRI analyses: if a voxel exhibits a low SNR, it will be far more difficult to estimate its tuning. Though there are many ways to define SNR, in this example it is defined as the ratio of the maximum signal amplitude to the standard deviation of the noise. The noise standard deviation is adjusted to be one-fifth of the maximum amplitude of the BOLD signal, i.e. an SNR of 5. Feel free to try different values of SNR by changing the value of the variable $\text{SNR}$ in the Matlab simulation. Noisy versions of the 4 model BOLD signals are plotted in the top subpanel of the figure below. We see that the noisy signals are very different from the actual underlying BOLD signals.

Noisy BOLD signals from 4 voxels (top) and GLM predictions (bottom) of the underlying BOLD signals

Here we estimate the selectivities $\hat \beta$ from the GLM using OLS and then predict the BOLD signals in our experiment from this estimate. We see in the bottom subpanel of the above figure that the resulting GLM predictions are quite accurate. We also compare the estimated selectivity matrix $\hat \beta$ to the actual selectivity matrix $\beta$ below. We see that OLS is able to recover the selectivity of all the voxels.

Actual (top) and estimated (bottom) selectivity matrices.

% SIMULATE NOISY VOXELS & ESTIMATE TUNING
% (ASSUMING VARIABLES FROM ABOVE STILL IN WORKSPACE)

SNR = 5; % (APPROX.) SIGNAL-TO-NOISE RATIO
noiseSTD = max(y0(:))./SNR; % NOISE LEVEL FOR EACH VOXEL
noise = bsxfun(@times,randn(size(y0)),noiseSTD);
y = y0 + noise;

betaHat = inv(X'*X)*X'*y % OLS
yHat = X*betaHat; % GLM PREDICTION

figure
subplot(211);
plot(y,'Linewidth',3);
xlabel('Time (s)'); ylabel('BOLD Signal');
legend({'Visual Voxel','Auditory Voxel','Somato. Voxel','Unselective'});
title('Noisy Voxel Responses');

subplot(212)
h1 = plot(y0,'Linewidth',3); hold on
h2 = plot(yHat,'-o');
legend([h1(end),h2(end)],{'Actual Responses','Predicted Responses'})
xlabel('Time (s)'); ylabel('BOLD Signal');
title('Model Predictions')
set(gcf,'Position',[100 100 750 540])

figure
subplot(211);
imagesc(beta); colormap(hot(5));
axis tight
ylabel('Condition')
set(gca,'YTickLabel',{'Visual','Auditory','Somato.'})
xlabel('Voxel');
set(gca,'XTick',1:4)
title('Actual Selectivity, \beta')

subplot(212)
imagesc(betaHat); colormap(hot(5));
axis tight
ylabel('Condition')
set(gca,'YTickLabel',{'Visual','Auditory','Somato.'})
xlabel('Voxel');
set(gca,'XTick',1:4)
title('Noisy Estimated Selectivity')
drawnow


## Wrapping Up

Here we introduced the GLM commonly used for fMRI data analysis and used the GLM framework to recover the selectivities of simulated voxels. We saw that the GLM is quite powerful at recovering selectivity in the presence of noise. However, there are a few details left out of the story.

First, we assumed that we had an accurate (indeed, exact) model for each voxel's HRF. This is generally not the case. In real-world scenarios the HRF is either assumed to have some canonical shape, or the shape of the HRF is estimated from the experimental data. Though assuming a canonical HRF shape has been validated for block-design studies of peripheral sensory areas, this assumption becomes dangerous when using event-related designs, or when studying other areas of the brain.

Additionally, we did not include any physiological noise signals in our theoretical voxels. In real voxels, BOLD signal changes due to physiological processes such as breathing and heartbeat can be far larger than the signal change due to underlying neural activation. It then becomes necessary to either account for these nuisance signals in the GLM framework, or remove them before using the model described above. In two upcoming posts we'll discuss these two issues: estimating the HRF shape from data, and dealing with nuisance signals.

## Derivation: Ordinary Least Squares Solution and Normal Equations

In a linear regression framework, we assume some output variable $y$ is a linear combination of some independent input variables $X$ plus some independent noise $\epsilon$. The way the independent variables are combined is defined by a parameter vector $\beta$:

$\Large{\begin{array}{rcl} y &=& X \beta + \epsilon \end{array}}$

We also assume that the noise term $\epsilon$ is drawn from a standard Normal distribution:

$\Large{ \begin{array}{rcl}\epsilon &\sim& N(0,I)\end{array}}$

For some estimate of the model parameters $\hat \beta$, the model's prediction errors/residuals $e$ are the difference between the model prediction and the observed output values:

$\Large{\begin{array}{rcl} e = y - X\hat \beta \end{array}}$

The Ordinary Least Squares (OLS) solution to the problem (i.e. determining an optimal $\hat \beta$) involves minimizing the sum of the squared errors with respect to the model parameters $\hat \beta$. The sum of squared errors is equal to the inner product of the residuals vector with itself, $\sum_i e_i^2 = e^Te$:

$\Large{\begin{array}{rcl} e^T e &=& (y - X \hat \beta)^T (y - X \hat \beta) \\ &=& y^Ty - y^T (X \hat \beta) - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\ &=& y^Ty - (X \hat \beta)^T y - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\ &=& y^Ty - 2(X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\ &=& y^Ty - 2\hat \beta^T X^T y + \hat \beta^T X^T X \hat \beta \\ \end{array}}$

To determine the parameters, $\hat \beta$, we minimize the sum of squared residuals with respect to the parameters.

$\Large{\begin{array}{rcl} \frac{\partial}{\partial \hat \beta} \left[ e^T e \right] &=& -2X^Ty + 2X^TX \hat \beta = 0 \text{, and thus} \\ X^Ty &=& X^TX \hat \beta \end{array}}$

Here we use the identities $\frac{\partial \mathbf{a}^T \mathbf{b}}{\partial \mathbf{a}} = \mathbf{b}$ and $\frac{\partial \mathbf{a}^T B \mathbf{a}}{\partial \mathbf{a}} = 2B\mathbf{a}$ for a symmetric matrix $B$ (here $B = X^TX$). This relationship is the matrix form of the Normal Equations. Solving for $\hat \beta$ gives the analytical solution to the Ordinary Least Squares problem:

$\Large{\begin{array}{rcl} \hat \beta &=& (X^TX)^{-1}X^Ty \end{array}}$
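
As a quick numerical check of this result, the following sketch (with made-up data) confirms that the closed-form solution matches Matlab's built-in least-squares solve:

% NUMERICAL CHECK OF THE OLS SOLUTION (ILLUSTRATIVE DATA)
X = randn(100,3);                 % INDEPENDENT VARIABLES
beta = [2; -1; .5];               % ILLUSTRATIVE TRUE PARAMETERS
y = X*beta + randn(100,1);        % RESPONSES WITH STANDARD NORMAL NOISE
betaHat = (X'*X)\(X'*y);          % CLOSED-FORM OLS SOLUTION
betaBackslash = X\y;              % MATLAB'S LEAST-SQUARES SOLVE
max(abs(betaHat - betaBackslash)) % SHOULD BE NEAR ZERO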

Boom.