
fMRI in Neuroscience: Modeling the HRF With FIR Basis Functions

In the previous post on fMRI methods, we discussed how to model the selectivity of a voxel using the General Linear Model (GLM). One of the basic assumptions required to use the GLM is that we have an accurate model of the Hemodynamic Response Function (HRF) for the voxel. A common practice is to use a canonical HRF model established from previous empirical studies of fMRI timeseries. However, voxels throughout the brain and across subjects exhibit HRFs with a variety of shapes, so the canonical model is often incorrect. It therefore becomes necessary to estimate the shape of the HRF for each voxel.

A number of methods have been developed for estimating HRFs, most of which are based on temporal basis function models. (For details on basis function models, see this previous post.) There are a number of basis function sets available, but in this post we’ll discuss modeling the HRF using a flexible basis set composed of a series of delayed impulses: the Finite Impulse Response (FIR) basis.

Modeling HRFs With a Set of Time-delayed Impulses

Let’s say that we have an HRF with the following shape.

A Model HRF.

We would like to model the HRF as a weighted combination of simple basis functions. The simplest such set is the FIR basis: a series of H distinct unit-magnitude (i.e. equal to one) impulses, each delayed in time by t = 1 \dots H TRs. An example of modeling the HRF above using the FIR basis is below:

%% REPRESENTING AN HRF WITH FIR BASIS FUNCTIONS
% CREATE ACTUAL HRF (AS MEASURED BY MRI SCANNER)
rand('seed',12345)
TR = 1;                             % REPETITION TIME
t = 1:TR:20;                        % MEASUREMENTS
h = gampdf(t,6) + -.5*gampdf(t,10); % ACTUAL HRF
h = h/max(h);

% DISPLAY THE HRF
figure;
stem(t,h,'k','Linewidth',2)
axis square
xlabel('Time (TR)')
ylabel(sprintf('Basis Function Contribution\nTo HRF'))
title(sprintf('HRF as a Series of \nWeighted FIR Basis Functions'))

% CREATE/DISPLAY FIR REPRESENTATION
figure; hold on
cnt = 1;

% COLOR BASIS FUNCTIONS ACCORDING TO HRF WEIGHT
map = jet(64);
cRange = linspace(min(h),max(h),64);

for iT = numel(h):-1:1
	firSignal = ones(size(h));
	firSignal(cnt) = 2;
	[~,cIdx] = min(abs(cRange-h(cnt)));
	color = map(cIdx,:);
	plot(1:numel(h),firSignal + 2*(iT-1),'Color',color,'Linewidth',2)
	cnt = cnt+1;
end
colormap(map); colorbar; caxis([min(h) max(h)]);

% DISPLAY
axis square;
ylabel('Basis Function')
xlabel('Time (TR)')
set(gca,'YTick',1:2:39,'YTickLabel',20:-1:1)
title(sprintf('Weighted FIR Basis\n Set (20 Functions)'));
Representing the HRF above as a weighted set of FIR basis functions. The color of each of the 20 basis functions corresponds to its weight.

Each basis function b_t contains a unit impulse at time t = 1 \dots 20 and is zero elsewhere. Weighting each basis function b_t by the value of the HRF at time point t, then summing across all the functions, gives the target HRF in the first plot above. The FIR basis model makes no assumptions about the shape of the HRF (the weight applied to each basis function can take any value), which allows the model to capture a wide range of HRF profiles.
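To make the weighted-sum picture concrete, note that the FIR basis matrix is simply the identity matrix: weighting each impulse by the HRF value at its time point and summing reconstructs the HRF exactly. A minimal sketch, reusing the variable h defined in the code above:

%% VERIFY: WEIGHTED SUM OF FIR BASES RECONSTRUCTS THE HRF
B = eye(numel(h));       % EACH COLUMN IS ONE UNIT IMPULSE (FIR BASIS FUNCTION)
w = h(:);                % WEIGHTS = HRF VALUE AT EACH TIME POINT
hRecon = B*w;            % WEIGHTED SUM OF THE BASIS FUNCTIONS
max(abs(hRecon - h(:)))  % SHOULD BE EXACTLY ZERO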

Given an experiment where various stimuli are presented to a subject and BOLD responses are evoked within the subject’s brain, the goal is to determine the HRF to each of the stimuli within each voxel. Let’s take a look at a concrete example of how we can use the FIR basis to simultaneously estimate HRFs to many stimuli for multiple voxels with distinct tuning properties.

Estimating the HRF of Simulated Voxels Using the FIR Basis

For this example we revisit a simulation of voxels with 4 different types of tuning (for details, see the previous post on fMRI in Neuroscience). One voxel is strongly tuned for visual stimuli (such as a light), the second voxel is weakly tuned for auditory stimuli (such as a tone), the third is moderately tuned for somatosensory stimuli (such as warmth applied to the palm), and the final voxel is unselective (i.e. weakly and equally selective for all three types of stimuli). We simulate an experiment in which a series of non-overlapping lights, tones, and applications of warmth to the palm evokes blood-oxygen-level dependent (BOLD) signals in each voxel, measured over T = 320 fMRI measurements (TRs). Below is the simulation of the experiment and the resulting simulated BOLD signals:

%% SIMULATE AN EXPERIMENT
% SOME CONSTANTS
trPerStim = 30;
nRepeat = 10;
nTRs = trPerStim*nRepeat + length(h);
nCond = 3;
nVox = 4;
impulseTrain0 = zeros(1,nTRs);

% RANDOM ONSET TIMES (TRs)
onsetIdx = randperm(nTRs-length(h));

% VISUAL STIMULUS
impulseTrainLight = impulseTrain0;
impulseTrainLight(onsetIdx(1:nRepeat)) = 1;
onsetIdx(1:nRepeat) = [];

% AUDITORY STIMULUS
impulseTrainTone = impulseTrain0;
impulseTrainTone(onsetIdx(1:nRepeat)) = 1;
onsetIdx(1:nRepeat) = [];

% SOMATOSENSORY STIMULUS
impulseTrainHeat = impulseTrain0;
impulseTrainHeat(onsetIdx(1:nRepeat)) = 1;

% EXPERIMENT DESIGN / STIMULUS SEQUENCE
D = [impulseTrainLight',impulseTrainTone',impulseTrainHeat'];
X = conv2(D,h');
X = X(1:nTRs,:);

%% SIMULATE RESPONSES OF VOXELS WITH VARIOUS SELECTIVITIES
visualTuning =   [4 0 0]; % VISUAL VOXEL TUNING
auditoryTuning = [0 2 0]; % AUDITORY VOXEL TUNING
somatoTuning =   [0 0 3]; % SOMATOSENSORY VOXEL TUNING
noTuning =       [1 1 1]; % NON-SELECTIVE

beta = [visualTuning', ...
        auditoryTuning', ...
        somatoTuning', ...
        noTuning'];

y0 = X*beta;
SNR = 5;
noiseSTD = max(y0)/SNR;
noise = bsxfun(@times,randn(size(y0)),noiseSTD);
y = y0 + noise; % VOXEL RESPONSES

% DISPLAY VOXEL TIMECOURSES
voxNames = {'Visual','Auditory','Somat.','Unselective'};
cols = lines(4);
figure;
for iV = 1:4
	subplot(4,1,iV)
	plot(y(:,iV),'Color',cols(iV,:),'Linewidth',2); xlim([0,nTRs]);
	ylabel('BOLD Signal')
	legend(sprintf('%s Voxel',voxNames{iV}))
end
xlabel('Time (TR)')
set(gcf,'Position',[100,100,880,500])
Four simulated voxels with strong visual, weak auditory, moderate somatosensory, and unselective tuning, respectively.

Now let’s estimate the HRF of each voxel to each of the C = 3 stimulus conditions using an FIR basis function model. To do so, we create a design matrix composed of successive sets of delayed impulses, where each set of impulses begins at the onset of each stimulus condition. From the [T \times C]-sized stimulus onset matrix D, we calculate a [T \times HC] FIR design matrix X_{FIR}, where H is the assumed length of the HRF we are trying to estimate. The code for creating and displaying the design matrix for an assumed HRF length H=16 is below:

%% ESTIMATE HRF USING FIR BASIS SET
% CREATE FIR DESIGN MATRIX
hrfLen = 16;  % WE ASSUME HRF IS 16 TRS LONG

% BASIS SET FOR EACH CONDITION IS A TRAIN OF IMPULSES
X_FIR = zeros(nTRs,hrfLen*nCond);

for iC = 1:nCond
	onsets = find(D(:,iC));
	idxCols = (iC-1)*hrfLen+1:iC*hrfLen;
	for jO = 1:numel(onsets)
		idxRows = onsets(jO):onsets(jO)+hrfLen-1;
		for kR = 1:numel(idxRows)
			X_FIR(idxRows(kR),idxCols(kR)) = 1;
		end
	end
end

% DISPLAY
figure;
subplot(121);
imagesc(D);
colormap gray;
set(gca,'XTick',1:3,'XTickLabel',{'Light','Tone','Som.'})
title('Stimulus Train');

subplot(122);
imagesc(X_FIR);
colormap gray;
title('FIR Design Matrix');
set(gca,'XTick',[8,24,40])
set(gca,'XTickLabel',{'Light','Tone','Som.'})
set(gcf,'Position',[100,100,550,400])

Left: the stimulus onset matrix (size = [T x 3]). Right: the corresponding FIR design matrix (size = [T x 3H]).

In the right panel of the plot above, we see the form of the FIR design matrix X_{FIR} for the stimulus onset matrix on the left. For each voxel, we want to determine the weight on each column of X_{FIR} that best explains the BOLD signals y measured from that voxel. We can frame this problem as a General Linear Model:

y = X_{FIR}\beta_{FIR}

Where \beta_{FIR} are the weights on each column of the FIR design matrix. If we set the values of \beta_{FIR} so as to minimize the sum of squared errors (SSE) between the model above and the actual measured responses

SSE = \sum_{i=1}^{T}(y^{(i)} - X_{FIR}^{(i)}\beta_{FIR})^2,

then we can use the Ordinary Least Squares (OLS) solution discussed earlier to solve for \beta_{FIR}. Specifically, we solve for the weights as:

\hat \beta_{FIR} = (X_{FIR}^T X_{FIR})^{-1} X_{FIR}^T y

Once determined, the resulting [CH \times V] matrix of weights \hat \beta_{FIR} holds, along each of its columns, the HRFs of one of the V=4 voxels to each stimulus condition. The first H weights along a column (rows 1-16) define the HRF to the first stimulus (the light), the next H weights (rows 17-32) define the HRF to the second stimulus (the tone), and so on. Below we parse out these weights and display the resulting HRFs for each voxel:

% ESTIMATE HRF FOR EACH CONDITION AND VOXEL
betaHatFIR = pinv(X_FIR'*X_FIR)*X_FIR'*y;

% RESHAPE HRFS
hHatFIR = reshape(betaHatFIR,hrfLen,nCond,nVox);

% DISPLAY
figure
cols = lines(4);
names = {'Visual','Auditory','Somat.','Unselective'};
for iV = 1:nVox
	subplot(2,2,iV)
	hold on;
	for jC = 1:nCond
		hl = plot(1:hrfLen,hHatFIR(:,jC,iV),'Linewidth',2);
		set(hl,'Color',cols(jC,:))
	end
	hl = plot(1:numel(h),h,'Linewidth',2);
	xlabel('TR')
	legend({'Light','Tone','Heat','True HRF'})
	set(hl,'Color','k')
	xlim([0 hrfLen])
	grid on
	axis tight
	title(sprintf('%s Voxel',names{iV}));
end
set(gcf,'Position',[100,100,880,500])
HRF estimates for each voxel to each of the 3 stimuli. Black plots show the shape of the true HRF.

Here we see that the estimated HRFs accurately capture both the shape of the HRF and the selectivity of each voxel. For instance, the HRFs estimated from the responses of the first voxel indicate strong tuning for the light stimulus: the HRF estimated for the light has an amplitude approximately 4 times that of the true HRF, which matches the actual tuning of the voxel (compare this to the value of \beta(1,1)). Additionally, the time to the maximum value (time-to-peak) of the HRF to the light is the same as that of the true HRF. The first voxel’s HRFs estimated for the other stimuli are essentially noise around baseline, which (correctly) indicates that the first voxel has no selectivity for those stimuli. Inspection of the remaining estimated HRFs indicates that accurate tuning and HRF shape are recovered for the other three voxels as well.
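Since the true HRF was normalized to a peak of one, the peak of each estimated HRF should roughly recover the corresponding entry of the true tuning matrix. A quick, informal check, reusing hHatFIR and beta from the code above:

% INFORMAL CHECK: PEAK OF EACH ESTIMATED HRF APPROXIMATES THE TRUE TUNING
betaHatPeak = squeeze(max(hHatFIR,[],1)) % [nCond x nVox] ESTIMATED TUNING
beta                                     % TRUE [nCond x nVox] TUNING WEIGHTS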

Wrapping Up

In this post we discussed how to apply a simple basis function model (the FIR basis) to estimate the HRF profile and get an idea of the tuning of individual voxels. Though the FIR basis model can accurately model any HRF shape, it is often too flexible: in scenarios where voxel signals are very noisy, the FIR model will tend to fit the noise.

Additionally, the FIR basis set requires a basis function for each time point of the assumed HRF. For the example above, we assumed the HRF was 16 TRs long, so the FIR basis had 16 tunable weights for each condition, giving a GLM with 48 (C \times H = 3 \times 16) tunable parameters. For experiments with many different stimulus conditions, the number of parameters grows quickly (as HC). If the number of parameters is comparable to (or larger than) the number of BOLD signal measurements, it will be difficult to accurately estimate \hat \beta_{FIR}. As we’ll see in later posts, we can often improve upon the FIR basis set by using more clever basis functions.
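As a rough sanity check before fitting, one can compare the number of FIR parameters to the number of measurements and inspect the conditioning of the design. A minimal sketch, reusing nCond, hrfLen, nTRs, and X_FIR from the simulation above:

% ROUGH CHECK OF MODEL SIZE AND CONDITIONING
nParams = nCond*hrfLen;   % C x H TUNABLE WEIGHTS (48 HERE)
fprintf('%d parameters, %d measurements\n',nParams,nTRs)
% A VERY LARGE CONDITION NUMBER OF X_FIR'*X_FIR SIGNALS AN ILL-POSED ESTIMATE
conditionNumber = cond(X_FIR'*X_FIR)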

Another important but indirect issue that affects HRF estimation is the experimental design, or rather the schedule used to present the stimuli. In the example above, the stimuli were presented in a random, non-overlapping order. What if the stimuli were presented in the same order every time, at some fixed frequency? In a later post we’ll discuss the concept of design efficiency and how it affects our ability to characterize the shape of the HRF and, consequently, voxel selectivity.

Basis Function Models

Often we want to model data y that emerge from some underlying function f(x) of independent variables x, such that for some future input we’ll be able to accurately predict the corresponding output values. There are various methods for devising such a model, each of which makes particular assumptions about the types of functions the model can emulate. In this post we’ll focus on one set of methods called Basis Function Models (BFMs).

Basis Sets and Linear Independence

The idea behind BFMs is to model the complex target function f(x) as a linear combination of a set of simpler functions for which we have closed-form expressions. This set of simpler functions is called a basis set, and it works in a similar manner to the bases that compose vector spaces in linear algebra. For instance, any vector in the 2D spatial coordinate system (which is a vector space in \mathbb R^2) can be composed of linear combinations of the x and y directions. This is demonstrated in the figures below:

Illustration of basis vectors along the x (blue) and y (red) directions, along with a target vector (black).

Above we see a target vector in black pointing from the origin (at xy coordinates (0,0)) to the xy coordinates (2,3), along with the coordinate basis vectors b^{(x)} and b^{(y)}, each of which points one unit along the x (blue) and y (red) directions.

We can compose the target vector as a linear combination of the x and y basis vectors. Namely, the target vector can be composed by adding (in the vector sense) 2 times the basis b^{(x)} to 3 times the basis b^{(y)}:

Composing the target vector as a linear combination of the basis vectors.

One important thing to note about the bases b^{(x)} and b^{(y)} is that they are linearly independent: no matter how hard you try, you can’t compose the basis vector b^{(x)} as a linear combination of the other basis vector b^{(y)}, and vice versa. In the 2D vector space we can easily see this because the red and blue lines are perpendicular to one another (a condition called orthogonality). More formally, we can determine whether two (column) vectors are independent by calculating the (column) rank of the matrix A composed by concatenating the two vectors:

A = [b^{(x)}, b^{(y)}] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

The rank of a matrix is the number of linearly independent columns in the matrix. If the rank of A equals the number of columns in A, then the columns of A form a linearly independent set of vectors. The rank of A above is 2, as is the number of columns, so the basis vectors b^{(x)} and b^{(y)} are indeed linearly independent. We can use this same rank-based test to verify whether vectors of much higher dimension than two are independent. Linear independence of the basis set is important if we want to be able to define a unique model.

%% EXAMPLE OF COMPOSING A VECTOR OF BASIS VECTORS
figure;
targetVector = [0 0; 2 3];
basisX = [0 0; 1 0];
basisY = [0 0; 0 1];
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(basisX(:,1),basisX(:,2),'b','Linewidth',2);
hy = plot(basisY(:,1),basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction')
axis square
grid
legend([hv,hx,hy],{'Target','b^{(x)}','b^{(y)}'},'Location','bestoutside');

figure
hv = plot(targetVector(:,1),targetVector(:,2),'k','Linewidth',2);
hold on;
hx = plot(2*basisX(:,1),2*basisX(:,2),'b','Linewidth',2);
hy = plot(3*basisY(:,1),3*basisY(:,2),'r','Linewidth',2);
xlim([-4 4]); ylim([-4 4]);
xlabel('x-direction'), ylabel('y-direction');
axis square
grid
legend([hv,hx,hy],{'Target','2b^{(x)}','3b^{(y)}'},'Location','bestoutside')

% TEST TO SEE IF basisX AND basisY ARE
% LINEARLY INDEPENDENT
A = [1 0;
     0 1];

isIndependent = rank(A) == size(A,2)

Modeling Functions with Linear Basis Sets

In a similar fashion to creating arbitrary vectors with vector bases, we can compose arbitrary functions in “function space” as a linear combination of simpler basis functions (note that basis functions are also sometimes called kernels). One such set of basis functions is the set of polynomials:

b^{(i)} = x^i

Here each basis function is a polynomial of order i. We can then compose a basis set from the functions b^{(0)} through b^{(D)} and model the target function f(x) as a linear combination of these polynomial bases:

f(x) = \beta_0 b^{(0)} + \beta_1 b^{(1)} + \dots + \beta_D b^{(D)}

where \beta_i is the weight on the i-th basis function. In matrix format this model takes the form

f(x) = A \beta

Here, again, the matrix A is the concatenation of each of the polynomial bases along its columns. What we then want to do is determine the weights \beta such that A\beta is as close to f(x) as possible. We can do this using Ordinary Least Squares (OLS) regression, which was discussed in earlier posts. The optimal solution for the weights under OLS is:

\hat \beta = (A^T A)^{-1}A^T y
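In Matlab this OLS solution is a one-liner. Below is a small self-contained sketch for a generic basis matrix A; the variables here (N, D, x, y) are illustrative and not part of the example that follows, and the backslash operator is a numerically stabler alternative to explicitly inverting A^T A:

%% GENERIC OLS FIT OF BASIS-MODEL WEIGHTS (ILLUSTRATIVE DATA)
N = 50; D = 4;
x = linspace(0,1,N)';
A = bsxfun(@power,x,0:D-1);      % [N x D] POLYNOMIAL BASIS (DEGREES 0..D-1)
y = 2*x - x.^2 + .05*randn(N,1); % NOISY TARGET DATA
betaHat = (A'*A)\(A'*y)          % OLS WEIGHTS VIA THE NORMAL EQUATIONS
% EQUIVALENT, AND MORE STABLE: betaHat = A\y;
yHat = A*betaHat;                % MODEL FIT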

Let’s take a look at a concrete example, where we use a set of  polynomial basis functions to model a complex data trend.

Example: Modeling f(x) with Polynomial Basis Functions

In this example we model a set of data y whose underlying function f(x) is:

f(x) = cos(x/2) + sin(x)

In particular we’ll create a basis set of 10 polynomial functions and fit the \beta weights using OLS. The Matlab code for this example and the resulting graphical output are below:

Left: basis set of 10 (scaled) polynomial functions. Center: estimated model weights for the basis set. Right: underlying function f(x) (blue), data sampled from the model (black circles), and the linear basis model fit (red).

%% EXAMPLE: MODELING A TARGET FUNCTION
x = [0:.1:20]';
f = @(x) cos(.5*x) + sin(x); % TARGET FUNCTION f(x)

% CREATE A POLYNOMIAL BASIS SET
polyBasis = [];
nPoly = 10;
px = linspace(-10,10,numel(x))';
for iP = 1:nPoly
	polyParams = zeros(1,nPoly);
	polyParams(iP) = 1;
	polyBasis = [polyBasis,polyval(polyParams,px)];
end

% SCALE THE BASIS SET TO HAVE MAX AMPLITUDE OF 1
polyBasis = fliplr(bsxfun(@rdivide,polyBasis,max(polyBasis)));

% CHECK LINEAR INDEPENDENCE
isIndependent = rank(polyBasis) == size(polyBasis,2)

% SAMPLE SOME DATA FROM THE TARGET FUNCTION
randIdx = randperm(numel(x));
xx = x(randIdx(1:30));
y = f(xx) + randn(size(xx))*.2;

% FIT THE POLYNOMIAL BASIS MODEL TO THE DATA(USING polyfit.m)
basisWeights = polyfit(xx,y,nPoly);

% MODEL OF TARGET FUNCTION
yHat = polyval(basisWeights,x);

% DISPLAY BASIS SET AND MODEL
subplot(131)
plot(polyBasis,'Linewidth',2)
axis square
xlim([0,numel(px)])
ylim([-1.2 1.2])
title(sprintf('Polynomial Basis Set\n(%d Functions)',nPoly))

subplot(132)
bar(fliplr(basisWeights));
axis square
xlim([0 nPoly + 1]); colormap hot
xlabel('Basis Function')
ylabel('Estimated Weight')
title('Model Weights on Basis Functions')

subplot(133);
hy = plot(x,f(x),'b','Linewidth',2); hold on
hd = scatter(xx,y,'ko');
hh = plot(x,yHat,'r','Linewidth',2);
xlim([0,max(x)])
axis square
legend([hy,hd,hh],{'f(x)','y','Model'},'Location','Best')
title('Model Fit')
hold off;

First off, let’s make sure that the polynomial basis is indeed linearly independent. As above, we compute the rank of the matrix composed of the basis functions along its columns. The rank of the basis matrix is 10, which is also the number of columns in the matrix (the isIndependent check in the code above), so the basis functions are indeed linearly independent.

We fit the model using Matlab’s built-in function \text{polyfit.m}, which performs OLS on the polynomial basis set matrix. We see that the basis set of 10 polynomial functions (including the zeroth-order bias term) does a pretty good job of modeling a very complex function f(x). We essentially get to model a highly nonlinear function using simple linear regression (i.e. OLS).
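To see that \text{polyfit.m} really is doing least squares on a polynomial basis matrix, we can compare its output to a direct least-squares solution on the corresponding Vandermonde-style basis matrix. A sketch reusing xx, y, and nPoly from the example above; the two solutions should agree to within numerical precision:

% VERIFY polyfit.m IS LEAST SQUARES ON A POLYNOMIAL BASIS MATRIX
A = bsxfun(@power,xx,nPoly:-1:0);   % BASIS MATRIX, HIGHEST DEGREE FIRST
wLS = (A\y)';                       % QR-BASED LEAST-SQUARES SOLUTION
max(abs(wLS - polyfit(xx,y,nPoly))) % SHOULD BE NUMERICALLY TINY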

Wrapping Up

Though the polynomial basis set works well in many modeling problems, it may be a poor fit for some applications. Luckily we aren’t limited to using only polynomial basis functions. Other basis sets include Gaussian basis functions, sigmoid basis functions, and finite impulse response (FIR) basis functions, just to name a few (in a future post we’ll demonstrate how the FIR basis set can be used to model the hemodynamic response function (HRF) of an fMRI voxel measured from the brain).
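As a taste of these alternatives, here is a minimal sketch of a Gaussian basis set; the centers and bandwidth below are illustrative choices, not prescribed values. Each basis function is a smooth bump centered at a different location, and weighted sums of such bumps can model smooth functions:

%% SKETCH: A GAUSSIAN BASIS SET (ILLUSTRATIVE PARAMETERS)
x = linspace(0,20,200)';
centers = linspace(0,20,10); % BASIS FUNCTION CENTERS (ASSUMED)
width = 2;                   % COMMON BANDWIDTH (ASSUMED)
gaussBasis = exp(-bsxfun(@minus,x,centers).^2/(2*width^2));
figure; plot(x,gaussBasis,'Linewidth',2)
title('Gaussian Basis Set (10 Functions)'); xlabel('x')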