
A Brief Introduction to Markov Chains

Markov chains are an essential component of Markov chain Monte Carlo (MCMC) techniques. Under MCMC, the Markov chain is used to sample from some target distribution. To get a better understanding of what a Markov chain is, and further, how it can be used to sample from a distribution, this post introduces and applies a few basic concepts.

A Markov chain is a stochastic process that operates sequentially (e.g. temporally), transitioning from one state to another within an allowed set of states.

x^{(0)} \rightarrow x^{(1)} \rightarrow x^{(2)} \dots \rightarrow x^{(t)} \rightarrow \dots

A Markov chain is defined by three elements:

  1. A state space x, which is the set of values that the chain is allowed to take
  2. A transition operator p(x^{(t+1)} | x^{(t)}) that defines the probability of moving from state x^{(t)} to state x^{(t+1)}
  3. An initial condition distribution \pi^{(0)}, which defines the probability of being in any one of the possible states at the initial iteration t = 0

The Markov chain starts at some initial state, which is sampled from \pi^{(0)}, then transitions from one state to another according to the transition operator p(x^{(t+1)} | x^{(t)}).

A Markov chain is called memoryless if the next state depends only on the current state and not on any of the states preceding it:

p(x^{(t+1)}|x^{(t)},x^{(t-1)},\dots,x^{(0)}) = p(x^{(t+1)}|x^{(t)})

(This memoryless property is formally known as the Markov property.)

If the transition operator for a Markov chain does not change across transitions, the Markov chain is called time homogeneous. A nice property of time homogeneous Markov chains is that as the chain runs for a long time and t \rightarrow \infty, the chain will reach an equilibrium that is called the chain’s stationary distribution: the probability of the chain being in any given state stops changing from one transition to the next:

p(x^{(t+1)}) = p(x^{(t)})

We’ll see later how the stationary distribution of a Markov chain is important for sampling from probability distributions, a technique that is at the heart of Markov Chain Monte Carlo (MCMC) methods.

Finite state-space (time homogeneous) Markov chain

If the state space of a Markov chain takes on a finite number of distinct values, and the chain is time homogeneous, then the transition operator can be defined by a matrix P, where the entries of P are:

p_{ij} = p(X^{(t+1)} = j | X^{(t)} = i)

This means that if the chain is currently in the i-th state, the transition operator assigns the probability of moving to the j-th state according to the entries of the i-th row of P (i.e. each row of P defines a conditional probability distribution over the state space). Let’s take a look at a finite state-space Markov chain in action with a simple example.

Example: Predicting the weather with a finite state-space Markov chain

In Berkeley, CA, there are (literally) only 3 types of weather: sunny, foggy, or rainy (this is analogous to a state-space that takes on three discrete values). The weather patterns are very stable there, so a Berkeley weatherman can easily predict the weather next week based on the weather today with the following transition rules:

If it is sunny today, then

  • it is highly likely that it will be sunny next week
    • p(X^{(week)}=sunny | X^{(today)}=sunny)=0.8,
  • it is very unlikely that it will be raining next week
    • p(X^{(week)}=rainy | X^{(today)}=sunny)=0.05
  • and somewhat likely that it will be foggy next week
    • p(X^{(week)}=foggy | X^{(today)}=sunny)=0.15

If it is foggy today then

  • it is somewhat likely that it will be sunny next week
    • p(X^{(week)}=sunny | X^{(today)}=foggy)=0.4
  • but slightly less likely that it will be foggy next week
    • p(X^{(week)}=foggy | X^{(today)}=foggy)=0.5,
  • and fairly unlikely that it will be raining next week.
    • p(X^{(week)}=rainy | X^{(today)}=foggy)=0.1,

If it is rainy today then

  • it is unlikely that it will be sunny next week
    • p(X^{(week)}=sunny | X^{(today)}=rainy)=0.1,
  • it is somewhat likely that it will be foggy next week
    • p(X^{(week)}=foggy | X^{(today)}=rainy)=0.3,
  • and it is fairly likely that it will be rainy next week
    • p(X^{(week)}=rainy | X^{(today)}=rainy)=0.6,

All of these transition rules can be instantiated in a single 3 x 3 transition operator matrix:

P = \begin{bmatrix} 0.8 & 0.15 & 0.05\\ 0.4 & 0.5 & 0.1\\ 0.1& 0.3 & 0.6 \end{bmatrix}

Here each row of P corresponds to the weather at iteration t (today), and each column corresponds to the weather the next week. Let’s say that it is rainy today: what is the probability it will be sunny next week, in two weeks, or in 6 months? We can answer these questions by running a Markov chain from the initial state of “rainy,” transitioning according to P. The following chunk of MATLAB code runs the Markov chain.

% FINITE STATE-SPACE MARKOV CHAIN EXAMPLE

% TRANSITION OPERATOR
%     S  F  R
%     U  O  A
%     N  G  I
%     N  G  N
%     Y  Y  Y
P = [.8 .15 .05;  % SUNNY
     .4 .5  .1;   % FOGGY
     .1 .3  .6];  % RAINY
nWeeks = 25;

% INITIAL STATE IS RAINY
X = zeros(nWeeks,3); % PREALLOCATE STATE DISTRIBUTIONS OVER WEEKS
X(1,:) = [0 0 1];

% RUN MARKOV CHAIN
for iB = 2:nWeeks
    X(iB,:) = X(iB-1,:)*P; % TRANSITION
end

% DISPLAY
figure; hold on
h(1) = plot(1:nWeeks,X(:,1),'r','Linewidth',2);
h(2) = plot(1:nWeeks,X(:,2),'k','Linewidth',2);
h(3) = plot(1:nWeeks,X(:,3),'b','Linewidth',2);
h(4) = plot([15 15],[0 1],'g--','Linewidth',2);
hold off
legend(h, {'Sunny','Foggy','Rainy','Burn In'});
xlabel('Week')
ylabel('p(Weather)')
xlim([1,nWeeks]);
ylim([0 1]);

% PREDICTIONS
fprintf('\np(weather) in 1 week -->'), disp(X(2,:))
fprintf('\np(weather) in 2 weeks -->'), disp(X(3,:))
fprintf('\np(weather) in 6 months -->'), disp(X(25,:))

Finite state-space Markov chain for predicting the weather

Here we see that at week 1 the probability of sunny weather is 0.1. The next week, the probability of sunny weather is 0.26, and in 6 months, there is a 60% chance that it will be sunny. Also note that after approximately 15 weeks the Markov chain has reached the equilibrium/stationary distribution and, chances are, the weather will be sunny. This 15-week period is what is known as the burn in period for the Markov chain, and is the number of transitions it takes the chain to move from the initial conditions to the stationary distribution.
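Note that the code above propagates the full probability distribution over states from week to week rather than simulating a single concrete weather sequence. If we instead want to draw an actual sequence of states from the chain, we can sample each next state from the row of P corresponding to the current state. Here is a minimal sketch of that idea (my own addition; the variable names are mine, reusing the P defined above):

% SAMPLE A CONCRETE SEQUENCE OF WEATHER STATES FROM THE CHAIN
nWeeks = 25;
stateSeq = zeros(1,nWeeks);
stateSeq(1) = 3; % START IN STATE 3 (RAINY)
for iW = 2:nWeeks
    % SAMPLE THE NEXT STATE FROM THE CURRENT ROW OF P
    stateSeq(iW) = find(rand < cumsum(P(stateSeq(iW-1),:)),1);
end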

A cool thing about finite state-space time-homogeneous Markov chains is that it is not necessary to run the chain sequentially through all iterations in order to predict a state in the future. Instead, we can predict a future state by raising the transition operator to the T-th power, where T is the iteration at which we want to predict, then left-multiplying the result by the distribution over the initial state, \pi^{(0)}. For instance, to predict the probability of the weather in 2 weeks, knowing that it is rainy today (i.e. \pi^{(0)} = [0,0,1]):

p(x^{(week2)}) = \pi^{(0)}P^2 = [0.26, 0.345, 0.395]

and in six months:

p(x^{(week24)}) = \pi^{(0)}P^{24} = [0.596, 0.263, 0.140]

These are the same results we get by running the Markov chain sequentially through each number of transitions. Therefore we can calculate an approximation to the stationary distribution from P by setting T to a large number. It turns out that it is also possible to analytically derive the stationary distribution from P (hint: think about the properties of eigenvectors).
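Both shortcuts are easy to verify in MATLAB. The short check below (my own addition, reusing the P defined above) computes the two predictions via matrix powers, then recovers the stationary distribution as the left eigenvector of P associated with the eigenvalue 1:

% PREDICTIONS VIA MATRIX POWERS
pi0 = [0 0 1];          % IT IS RAINY TODAY
pi0*P^2                 % p(weather) IN 2 WEEKS
pi0*P^24                % p(weather) IN 6 MONTHS

% STATIONARY DISTRIBUTION: LEFT EIGENVECTOR OF P WITH EIGENVALUE 1
[V,D] = eig(P');                 % LEFT EIGENVECTORS OF P ARE EIGENVECTORS OF P'
[~,iEig] = min(abs(diag(D)-1));  % LOCATE THE EIGENVALUE EQUAL TO 1
piStat = real(V(:,iEig))';       % CORRESPONDING EIGENVECTOR...
piStat = piStat/sum(piStat)      % ...NORMALIZED TO SUM TO ONE

The eigenvector calculation returns [0.596, 0.263, 0.140], the same distribution the chain settles into after burn in.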

Continuous state-space Markov chains

A Markov chain can also have a continuous state space that exists in the real numbers x \in \mathbb{R}^N. In this case the transition operator cannot be instantiated simply as a matrix, but is instead some continuous function on the real numbers. Note that the continuous state-space Markov chain also has a burn in period and a stationary distribution. However, the stationary distribution will also be over a continuous set of variables. To get a better understanding of the workings of a continuous state-space Markov chain, let’s look at a simple example.

Example: Sampling from a continuous distribution using continuous state-space Markov chains

We can use the stationary distribution of a continuous state-space Markov chain in order to sample from a continuous probability distribution: we run a Markov chain for a sufficient amount of time so that it has reached its stationary distribution, then keep the states that the chain visits as samples from that stationary distribution.

In the following example we define a continuous state-space Markov chain. The transition operator is a Normal distribution with unit variance and a mean that is half the distance between zero and the previous state, and the distribution over initial conditions is a Normal distribution with zero mean and unit variance.

To ensure that the chain has moved sufficiently far from the initial conditions and that we are sampling from the chain’s stationary distribution, we will throw away the first 50 burn in states of the chain. We can also run multiple chains simultaneously in order to sample the stationary distribution more densely. Here we choose to run 5 chains simultaneously.

% EXAMPLE OF CONTINUOUS STATE-SPACE MARKOV CHAIN

% INITIALIZE
randn('seed',12345)
nBurnin = 50; % # BURNIN
nChains = 5;  % # MARKOV CHAINS

% DEFINE TRANSITION OPERATOR
P = inline('normrnd(.5*x,1,1,nChains)','x','nChains');
nTransitions = 1000;
x = zeros(nTransitions,nChains);
x(1,:) = randn(1,nChains);

% RUN THE CHAINS
for iT = 2:nTransitions
    x(iT,:) = P(x(iT-1,:),nChains); % TRANSITION EACH CHAIN FROM ITS OWN PREVIOUS STATE
end

% DISPLAY BURNIN
figure
subplot(221); plot(x(1:100,:)); hold on;
minn = min(x(:));
maxx = max(x(:));
l = line([nBurnin nBurnin],[minn maxx],'color','k','Linewidth',2);
ylim([minn maxx])
legend(l,'~Burn-in','Location','SouthEast')
title('First 100 Samples'); hold off

% DISPLAY ENTIRE MARKOV CHAIN
subplot(223); plot(x);hold on;
l = line([nBurnin nBurnin],[minn maxx],'color','k','Linewidth',2);
legend(l,'~Burn-in','Location','SouthEast')
title('Entire Chain');

% DISPLAY SAMPLES FROM STATIONARY DISTRIBUTION
samples = x(nBurnin+1:end,:);
subplot(122);
[counts,bins] = hist(samples(:),100); colormap hot
b = bar(bins,counts);
legend(b,sprintf('Markov Chain\nSamples'));
title(['\mu=',num2str(mean(samples(:))),' \sigma^2=',num2str(var(samples(:)))])

Sampling from the stationary distribution of a continuous state-space Markov chain

In the upper left panel of the code output we see a close-up of the first 100 of the 1000 transitions made by the 5 simultaneous Markov chains; the burn in cutoff is marked by the black line. In the lower left panel we see the entire sequence of transitions for the Markov chains. In the right panel, we can tell from the sampled states that the stationary distribution for this chain is a Normal distribution, with mean equal to zero and a variance of approximately 1.3.
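As a sanity check (my own aside, not part of the original demo): this transition operator defines a first-order autoregressive process, x^{(t+1)} = 0.5 x^{(t)} + \epsilon with \epsilon \sim N(0,1), so the stationary variance \sigma^2 must satisfy the fixed point \sigma^2 = 0.5^2 \sigma^2 + 1, giving \sigma^2 = 1/(1 - 0.25) = 4/3 \approx 1.33 and a stationary mean of zero, in close agreement with the sampled values above.

% ANALYTIC STATIONARY VARIANCE OF THE AR(1) CHAIN
phi = 0.5;              % AUTOREGRESSIVE COEFFICIENT FROM THE TRANSITION OPERATOR
sigma2 = 1/(1 - phi^2)  % = 1.333..., CLOSE TO THE SAMPLED VARIANCE OF 1.3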

Wrapping Up

In the previous example we were able to deduce the stationary distribution of the Markov chain by looking at the samples generated from the chain after the burn in period. However, in order to use Markov chains to sample from a specific target distribution, we have to design the transition operator such that the resulting chain reaches a stationary distribution that matches the target distribution. This is where MCMC methods like the Metropolis sampler, the Metropolis-Hastings sampler, and the Gibbs sampler come to the rescue. We will discuss each of these Markov-chain-based sampling methods separately in later posts.


Monte Carlo Approximations

Monte Carlo Approximation for Integration

Using statistical methods we often run into integrals that take the form:

I = \int_a^b h(x)g(x) dx

For instance, the expected value of some function f(x) of a random variable X

\mathbb E[f(x)] = \int_{p(x)} p(x) f(x) dx

and many quantities essential for Bayesian methods, such as the marginal likelihood (a.k.a. “model evidence”)

p(x) = \int_\theta p(x | \theta) p(\theta) d\theta

involve integrals of this form. Sometimes (not often) such an integral can be evaluated analytically. When a closed-form solution does not exist, numerical integration methods can be applied. However, numerical methods quickly become intractable for any practical application that requires more than a small number of dimensions. This is where Monte Carlo approximation comes in. Monte Carlo approximation allows us to calculate an estimate for the value of I by transforming the integration problem into a procedure of sampling values from a tractable probability distribution and calculating the average of the integrand evaluated at those samples. Here’s what I mean:

If the function g(x) fulfills two simple criteria, namely that the function is non-negative on the interval (a,b)

g(x) \geq 0, x \in (a,b)

and that the integral of the function is finite

\int_a^b g(x) dx = C < \infty

then we can define a corresponding probability distribution on the interval (a,b):

p(x) = \frac{g(x)}{C}

Another way to think of it is that g(x) is a probability distribution scaled by a constant C.

Using this link between probability distributions p(x) and g(x), we can restate the original integration as

I = C \int_a^b h(x) p(x)dx = C \mathbb E_{p(x)}[h(x)]

This statement says that if we can sample values of x using p(x), then the value of the original integral I is simply a scaled version of the expected value of the integrand function h(x) calculated using those samples. It turns out that the expected value \mathbb E_{p(x)}[h(x)] can be easily approximated by the sample mean:

\mathbb E_{p(x)} [h(x)] \approx \frac{1}{N} \sum_{i=1}^N h(x_i)

where the samples x_i, i = 1, \dots, N are drawn independently from p(x). This leads to a simple 4-step procedure for performing Monte Carlo approximation to the integral I:

  1. Identify h(x)
  2. Identify g(x) and from it determine p(x) and C
  3. Draw N independent samples from p(x)
  4. Evaluate I = C \mathbb E_{p(x)}[h(x)] \approx \frac{C}{N} \sum_{i=1}^N h(x_i)

The larger the number of samples N we draw, the better our approximation to the actual value of I. This 4-step procedure is demonstrated in some toy examples below:

Example 1: Approximating the integral \int_0^1 xe^x dx

Say we want to calculate the integral:

I = \int_0^1 x e^x dx

We can calculate the closed form solution of this integral using integration by parts:

u = x, \quad dv = e^x dx

du = dx, v = e^x

and

I = uv - \int v du

= xe^x - \int e^x dx

= \left. xe^x - e^x \right|_0^1

= \left. e^x(x-1)\right|_0^1

= 0 - (-1) = 1

Or… we could calculate the Monte Carlo approximation of this integral.

Step 1 we identify

h(x) = xe^x

Step 2 we identify

g(x) = 1

and from this we can also determine the probability distribution function p(x) on the interval (a,b) = (0,1). According to the definition expression for p(x) given above, we determine p(x) to be:

p(x) = \frac{g(x)}{\int_a^b g(x)dx} = \frac{1}{b-a}.

Step 3: The expression on the right is the definition of the uniform distribution Unif(0,1), which is easy to sample from using the MATLAB function \text{rand()} (notice too that the constant C = b - a = 1).

Step 4: we calculate the Monte Carlo approximation as

I = C \mathbb E_{p(x)}[h(x)]

\approx \frac{1}{N}\sum_{i=1}^N x_i e^{x_i},

where each x_i is sampled from the standard uniform distribution. Below is some MATLAB code running the Monte Carlo Approximation for two different values of N

% MONTE CARLO APPROXIMATION OF INT(xexp(x))dx
% FOR TWO DIFFERENT SAMPLE SIZES
rand('seed',12345);

% THE FIRST APPROXIMATION USING N1 = 100 SAMPLES
N1 = 100;
x = rand(N1,1);
Ihat1 = sum(x.*exp(x))/N1

% A SECOND APPROXIMATION USING N2 = 5000 SAMPLES
N2 = 5000;
x = rand(N2,1);
Ihat2 = sum(x.*exp(x))/N2

Comparing the values of the variables \text{Ihat1} and \text{Ihat2} we see that the Monte Carlo approximation is better for a larger number of samples.
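Since we know from the derivation above that the true value of the integral is I = 1, we can also quantify this improvement directly. A quick check (my own addition, reusing N1, N2, Ihat1, and Ihat2 from the code above):

% COMPARE THE ABSOLUTE ERROR OF EACH APPROXIMATION TO THE TRUE VALUE I = 1
fprintf('N = %d: absolute error = %f\n',N1,abs(Ihat1 - 1))
fprintf('N = %d: absolute error = %f\n',N2,abs(Ihat2 - 1))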

Example 2: Approximating the expected value of the Beta distribution

Let’s look at how the 4-step Monte Carlo approximation procedure can be used to calculate expectations. In this example we will calculate

\mathbb E[x] = \int_{p(x)} p(x)x dx,

where x \sim p(x) = Beta(\alpha_1,\alpha_2).

Step 1: we identify h(x) = x.

Step 2: the function g(x) is simply the probability density function p(x), due to the expression for p(x) above:

p(x)^\star = \frac{p(x)}{\int p(x)dx} = p(x).

Step 3: we can use MATLAB to easily draw N independent samples from p(x) using the function \text{betarnd()}. And finally,

Step 4: we approximate the expectation with the expression

\mathbb E_{Beta(\alpha_1,\alpha_2)}[x] \approx \frac{1}{N} \sum_{i=1}^N x_i

Below is some MATLAB code that performs this approximation of the expected value.

rand('seed',12345);
alpha1 = 2; alpha2 = 10;
N = 10000;
x = betarnd(alpha1,alpha2,1,N);

% MONTE CARLO EXPECTATION
expectMC = mean(x);

% ANALYTIC EXPRESSION FOR BETA MEAN
expectAnalytic = alpha1/(alpha1 + alpha2);

% DISPLAY
figure;
bins = linspace(0,1,50);
counts = histc(x,bins);
probSampled = counts/sum(counts);
probTheory = betapdf(bins,alpha1,alpha2);
b = bar(bins,probSampled); colormap hot; hold on;
t = plot(bins,probTheory/sum(probTheory),'r','Linewidth',2);
m = plot([expectMC,expectMC],[0 .1],'g');
e = plot([expectAnalytic,expectAnalytic],[0 .1],'b');
xlim([expectAnalytic - .05,expectAnalytic + 0.05])
legend([b,t,m,e],{'Samples','Theory','$\hat{E}$','$E_{Theory}$'},'interpreter','latex');
title(['E_{MC} = ',num2str(expectMC),'; E_{Theory} = ',num2str(expectAnalytic)])
hold off

And the output of the code:

Monte Carlo Approximation of the Expected Value of Beta(2,10)


The analytical solution for the expected value of this Beta distribution:

\mathbb E_{Beta(2,10)}[x] = \frac{\alpha_1}{\alpha_1 + \alpha_2} = \frac{2}{12} = 0.167

is quite close to our approximation (also indicated by the small distance between the blue and green lines on the plot above).

Monte Carlo Approximation for Optimization

Monte Carlo Approximation can also be used to solve optimization problems of the form:

\hat x = \underset{x \in (a,b)}{\text{argmax }} g(x)

If g(x) fulfills the same criteria described above (namely that it is a scaled version of a probability distribution), then (as above) we can define the probability function

p(x) = \frac{g(x)}{C}

This allows us to instead solve the problem

\hat x = \underset{x}{\text{argmax }}C p(x)

If we can sample from p(x), the solution \hat x is easily found by drawing samples from p(x) and determining the location of the samples with the highest density (note that the solution does not depend on the value of C). The following example demonstrates Monte Carlo optimization:

Example: Monte Carlo Optimization of g(x) = e^{-\frac{(x-4)^2}{2}}

Say we would like to find the value of x_{opt} which optimizes the function g(x) = e^{-((x-4)^2)/2}. In other words we want to solve the problem

x_{opt} = \underset{x}{\text{argmax }} e^{-\frac{(x-4)^2}{2}}

We could solve for x_{opt} using standard calculus methods, but a clever trick is to use Monte Carlo approximation to solve the problem. First, notice that g(x) is a scaled version of a Normal distribution with mean equal to 4 and unit variance:

g(x) = C \times \frac{1}{\sqrt{2\pi}} e^{-\frac{(x-4)^2}{2}}

= C \times \mathcal N(4,1)

where C = \sqrt{2\pi}. This means we can solve for x_{opt} by drawing samples from the Normal distribution and determining where those samples have the highest density. The following chunk of MATLAB code solves the optimization problem in this way.

% MONTE CARLO OPTIMIZATION OF g(x) = exp(-(x-4)^2/2)
randn('seed',12345)

% INITIALIZE
N = 100000;
x = 0:.1:6;
C = sqrt(2*pi);
g = inline('exp(-.5*(x-4).^2)','x');
ph = plot(x,g(x)/C,'r','Linewidth',3); hold on
gh = plot(x,g(x),'b','Linewidth',2); hold on;

% CALCULATE MONTE CARLO APPROXIMATION
x = normrnd(4,1,1,N);
bins = linspace(min(x),max(x),100);
counts = histc(x,bins);
[~,optIdx] = max(counts);
xHat = bins(optIdx);

% OPTIMA AND ESTIMATED OPTIMA
oh = plot([4 4],[0,1],'k');
hh = plot([xHat,xHat],[0,1],'g');

set(gca,'fontsize',16)
legend([gh,ph,oh,hh],{'g(x)','$p(x)=\frac{g(x)}{C}$','$x_{opt}$','$\hat{x}$'},'interpreter','latex','Location','Northwest');

Monte Carlo Optimization

In the code output above we see the function g(x) we want to optimize in blue and the Normal distribution p(x) from which we draw samples in red. The Monte Carlo method provides a good approximation (green) to the real solution (black).

Wrapping Up

In the toy examples above it was easy to sample from p(x). However, for practical problems the distributions we want to sample from are often complex and operate in many dimensions. For these problems more clever sampling methods have to be used. Such sampling methods include Inverse Transform Sampling, Rejection Sampling, Importance Sampling, and Markov Chain Monte Carlo methods such as the Metropolis-Hastings algorithm and the Gibbs sampler, each of which I plan to cover in separate posts.

Sampling From the Normal Distribution Using the Box-Muller Transform

The Normal Distribution is the workhorse of many common statistical analyses and being able to draw samples from this distribution lies at the heart of many statistical/machine learning algorithms. There have been a number of methods developed to sample from the Normal distribution including Inverse Transform Sampling, the Ziggurat Algorithm, and the Ratio Method (a rejection sampling technique). In this post we will focus on an elegant method called the Box-Muller transform.

A quick review of Cartesian and polar coordinates

Before we can talk about using the Box-Muller transform, let’s refresh our understanding of the relationship between Cartesian and polar coordinates. You may remember from geometry that if x and y are the coordinates of a point in the Cartesian plane, that point can be represented in polar coordinates with a radius r and an angle \theta using the following relationships:

r^2 = x^2 + y^2

\tan(\theta) = \frac{y}{x}, and therefore

x = r\cos(\theta)

y = r\sin(\theta)

Notice that if r \leq 1 and \theta \in [0 , 2\pi], then we map out values contained in the unit circle, as shown in the figure below. Also note that random variables in such a circle can be generated by transforming values sampled from the uniform distribution. Specifically, radii can be sampled from r \sim Unif(0,1) and angle can be sampled from \theta \sim 2\pi \times Unif(0,1). A similar mechanism (i.e. drawing points in a circle using uniform variables) is at the heart of the Box-Muller transform for sampling Normal random variables.

Example of the relationship between Cartesian and polar coordinates
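Here is a minimal sketch of the transformation just described (my own illustration, with my own variable names): it draws radii and angles from the uniform distribution and plots the resulting points inside the unit circle. Note that points generated this way fall within the circle but are not uniformly distributed over its area; they concentrate near the origin.

% GENERATE RANDOM POINTS INSIDE THE UNIT CIRCLE FROM UNIFORM VARIABLES
nPoints = 2000;
r = rand(1,nPoints);           % RADII:  r ~ Unif(0,1)
theta = 2*pi*rand(1,nPoints);  % ANGLES: theta ~ 2*pi x Unif(0,1)
x = r.*cos(theta);             % TRANSFORM TO CARTESIAN COORDINATES
y = r.*sin(theta);
figure; plot(x,y,'b.'); axis equal
title('Random Points in the Unit Circle')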

Drawing Normally-distributed samples with the Box-Muller transform

Ok, now that we’ve discussed how Cartesian coordinates are represented in polar coordinates, let’s move on to how we can use this relationship to generate random variables. Box-Muller sampling is based on representing the joint distribution of two independent standard Normal random Cartesian variables X and Y

X \sim N(0,1)

Y \sim N(0,1)

in polar coordinates. The joint distribution p(x,y) (which is circular-symmetric) is:

p(x,y) = p(x)p(y) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}

= \frac{1}{2\pi}e^{-\frac{x^2 + y^2}{2}}

If we notice that the x^2 + y^2 term in the numerator of the exponent is equal to r^2 (as above), we can make the connection between the Cartesian representation of the joint Normal distribution and its polar representation:

p(x,y) = \left ( \frac{1}{2\pi} \right ) \left ( e^{\frac{-r^2}{2}} \right )

which is the product of two density functions, an exponential distribution over squared radii:

r^2 \sim Exp(\frac{1}{2})

and a uniform distribution over angles:

\theta \sim Unif(0,2\pi)

just like those mentioned above when generating points in the unit circle. Now, if we make another connection between the exponential distribution and the uniform distribution, namely that:

Exp(\lambda) = \frac{-\log (Unif(0,1))}{\lambda}

then r \sim \sqrt{-2\log (Unif(0,1))}

This gives us a way to generate points from the joint Gaussian distribution by sampling from two independent uniform distributions, one for r and another for \theta, and transforming them into Cartesian coordinates via the relationships above. In detail, the procedure goes as follows:

  1. Draw u_1, u_2 \sim Unif(0,1)
  2. Transform the variables into radius and angle representation r = \sqrt{-2\log(u_1)} , and \theta = 2\pi u_2
  3. Transform radius and angle into Cartesian coordinates: x = r \cos(\theta), y = r \sin(\theta)

What results are two independent Normal random variables, X and Y. A MATLAB implementation of the Box-Muller algorithm is shown below:

% NORMAL SAMPLES USING THE BOX-MULLER METHOD
% DRAW TWO SETS OF UNIFORM SAMPLES
u = rand(2,100000);
r = sqrt(-2*log(u(1,:)));
theta = 2*pi*u(2,:);
x = r.*cos(theta);
y = r.*sin(theta);

% DISPLAY BOX-MULLER SAMPLES
figure
% X SAMPLES
subplot(121);
hist(x,100);
colormap hot;axis square
title(sprintf('Box-Muller Samples X\n Mean = %1.2f\n Variance = %1.2f\n Excess Kurtosis = %1.2f',mean(x),var(x),kurtosis(x)-3))
xlim([-6 6])

% Y SAMPLES
subplot(122);
hist(y,100);
colormap hot;axis square
title(sprintf('Box-Muller Samples Y\n Mean = %1.2f\n Variance = %1.2f\n Excess Kurtosis = %1.2f',mean(y),var(y),kurtosis(y)-3))
xlim([-6 6])

Box-Muller Samples for Normal Distribution

Wrapping Up

The output of the MATLAB code is shown above. Notice that the mean, variance, and excess kurtosis of the generated samples are all consistent with the standard Normal distribution. The Box-Muller transform is another example of how uniform variables on the interval (0,1) can be transformed in order to sample from a more complicated distribution.

Rejection Sampling

Suppose that we want to sample from a distribution f(x) that is difficult or impossible to sample from directly, but for which we have a simpler distribution q(x) from which sampling is easy. The idea behind rejection sampling (a.k.a. acceptance-rejection sampling) is to sample from q(x) and apply some rejection/acceptance criterion such that the samples that are accepted are distributed according to f(x).

Envelope distribution and rejection criterion

In order to reject samples from q(x) such that the accepted samples are distributed according to f(x), q(x) must “cover” or envelop the distribution f(x). This is generally done by choosing a constant c > 1 such that cq(x) > f(x) for all x. For this reason cq(x) is often called the envelope distribution. A common criterion for accepting a sample x \sim q(x) is based on the ratio of the target distribution to the envelope distribution: the sample is accepted if

\frac{f(x)}{cq(x)} > u

where u \sim Unif(0,1), and rejected otherwise. If the ratio is close to one, then f(x) must have a large amount of probability mass around x, and the sample should be likely to be accepted. If the ratio is small, then f(x) has little probability mass around x, and we should be less likely to accept the sample. This criterion is demonstrated in the chunk of MATLAB code and the resulting figure below:

rand('seed',12345);
x = -10:.1:10;
% CREATE A "COMPLEX DISTRIBUTION" f(x) AS A MIXTURE OF TWO NORMAL
% DISTRIBUTIONS
f = inline('normpdf(x,3,2) + normpdf(x,-5,1)','x');
t = plot(x,f(x),'b','linewidth',2); hold on;

% PROPOSAL IS A CENTERED NORMAL DISTRIBUTION
q = inline('normpdf(x,0,4)','x');

% DETERMINE SCALING CONSTANT
c = max(f(x)./q(x))

%PLOT SCALED PROPOSAL/ENVELOP DISTRIBUTION
p = plot(x,c*q(x),'k--');

% DRAW A SAMPLE FROM q(x);
qx = normrnd(0,4);
fx = f(qx);

% PLOT THE RATIO OF f(q(x)) to cq(x)
a = plot([qx,qx],[0 fx],'g','Linewidth',2);
r = plot([qx,qx],[fx,c*q(qx)],'r','Linewidth',2);
legend([t,p,a,r],{'Target','Proposal','Accept','Reject'});
xlabel('x');

Rejection Sampling with a Normal proposal distribution

Here a zero-mean Normal distribution is used as the proposal distribution. This distribution is scaled by a factor c = 9.2, determined from f(x) and q(x) to ensure that the proposal distribution covers f(x). We then sample from q(x), and compare the proportion of cq(x) occupied by f(x). If we compare this proportion to a random number sampled from Unif(0,1) (i.e. the criterion outlined above), then we would accept the sample with probability proportional to the length of the green line segment and reject it with probability proportional to the length of the red line segment.
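The chunk above only visualizes the criterion for a single draw from q(x). To actually generate samples from f(x), we simply repeat the propose-and-compare step in a loop, keeping the accepted proposals. Below is a minimal sketch of the full sampler (my own extension, reusing the f, q, c, and x defined in the code above; since f(x) is the sum of two normalized densities it integrates to 2, so the accepted samples follow f(x)/2):

% DRAW 10,000 SAMPLES FROM f(x) VIA REJECTION SAMPLING
nSamples = 10000;
samples = zeros(1,nSamples);
i = 1;
while i <= nSamples
    xProp = normrnd(0,4);           % PROPOSE A SAMPLE FROM q(x)
    if rand < f(xProp)/(c*q(xProp)) % ACCEPT WITH PROBABILITY f(x)/cq(x)
        samples(i) = xProp;
        i = i + 1;
    end
end

% COMPARE THE ACCEPTED SAMPLES TO THE (NORMALIZED) TARGET
figure; [counts,bins] = hist(samples,100);
binWidth = bins(2) - bins(1);
bar(bins,counts/(nSamples*binWidth)); hold on;
plot(x,f(x)/2,'r','Linewidth',2); hold off;
legend({'Samples','f(x)/2'});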

Rejection sampling of a random discrete distribution

This next example shows how rejection sampling can be used to sample from any arbitrary distribution, continuous or not, and with or without an analytic probability density function.

Random Discrete Target Distribution and Proposal that Bounds It.

The figure above shows a random discrete probability density function f(x) generated on the interval [1, 15]. We will use rejection sampling as described above to sample from f(x). Our proposal/envelope distribution is the discrete uniform distribution on the same interval (i.e. each of the integers from 1 to 15 is equally probable), multiplied by a constant c that is determined such that the maximum value of f(x) lies under (or equal to) cq(x).

Rejection Samples For Discrete Distribution on interval [1 15]

Plotted above is the target distribution (in red) along with the discrete samples obtained using rejection sampling. The MATLAB code used to sample from the target distribution and display the plot above is here:

rand('seed',12345)
randn('seed',12345)

fLength = 15;
% CREATE A RANDOM DISTRIBUTION ON THE INTERVAL [1 fLength]
f = rand(1,fLength); f = f/sum(f);

figure; h = plot(f,'r','Linewidth',2);
hold on;
l = plot([1 fLength],[max(f) max(f)],'k','Linewidth',2);

legend([h,l],{'f(x)','q(x)'},'Location','Southwest');
xlim([0 fLength + 1])
xlabel('x');
ylabel('p(x)');
title('Target (f(x)) and Proposal (q(x)) Distributions');

% OUR PROPOSAL IS THE DISCRETE UNIFORM ON THE INTERVAL [1 fLength]
% SO OUR CONSTANT IS
c = max(f/(1/fLength));

nSamples = 10000;
samps = zeros(1,nSamples); % PREALLOCATE SAMPLES
i = 1;
while i <= nSamples
   proposal = unidrnd(fLength);
   q = c*1/fLength; % ENVELOPE DISTRIBUTION
   if rand < f(proposal)/q
      samps(i) = proposal;
      i = i + 1;
   end
end

% DISPLAY THE SAMPLES AND COMPARE TO THE TARGET DISTRIBUTION
bins = 1:fLength;
counts = histc(samps,bins);
figure
b = bar(1:fLength,counts/sum(counts),'FaceColor',[.8 .8 .8]);
hold on;
h = plot(f,'r','Linewidth',2);
legend([h,b],{'f(x)','samples'});
xlabel('x'); ylabel('p(x)');
xlim([0 fLength + 1]);

Rejection sampling from the unit circle to estimate \pi

Though the ratio-based acceptance-rejection criterion introduced above is a common choice for drawing samples from complex distributions, it is not the only criterion we could use. For instance, we could use a different set of criteria to generate some geometrically-bounded distribution. If we wanted to generate points uniformly within the unit circle (i.e. a circle centered at (x,y) = (0,0) and with radius r = 1), we could do so by sampling Cartesian coordinates x and y uniformly from the interval (-1,1) (which samples points from a square centered at (0,0)), and rejecting those points that lie outside of the radius r = \sqrt{x^2 + y^2} = 1.

Unit Circle Inscribed in Square

Something clever that we can do with such a set of samples is to approximate the value of \pi: because a square that inscribes the unit circle has area:

A_{square} = (2r)^2 = 4r^2

and the unit circle has the area:

A_{circle} = \pi r^2

We can use the ratio of their areas to approximate \pi:

\pi = 4\frac{A_{circle}}{A_{square}}

The figure below shows the rejection sampling process and the resulting estimate of \pi from the samples. One-hundred thousand 2D points are sampled uniformly from the interval (-1,1).  Those points that lie within the unit circle are plotted as blue dots. Those points that lie outside of the unit circle are plotted as red x’s. If we take four times the ratio of the area in blue to the entire area, we get a very close approximation to 3.14 for \pi.

Rejection Criterion

The MATLAB code used to generate the example figures is below:

% DISPLAY A CIRCLE INSCRIBED IN A SQUARE

figure;
a = 0:.01:2*pi;
x = cos(a); y = sin(a);
hold on
plot(x,y,'k','Linewidth',2)

t = text(0.5, 0.05,'r');
l = line([0 1],[0 0],'Linewidth',2);
axis equal
box on
xlim([-1 1])
ylim([-1 1])
title('Unit Circle Inscribed in a Square')

pause;
rand('seed',12345)
randn('seed',12345)
delete(l); delete(t);

% DRAW SAMPLES FROM PROPOSAL DISTRIBUTION
samples = 2*rand(2,100000) - 1;

% REJECTION
reject = sum(samples.^2) > 1;

% DISPLAY REJECTION CRITERION
scatter(samples(1,~reject),samples(2,~reject),'b.')
scatter(samples(1,reject),samples(2,reject),'rx')
hold off
xlim([-1 1])
ylim([-1 1])

piHat = mean(sum(samples.*samples)<1)*4;

title(['Estimate of \pi = ',num2str(piHat)]);

Wrapping Up

Rejection sampling is a simple way to generate samples from complex distributions. However, rejection sampling also has a number of weaknesses:

  • Finding a proposal distribution that can cover the support of the target distribution is a non-trivial task.
  • Additionally, as the dimensionality of the target distribution increases, the proportion of points that are rejected also increases. This curse of dimensionality makes rejection sampling an inefficient technique for sampling multi-dimensional distributions, as the majority of the points proposed are not accepted as valid samples.
  • Some of these problems are solved by changing the form of the proposal distribution to “hug” the target distribution as we gain knowledge of the target from observing accepted samples. Such a process is called Adaptive Rejection Sampling, which will be covered in another post.

Inverse Transform Sampling

There are a number of sampling methods used in machine learning, each of which has various strengths and/or weaknesses depending on the nature of the sampling task at hand. One simple method for generating samples from distributions with closed-form descriptions is Inverse Transform (IT) Sampling.

The idea behind IT Sampling is that the probability mass for a random variable X distributed according to the probability density function f(x) integrates to one, and therefore the cumulative distribution function C(x) can be used to map from values in the interval (0,1) (i.e. probabilities) to the domain of f(x). Because it is easy to sample values z uniformly from the interval (0,1), we can use the inverse of the CDF, C^{-1}(z), to transform these sampled probabilities into samples x. The code below demonstrates this process at work in order to sample from a Student’s-t distribution with 10 degrees of freedom.

rand('seed',12345)

% DEGREES OF FREEDOM
dF = 10;
x = -3:.1:3;
Cx = cdf('t',x,dF);
z = rand;

% FIND THE FIRST VALUE OF x FOR WHICH C(x) EXCEEDS z
zIdx = min(find(Cx>z));

% DRAW SAMPLE
sample = x(zIdx);

% DISPLAY
figure; hold on
plot(x,Cx,'k','Linewidth',2);
plot([x(1),x(zIdx)],[Cx(zIdx),Cx(zIdx)],'r','LineWidth',2);
plot([x(zIdx),x(zIdx)],[Cx(zIdx),0],'b','LineWidth',2);
plot(x(zIdx),z,'ko','LineWidth',2);
text(x(1)+.1,z + .05,'z','Color','r')
text(x(zIdx)+.05,.05,'x_{sampled}','Color','b')
ylabel('C(x)')
xlabel('x')
hold off

IT Sampling from student’s-t(10)

However, the scheme used to create the plot above is inefficient in that one must compare each sampled value of z with C(x) evaluated over a grid of x values. A much more efficient method is to evaluate C^{-1} directly:

  1. Derive C^{-1}(z) (or a good approximation) from f(x)
  2. For each i = 1, \dots, n:
    • draw z_i from Unif(0,1)
    • set x_i = C^{-1}(z_i)

The IT sampling process is demonstrated in the next chunk of code, which samples from the Beta distribution, a distribution for which C^{-1} is easy to approximate using Newton’s method (which we let MATLAB do for us within the function icdf.m):

rand('seed',12345)
nSamples = 1000;

% BETA PARAMETERS
alpha = 2; beta = 10;

% DRAW PROPOSAL SAMPLES
z = rand(1,nSamples);

% EVALUATE PROPOSAL SAMPLES AT INVERSE CDF
samples = icdf('beta',z,alpha,beta);
bins = linspace(0,1,50);
counts = histc(samples,bins);
probSampled = counts/sum(counts);
probTheory = betapdf(bins,alpha,beta);

% DISPLAY
b = bar(bins,probSampled,'FaceColor',[.9 .9 .9]);
hold on;
t = plot(bins,probTheory/sum(probTheory),'r','LineWidth',2);
xlim([0 1])
xlabel('x')
ylabel('p(x)')
legend([t,b],{'Theory','IT Samples'})
hold off

Inverse Transform Sampling of Beta(2,10)

Wrapping Up

The IT sampling method is generally only used for univariate distributions where C^{-1} can be computed in closed form, or approximated. However, it is a nice example of how uniform random variables can be used to sample from much more complicated distributions.
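For instance, the exponential distribution with rate \lambda has the CDF C(x) = 1 - e^{-\lambda x}, which inverts in closed form to C^{-1}(z) = -\log(1-z)/\lambda. A minimal sketch of IT sampling using this inverse (my own example):

% INVERSE TRANSFORM SAMPLING FROM Exp(lambda) VIA THE CLOSED-FORM INVERSE CDF
lambda = 2;
z = rand(1,10000);          % UNIFORM SAMPLES ON (0,1)
x = -log(1-z)/lambda;       % x = C^{-1}(z)
mean(x)                     % SHOULD BE CLOSE TO THE TRUE MEAN 1/lambda = 0.5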

Derivation: Ordinary Least Squares Solution and Normal Equations

In a linear regression framework, we assume some output variable y is a linear combination of some independent input variables X plus some independent noise \epsilon. The way the independent variables are combined is defined by a parameter vector \beta:

\Large{\begin{array}{rcl} y &=& X \beta + \epsilon \end{array}}

We also assume that the noise term \epsilon is drawn from a standard Normal distribution:

\Large{ \begin{array}{rcl}\epsilon &\sim& N(0,I)\end{array}}

For some estimate of the model parameters \hat \beta, the model’s prediction errors/residuals e are the difference between the model prediction and the observed output values:

\Large{\begin{array}{rcl} e = y - X\hat \beta \end{array}}

The Ordinary Least Squares (OLS) solution to the problem (i.e. determining an optimal solution for \hat \beta) involves minimizing the sum of the squared errors with respect to the model parameters, \hat \beta. The sum of squared errors is equal to the inner product of the residuals vector with itself, \sum_i e_i^2 = e^T e:

\Large{\begin{array}{rcl} e^T e &=& (y - X \hat \beta)^T (y - X \hat \beta) \\  &=& y^Ty - y^T (X \hat \beta) - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - (X \hat \beta)^T y - (X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - 2(X \hat \beta)^T y + (X \hat \beta)^T (X \hat \beta) \\  &=& y^Ty - 2\hat \beta^T X^T y + \hat \beta^T X^T X \hat \beta \\  \end{array}}

To determine the parameters, \hat \beta, we minimize the sum of squared residuals with respect to the parameters.

\Large{\begin{array}{rcl} \frac{\partial}{\partial \beta} \left[ e^T e \right] &=& 0 \\  &=& -2X^Ty + 2X^TX \hat \beta \text{, and thus} \\  X^Ty &=& X^TX \hat \beta  \end{array}}

due to the identity \frac{\partial \mathbf{a}^T \mathbf{b}}{\partial \mathbf{a}} = \mathbf{b} for vectors \mathbf{a} and \mathbf{b}, along with the identity \frac{\partial \mathbf{a}^T M \mathbf{a}}{\partial \mathbf{a}} = 2M\mathbf{a} for a symmetric matrix M (here M = X^TX). This relationship is the matrix form of the Normal Equations. Solving for \hat \beta gives the analytical solution to the Ordinary Least Squares problem:

\Large{\begin{array}{rcl} \hat \beta &=& (X^TX)^{-1}X^Ty \end{array}}
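As a quick numerical check of this solution (my own sketch with simulated data), we can generate observations from a known \beta and confirm that the normal equations recover it:

% VERIFY THE OLS SOLUTION ON SIMULATED DATA
randn('seed',12345)
n = 1000;                     % NUMBER OF OBSERVATIONS
X = [ones(n,1) randn(n,2)];   % DESIGN MATRIX WITH AN INTERCEPT COLUMN
beta = [1; -2; .5];           % TRUE MODEL PARAMETERS
y = X*beta + randn(n,1);      % OUTPUTS WITH STANDARD NORMAL NOISE
betaHat = (X'*X)\(X'*y)       % NORMAL EQUATIONS SOLUTION; CLOSE TO beta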

Boom.