Why is the distribution of mean firing rates of a neuron Gaussian?


I'm reading "Principles of Neural Information Theory" by James V Stone and in section 3.5 he says that the distribution of firing rates (of a single neuron) is generally assumed to be approximately Gaussian. He proceeds to give a mathematical argument for this.

If we measure the activity during a period of $T$ seconds in intervals of length $\Delta t$, there are $N=T/\Delta t$ possible positions for spikes to occur. The probability of $n$ spikes occurring during this period is $$p(n)=\frac{N!}{n!(N-n)!}P^nQ^{N-n},$$ where $P$ is the probability of a spike occurring in a given interval and $Q$ is the probability of a spike not occurring. If $N$ increases (i.e. $\Delta t \to 0$) while the firing rate $r$ stays constant, $p(n)$ approaches the Poisson distribution: $$p(n)=\frac{(rT)^n e^{-rT}}{n!}$$ If $T$ is held constant, then a simple change of variables can be used to obtain a distribution for the firing rate (because $r=n/T$). Apparently this distribution is in turn approximated by a Gaussian distribution when the mean spike count $rT$ is large.

I don't really understand this derivation. It would be nice if someone could provide an in-depth version of the argument with all steps fully fleshed out. I'm particularly confused about how one goes from $p(n)$ to a Poisson distribution. By looking at the plot, I can see why a Poisson distribution is approximated by a Gaussian distribution, but it would be nice to have a rigorous justification for this as well.
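As a quick numerical sanity check of the last step (a Python sketch, not from the book; the choice $rT=50$ is arbitrary), one can compare the Poisson pmf with mean $rT$ against a Gaussian density with the same mean and variance:

```python
import math

def poisson_pmf(n, mu):
    # Poisson probability of n spikes when the mean spike count is mu = r*T
    return mu**n * math.exp(-mu) / math.factorial(n)

def gaussian_pdf(x, mean, var):
    # Gaussian density with the given mean and variance
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu = 50.0  # mean spike count r*T; large, so the Gaussian approximation is good
for n in [40, 50, 60]:
    # Poisson mean and variance are both mu, so match both in the Gaussian
    print(n, round(poisson_pmf(n, mu), 5), round(gaussian_pdf(n, mu, mu), 5))
```

For a large mean such as $rT=50$ the two columns agree to about three decimal places; for a small mean (try $\mu = 2$) the Poisson pmf is visibly skewed and the agreement is much worse.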


BEST ANSWER

The asymptotic connections between the binomial, Poisson, and normal distributions are discussed in many textbooks on probability theory. Here, I'll explain the connection between the two versions of $p(n)$ in your question.

Your first version of $p(n)$,

$p(n)=\frac{N!}{n!(N-n)!}P^{n}Q^{N-n}$

is simply the binomial probability distribution with parameters $N$ and $P=1-Q$.

Using the fact that $P+Q=1$, we can rewrite this as

$p(n)=\frac{N!}{n!(N-n)!}P^{n}(1-P)^{N-n}$.

Let $T$ be our time period and $r$ be the firing rate. The expected number of spikes in the period is $NP$ (the number of intervals times the probability of a spike in each interval), so the firing rate is this expected count divided by $T$. Thus $r=NP/T$ and $P=rT/N$. Notice that if $rT$ is kept constant and $N$ increases, $P$ must decrease. Substituting this into our formula for $p(n)$, we get

$p(n)=\frac{N!}{n!(N-n)!}\left( \frac{rT}{N} \right)^{n} \left( 1-\frac{rT}{N} \right)^{N-n}$.

$p(n)=\frac{N!}{n!(N-n)!}\left( \frac{rT}{N} \right)^{n} \left( 1-\frac{rT}{N} \right)^{N} \left( 1-\frac{rT}{N} \right)^{-n}$.

In the limit as $N$ goes to infinity we have (from freshman calculus),

$\lim_{N \rightarrow \infty} \left( 1- \frac{rT}{N} \right)^{N}=e^{-rT}$.
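This limit is easy to check numerically; the following Python sketch (illustrative only, with an arbitrary value of $rT$) shows $(1-rT/N)^{N}$ approaching $e^{-rT}$ as $N$ grows:

```python
import math

rT = 3.0  # arbitrary illustrative value of r*T
for N in [10, 100, 10_000, 1_000_000]:
    # the binomial factor, evaluated at increasing N with rT held fixed
    print(N, (1 - rT / N) ** N)
print("limit e^{-rT}:", math.exp(-rT))
```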

Also, since $\frac{N!}{(N-n)!}=N(N-1)\cdots(N-n+1)$ behaves like $N^{n}$ for large $N$ (with $n$ fixed), and $\frac{P}{1-P}\to P=\frac{rT}{N}$ as $P\to 0$, we have

$\lim_{N \rightarrow \infty, P \rightarrow 0} \frac{N!}{(N-n)!} \left( \frac{P}{1-P} \right)^{n}=(rT)^{n}$.

Combining these results gives

$\lim_{N \rightarrow \infty, P \rightarrow 0} p(n)=\frac{(rT)^{n}e^{-rT}}{n!}$.
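Putting it all together, a short Python sketch (illustrative, not part of the original argument; the values of $rT$ and $n$ are arbitrary) confirms that the binomial pmf with $P=rT/N$ converges to the Poisson pmf as $N$ grows:

```python
import math

def binom_pmf(n, N, P):
    # exact Binomial(N, P) probability of n spikes in N intervals
    return math.comb(N, n) * P**n * (1 - P) ** (N - n)

def poisson_pmf(n, mu):
    # Poisson probability of n spikes with mean mu = r*T
    return mu**n * math.exp(-mu) / math.factorial(n)

rT = 4.0  # mean spike count, held fixed as N grows
n = 3     # spike count whose probability we track
for N in [10, 100, 10_000]:
    P = rT / N  # P shrinks as N grows so that N*P = rT stays constant
    print(N, binom_pmf(n, N, P))
print("Poisson limit:", poisson_pmf(n, rT))
```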

ANOTHER ANSWER

The number of spikes $n$ follows a Binomial$(N, P)$ distribution, which we can express as the sum of $N$ independent and identically distributed random variables $X_{i}\sim$ Bernoulli$(P)$. So when $N\to \infty$ we are actually summing over infinitely many iid Bernoullis, and thus by the Central Limit Theorem $$\frac{\sum_{i=1}^{N}X_{i}-NP}{\sqrt{NP(1-P)}}\xrightarrow[]{D}\mathcal{N}(0,1)$$

This happens because the characteristic function of this standardized sum (the Fourier transform of the density, which uniquely defines each random variable/distribution) can be shown to converge to the $\mathcal{N}(0,1)$ characteristic function. The proof is easily found in any probability/inference book or on the internet.
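Here is a numerical illustration of this convergence (a Python sketch with arbitrary parameter choices, not part of the original answer): the binomial CDF, evaluated one standard deviation above the mean, is compared against the standard normal CDF at the corresponding standardized point, with a continuity correction since the binomial is discrete.

```python
import math

def binom_cdf(k, N, P):
    # P(X <= k) for X ~ Binomial(N, P), summed directly
    return sum(math.comb(N, i) * P**i * (1 - P) ** (N - i) for i in range(k + 1))

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

N, P = 500, 0.3  # arbitrary choices; N*P*(1-P) grows with N, driving the CLT
mean, sd = N * P, math.sqrt(N * P * (1 - P))
k = int(mean + sd)  # roughly one standard deviation above the mean
# +0.5 is the continuity correction for approximating a discrete CDF
print(binom_cdf(k, N, P), norm_cdf((k + 0.5 - mean) / sd))
```

The two printed values agree to about two decimal places already at $N=500$; the agreement tightens as $N$ grows.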