Understanding this derivation of a conditional distribution for an autoregressive process


In a paper I am currently reading, the following appears as a side note. I am not sure I understand the derivation the authors propose:

Consider the following auto-regressive process:

$$x_{t+1}=ax_{t}+\sigma \nu_{t+1}, \hspace{1cm} x_1\sim\mathcal{N}\left( 0,\frac{\sigma^2}{1-a^2} \right) \tag{1}$$

which admits, through the proper application of Bayes' rule,

$$p\left( x_t|x_{t+1} \right) = \frac{p\left( x_{t+1}|x_{t} \right) p\left( x_{t} \right)}{p\left( x_{t+1} \right)} \tag{2}$$

and marginalization the following backward kernel:

$$p\left( x_{t} |x_{t+1} \right) = \mathcal{N}\left( ax_{t+1},\sigma^2 \right) \tag{3}$$

How do the authors arrive at this result? Unfortunately, no further information or definition of the variables used is provided. Do you have an intuition for what $\nu_{t+1}$ is? Some error term?

Best answer:

You probably figured out from equation (1) that the authors are assuming a (weakly) stationary AR(1) process. The $\nu_{t+1}$ are Gaussian white noise with mean $0$ and variance $1$, often called "disturbance" or "innovation" terms, so that $\sigma\nu_{t+1}\sim \mathcal N(0,\sigma^2)$.

The innovations $\nu_t$ essentially subsume all other variables that are not modeled. A further assumption is that they are independent of (or at least orthogonal to) $x_t$.

Also, by (weak) stationarity, $\mathbb E[x_{t+1}]=\mathbb E[x_t]=\mathbb E[x_1]$ by definition, so taking expectations on both sides of (1) gives $\mathbb E[x_1]=a\,\mathbb E[x_1]$, i.e. $(1-a)\,\mathbb E[x_1]=0$. Thus for $a\ne 1$, $\mathbb E[x_1]=0$.

Using the independence of $\nu$ and $x$, taking the variance of both sides of (1), and invoking weak stationarity ($\mathbb V[x_{t+1}]=\mathbb V[x_t]=\mathbb V[x_1]$) then gives $\mathbb V[x_1]=\frac{\sigma^2}{1-a^2}=:\gamma^2$.
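Spelled out, the variance step uses the independence of $x_t$ and $\nu_{t+1}$ together with $\mathbb V[\nu_{t+1}]=1$:

$$\mathbb V[x_{t+1}] = a^2\,\mathbb V[x_t] + \sigma^2\,\mathbb V[\nu_{t+1}] = a^2\,\mathbb V[x_1] + \sigma^2,$$

and equating $\mathbb V[x_{t+1}]=\mathbb V[x_1]$ yields $(1-a^2)\,\mathbb V[x_1]=\sigma^2$, which also shows why $|a|<1$ is needed for a stationary solution to exist.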

Explicitly, (1) means $$ p(x_t) = \frac{1}{\sqrt{2\pi\gamma^2}}\exp\left[-\frac{x_t^2}{2\gamma^2}\right], \qquad p(x_{t+1}) = \frac{1}{\sqrt{2\pi\gamma^2}}\exp\left[-\frac{x_{t+1}^2}{2\gamma^2}\right], $$ while $$ p(x_{t+1}\mid x_t) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x_{t+1}-a x_t)^2}{2\sigma^2}\right]. $$ Now just mindlessly form the Bayesian quotient (2) with these explicit densities, complete the square in $x_t$, and note that the result is $\mathcal N(a x_{t+1},\sigma^2)$.
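Carrying out that computation: the exponent of $p(x_{t+1}\mid x_t)\,p(x_t)/p(x_{t+1})$ (each Gaussian exponent carrying its factor of $\tfrac12$) collapses neatly once one substitutes $1/\gamma^2=(1-a^2)/\sigma^2$:

$$
\begin{aligned}
-\frac{(x_{t+1}-a x_t)^2}{2\sigma^2} - \frac{x_t^2}{2\gamma^2} + \frac{x_{t+1}^2}{2\gamma^2}
&= -\frac{1}{2\sigma^2}\Big[ x_{t+1}^2 - 2a x_t x_{t+1} + a^2 x_t^2 + (1-a^2)x_t^2 - (1-a^2)x_{t+1}^2 \Big] \\
&= -\frac{1}{2\sigma^2}\Big[ x_t^2 - 2a x_t x_{t+1} + a^2 x_{t+1}^2 \Big]
= -\frac{(x_t - a x_{t+1})^2}{2\sigma^2},
\end{aligned}
$$

which is exactly the exponent of a $\mathcal N(a x_{t+1},\sigma^2)$ density in $x_t$; the normalizing constants combine to $1/\sqrt{2\pi\sigma^2}$ as they must.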

There is probably a better, less computational (and less mindless) way, but...
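As a sanity check (not from the paper), one can also verify the backward kernel numerically: simulate a long stationary AR(1) path and regress $x_t$ on $x_{t+1}$; the slope should recover $a$ and the residual variance should recover $\sigma^2$. The parameter values $a=0.8$, $\sigma=0.5$ below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 0.8, 0.5
gamma2 = sigma**2 / (1 - a**2)        # stationary variance sigma^2 / (1 - a^2)

n = 200_000
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(gamma2))   # x_1 ~ N(0, sigma^2 / (1 - a^2))
nu = rng.normal(size=n)                    # standard Gaussian innovations nu_t
for t in range(n - 1):
    x[t + 1] = a * x[t] + sigma * nu[t + 1]   # forward recursion (1)

xt, xt1 = x[:-1], x[1:]
# Backward kernel p(x_t | x_{t+1}) = N(a * x_{t+1}, sigma^2):
# the regression of x_t on x_{t+1} should have slope ~ a
# and residual variance ~ sigma^2.
slope = np.cov(xt, xt1)[0, 1] / np.var(xt1)
resid_var = np.var(xt - slope * xt1)
print(slope, resid_var)
```

This exploits the fact that for jointly Gaussian variables the conditional mean is linear and the conditional variance is constant, so the ordinary least-squares slope and residual variance estimate them directly.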