Independence of Gaussian function and Gradient at each point


I am studying Gaussian processes and I came across this result, which seems quite remarkable to me, yet I cannot find anything about it online. I would be very grateful if someone could help me with a proof or a good reference to read about it.

Suppose $\{f(t),\, t\in T\subseteq \mathbb{R}^2\}$ is a $C^1$, centered, stationary Gaussian process. Why, for any fixed $t$, are $f(t)$ and $\nabla f(t)$ jointly Gaussian (apparently we can just assume this?) and yet independent?

Thank you very much in advance for your help!


All the relevant details are already contained in the one-dimensional case, so let $s,t \in \mathbb{R}$ and suppose
$$ \mbox{cov}(f(s),f(t)) = \omega\left( \frac{|s-t|^2}{2} \right). $$
Gaussian processes are closed under linear transformations, and differentiation is a linear transformation of the original Gaussian process, with new covariance functions given by
$$ \begin{align*} \mbox{cov}\left(\dot{f}(s) , f(t) \right) &= (s-t) \cdot \omega^{\prime}\left( \frac{|s-t|^2}{2}\right), \\ \mbox{cov}\left(f(s), \dot{f}(t) \right) &= (t-s) \cdot \omega^{\prime}\left( \frac{|s-t|^2}{2}\right), \\ \mbox{cov}\left( \dot{f}(s), \dot{f}(t) \right)&= \frac{\partial^2}{\partial s \partial t}\omega\left(\frac{|s-t|^2}{2} \right). \end{align*} $$
Therefore, whenever $s=t$ the covariance between $f(t)$ and $\dot{f}(t)$ vanishes, and you get the desired independence.

I should stress that we can conclude independence from zero covariance only because the variables are *jointly* Gaussian. In fact, supposing $f(t)$ is a mean-zero Gaussian process with covariance function as above, then
$$ \begin{bmatrix} f(s) \\ f(t) \\ \dot{f}(s) \\ \dot{f}(t) \end{bmatrix} \sim \mathcal{N}\left( \textbf{0} , \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \right) $$
with
$$ \begin{align} \Sigma_{11} &= \begin{bmatrix} \mbox{cov}(f(s),f(s)) & \mbox{cov}(f(s),f(t))\\ \mbox{cov}(f(t),f(s)) & \mbox{cov}(f(t),f(t)) \end{bmatrix}\\ \Sigma_{12} &= \begin{bmatrix} \mbox{cov}(f(s),\dot{f}(s)) & \mbox{cov}(f(s) ,\dot{f}(t)) \\ \mbox{cov}(f(t),\dot{f}(s)) & \mbox{cov}(f(t) ,\dot{f}(t)) \end{bmatrix} \\ \Sigma_{22} &= \begin{bmatrix} \mbox{cov}(\dot{f}(s),\dot{f}(s)) & \mbox{cov}(\dot{f}(s),\dot{f}(t))\\ \mbox{cov}(\dot{f}(t),\dot{f}(s)) & \mbox{cov}(\dot{f}(t),\dot{f}(t)) \end{bmatrix} \end{align} $$
and for a jointly Gaussian vector, uncorrelated components are independent.

The same argument goes through in exactly the same way in higher dimensions by redefining the covariance function as $\omega(\frac{1}{2}\| \textbf{s} - \textbf{t} \|^2)$: then $f(\textbf{t})$ is uncorrelated with each partial derivative of $f$ at $\textbf{t}$, which gives the independence of $f(t)$ and $\nabla f(t)$.
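As a numerical sanity check (not part of the argument above), here is a minimal sketch assuming the squared-exponential choice $\omega(u) = e^{-u}$, so that $\mbox{cov}(f(s),f(t)) = e^{-(s-t)^2/2}$. It builds the $4\times 4$ joint covariance of $(f(s), f(t), \dot f(s), \dot f(t))$ from the formulas above, samples from it, and checks that the empirical covariance between $f(t)$ and $\dot f(t)$ vanishes while $f(t)$ and $\dot f(s)$ stay correlated for $s \neq t$:

```python
import numpy as np

# Squared-exponential kernel omega(u) = exp(-u), as a function of d = s - t:
# cov(f(s), f(t)) = exp(-d^2 / 2).
def k(d):
    return np.exp(-0.5 * d**2)

# cov(f'(s), f(t)) = (s - t) * omega'((s-t)^2 / 2) = -d * exp(-d^2 / 2)
def k_ds(d):
    return -d * k(d)

# cov(f'(s), f'(t)) = d^2/(ds dt) exp(-d^2 / 2) = (1 - d^2) * exp(-d^2 / 2)
def k_dsdt(d):
    return (1.0 - d**2) * k(d)

s, t = 0.0, 0.7   # two arbitrary distinct points
d = s - t

# Joint covariance of (f(s), f(t), f'(s), f'(t)), in the block structure above.
Sigma = np.array([
    [k(0.0),    k(d),      k_ds(0.0),   -k_ds(d)],
    [k(d),      k(0.0),    k_ds(d),      k_ds(0.0)],
    [k_ds(0.0), k_ds(d),   k_dsdt(0.0),  k_dsdt(d)],
    [-k_ds(d),  k_ds(0.0), k_dsdt(d),    k_dsdt(0.0)],
])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(4), Sigma, size=200_000)
C = np.cov(samples.T)

print("empirical cov(f(t), f'(t)) =", round(C[1, 3], 4))  # near 0: independent
print("empirical cov(f(t), f'(s)) =", round(C[1, 2], 4))  # clearly nonzero
```

The same-point cross-covariance is zero by the formula $(s-t)\,\omega'(\cdot)$ evaluated at $s=t$, so the sampled estimate hovers around zero, while the off-point entry matches $-d\,e^{-d^2/2} \approx 0.548$.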

Here is a reference that contains the necessary details for the infinitely differentiable squared-exponential kernel: *Derivative Observations in Gaussian Process Models of Dynamic Systems*.