Maximum of the integral of a function against a multivariate Gaussian, w.r.t. the mean parameter


Suppose a function $f : \mathbb{R}^n \to \mathbb{R}$ attains a unique maximum at $x_0$ (and is sufficiently well-behaved, e.g. continuous). Suppose further that for a fixed $\Sigma$ and all $\mu$, $g(\mu) := \int f(x) \, \mathcal{N}(x; \mu, \Sigma) \, dx < \infty$.

Intuitively, I would expect that $g$ attains a (unique) maximum at $\mu = x_0$, but I don't know how one would prove this.

Under which conditions is the above true? How could I prove it?
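For experimenting with the question, $g$ can be estimated by Monte Carlo. A minimal sketch; the quadratic $f$ below is just a hypothetical stand-in with its unique maximum at $x_0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(mu, Sigma, f, n_samples=100_000):
    """Monte Carlo estimate of g(mu) = E[f(X)] for X ~ N(mu, Sigma)."""
    L = np.linalg.cholesky(Sigma)
    X = mu + rng.standard_normal((n_samples, len(mu))) @ L.T
    return f(X).mean()

# Example f with a unique maximum at x0 = (1, 2).
x0 = np.array([1.0, 2.0])
f = lambda X: -np.sum((X - x0) ** 2, axis=1)

Sigma = np.eye(2)
print(g(x0, Sigma, f))        # close to -trace(Sigma) = -2
print(g(x0 + 1.0, Sigma, f))  # smaller: shifting the mean away from x0 hurts
```

For this particular $f$ the intuition is easy to confirm in closed form as well, since $g(\mu) = -\|\mu - x_0\|^2 - \operatorname{tr}(\Sigma)$.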

Edit:

Based on the above comments, some thoughts:

Let $h : [0, \infty) \to \mathbb{R}$ be nonnegative and strictly decreasing (so its unique maximum is at $0$).

Furthermore, let $f(x) = h((x-x_0)^T\Sigma^{-1}(x-x_0))$, so that $f$ has its unique maximum at $x_0$.

Then, up to the normalizing constant, $ g(\mu) = \int h((x-x_0)^T\Sigma^{-1}(x-x_0)) \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) dx$

$= \int h(y^T\Sigma^{-1}y) \exp\!\left(-\tfrac{1}{2}(y - (\mu - x_0))^T \Sigma^{-1} (y-(\mu-x_0))\right) dy$

By the substitution $y = x - x_0$. Assuming we can differentiate under the integral sign, we get $\nabla_\mu g(\mu) = \int \Sigma^{-1}(y - (\mu - x_0))\, h(y^T\Sigma^{-1}y) \exp\!\left(-\tfrac{1}{2}(y - (\mu - x_0))^T \Sigma^{-1} (y-(\mu-x_0))\right) dy$.

Clearly, this vanishes at $\mu = x_0$: there the integrand is $\Sigma^{-1}y$ times an even function of $y$, hence odd, so its integral is zero. For the Hessian, the product rule gives $\frac{\partial^2 g}{\partial \mu \partial \mu^T}(\mu) = \int \left( \Sigma^{-1}(y-(\mu-x_0))(y-(\mu-x_0))^T\Sigma^{-1} - \Sigma^{-1} \right) h(y^T\Sigma^{-1}y) \exp\!\left(-\tfrac{1}{2}(y-(\mu-x_0))^T \Sigma^{-1}(y-(\mu-x_0))\right) dy$, which at $\mu = x_0$ reads $\int \left( \Sigma^{-1} y y^T \Sigma^{-1} - \Sigma^{-1} \right) h(y^T\Sigma^{-1}y) \exp\!\left(-\tfrac{1}{2} y^T\Sigma^{-1}y\right) dy$. Substituting $u = \Sigma^{-1/2} y$, this becomes (up to a positive constant) $\Sigma^{-1/2} \left( \int (uu^T - I)\, h(u^Tu) \exp\!\left(-\tfrac{1}{2}u^Tu\right) du \right) \Sigma^{-1/2}$. The two terms pull in opposite directions, so the sign of this matrix is not obvious at a glance; this computation alone does not settle whether $x_0$ is a maximum.
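The sign of this Hessian can at least be probed numerically for a concrete decreasing $h$. A one-dimensional sketch; the choices $h(t) = e^{-t}$, $x_0 = 0$, $\Sigma = 1$ are mine:

```python
import numpy as np

# One-dimensional probe of the Hessian sign for the (arbitrary) decreasing
# choice h(t) = exp(-t), with x0 = 0 and sigma = 1.
x0, sigma = 0.0, 1.0
h = lambda t: np.exp(-t)

def g(mu):
    """g(mu) = E[h(((X - x0)/sigma)^2)] for X ~ N(mu, sigma^2), via Riemann sum."""
    xs = np.linspace(-12.0, 12.0, 40001)
    dens = np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(h(((xs - x0) / sigma) ** 2) * dens) * (xs[1] - xs[0])

# Central finite difference for g''(x0).
eps = 1e-2
second_deriv = (g(x0 + eps) - 2 * g(x0) + g(x0 - eps)) / eps ** 2
print(second_deriv)  # negative: for this h, x0 is at least a local maximum
```

For this particular $h$ the integral even has the closed form $g(\mu) = e^{-\mu^2/3}/\sqrt{3}$, so $g''(0) = -2/(3\sqrt{3}) < 0$, consistent with the numerics.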

Edit 2 - Proof for bowl-shaped functions

Apparently, in the situation where $h$ is "bowl-shaped" (that is, the sublevel sets $K_c = \{x \mid h(x) \leq c\}$ are symmetric and convex), there is a proof of the above claim. The result is known as Anderson's lemma, and a nice (and not very involved) proof can be found here.
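A well-known consequence of Anderson's lemma: for a symmetric convex set $K$, the Gaussian mass of $x_0 + K$ is maximized when $\mu = x_0$ (take $f$ to be the indicator of $x_0 + K$). A quick Monte Carlo illustration; the box, covariance, and shift below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)

def mass_of_box(mu, half_widths=(1.0, 0.5), n=200_000):
    """P(X in x0 + K) for X ~ N(mu, Sigma), K a centered box (symmetric, convex)."""
    X = mu + rng.standard_normal((n, 2)) @ L.T
    inside = np.all(np.abs(X - x0) <= np.array(half_widths), axis=1)
    return inside.mean()

print(mass_of_box(x0))                       # largest at mu = x0
print(mass_of_box(x0 + np.array([1.0, 0])))  # smaller after shifting the mean
```

This indicator $f$ is not continuous, but it is exactly the bowl-shaped (here: convex symmetric sublevel/superlevel set) situation the lemma covers.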

1 Answer


I think the claim is false; the following is a rough proof of this.

Let's look at $n=1$ and $f(x) = \begin{cases}-ax^2&\text{if } x<0\\-\frac{x^2}{x^2+1}&\text{if }x\geq 0\end{cases}$ for some $a>0$. This $f$ is differentiable everywhere (the derivative at $0$ is $0$), and its unique maximum is attained at $0$. If the claim were true, the maximum of $g$ would be attained at $0$ independently of $a$. But increasing $a$ decreases $g(0)$; in fact, one can choose $a$ so that $g(0)\ll -1$. On the other hand, since $-\frac{x^2}{x^2+1} > -1$, for fixed $a$ and $\mu \gg 0$ the value $g(\mu)$ gets arbitrarily close to $-1$, hence $g(\mu) \gg g(0)$.
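This counterexample is easy to check numerically. A sketch with the arbitrary choices $a = 10$, unit variance, and $\mu = 5$ standing in for "$x \gg 0$":

```python
import numpy as np

def f(x, a):
    """The answer's counterexample: unique maximum at 0, with f(0) = 0."""
    return np.where(x < 0, -a * x ** 2, -x ** 2 / (x ** 2 + 1))

def g(mu, a, sigma=1.0):
    """g(mu) = E[f(X)] for X ~ N(mu, sigma^2), via Riemann sum."""
    xs = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 40001)
    dens = np.exp(-0.5 * ((xs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(f(xs, a) * dens) * (xs[1] - xs[0])

a = 10.0
print(g(0.0, a))  # very negative: about -a/2 plus an O(1) term from the right half
print(g(5.0, a))  # close to -1, since f > -1 on x >= 0 and the left tail is negligible
```

So with the mean at $5$ the expectation beats the expectation at the maximizer $0$, refuting the claim.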