Perturbing a measure $\mu$ so that the integral $\int fd\mu$ becomes nonzero


Let $X$ be a compact subset of $\mathbb{R}^d$, let $f\in L^2(X)$ be an unknown function with $\lVert f\rVert_2=1$ for which we may assume suitable regularity (e.g. Lipschitz, $C^1$), and let $\mu$ be a Borel probability measure on $X$. Suppose $\int fd\mu=0$. Is there an efficient (in an algorithmic sense, time complexity not exponential in $d$) way to find a perturbation $\mu'$ close to $\mu$ such that $\int fd\mu'\neq 0$?

For example, we could divide the domain into $N^d$ hypercubes $A_1, A_2, \dots$ and try the perturbations proportional to $\mu+\epsilon 1_{A_i}$ one by one until we hit a region where $f$ is nonzero; by Lipschitz continuity, the integral is then guaranteed to change once $N$ is large enough. However, this incurs the curse of dimensionality in $d$. For a similar reason, random perturbations also seem to require time exponential in $d$ to detect a region where $f \neq 0$. I am unsure whether this is inevitable.
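To make the cost concrete, here is a minimal Python sketch of this grid scan (the function `grid_search_perturbation`, the test function, and all parameters are illustrative, not from the question): it visits the $N^d$ cells one at a time, estimates $\int_{A_i} f\,dm$ by Monte Carlo, and stops at the first cell where the perturbation would shift the integral.

```python
import itertools

import numpy as np

def grid_search_perturbation(f, d, N, eps=0.1, n_samples=200, seed=0):
    """Scan the N**d hypercubes A_i of [0, 1]**d in order; for each one,
    estimate int_{A_i} f dm by Monte Carlo and stop at the first cell
    where the perturbation mu + eps * 1_{A_i} would change int f dmu.
    The loop is O(N**d): the curse of dimensionality."""
    rng = np.random.default_rng(seed)
    h = 1.0 / N
    for idx in itertools.product(range(N), repeat=d):
        lo = np.array(idx) * h
        pts = lo + h * rng.random((n_samples, d))  # uniform samples in A_i
        cell_integral = h**d * f(pts).mean()       # ~ int_{A_i} f dm
        if abs(cell_integral) > 1e-6:              # f is nonzero on A_i
            return idx, eps * cell_integral        # (cell, unnormalized shift)
    return None, 0.0

# f(x) = x_1 - 1/2 integrates to zero against the uniform measure on
# [0, 1]^2, but the very first cell [0, 1/2)^2 already detects it.
f = lambda x: x[:, 0] - 0.5
cell, shift = grid_search_perturbation(f, d=2, N=2)
```

Even in this toy $d=2$ case the loop ranges over $N^d$ cells, which is exactly the exponential blow-up described above.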

For context, I am studying the stability of gradient dynamics of a functional on the space of probability measures, and trying to come up with a scheme that will always find an unstable direction to escape to, if one exists. This can be quantified by the magnitude of the perturbation in the direction of an unstable eigenfunction $f$.


BEST ANSWER

Idea 1. Suppose $\phi$ is an odd function such that $\phi(x)>0$ whenever $x>0$. If we set $\mu'$ by

$$ \mathrm{d}\mu' = C (1+ \varepsilon \phi(f)) \, \mathrm{d}\mu $$

for constants $C, \varepsilon > 0$, then

$$\int_X f \, \mathrm{d}\mu' = C\varepsilon \int_X f\phi(f) \, \mathrm{d}\mu > 0$$

because $f$ is not $\mu$-a.e. equal to zero and $f\phi(f)>0$ wherever $f\neq 0$.

One issue is that $\mu'$ is in general neither a probability measure nor a positive Borel measure. However, if we assume $f$ is bounded, then:

  1. Choosing $\varepsilon$ sufficiently small makes $\mu'$ positive. This requires knowing the value of $\|\phi(f)\|_{\infty}$.

  2. Choosing an appropriate $C$ normalizes $\mu'$ so that $\mu'(X) = 1$. This requires knowing the value of $\int_X \phi(f) \, \mathrm{d}\mu$.

Estimating $\|\phi(f)\|_{\infty}$ and $\int_X \phi(f) \, \mathrm{d}\mu$ in general seems to suffer from the curse of dimensionality. However, there is one situation where this becomes cheap:

  • Suppose $f$ is Lipschitz with Lipschitz constant $\|f\|_{\text{Lip}}$. If we fix $x_0 \in X$, then clearly $$ \| f\|_{\infty} \leq |f(x_0)| + \|f\|_{\text{Lip}} \operatorname{diam}(X). $$ This allows us to choose $\varepsilon$. Also, set $\phi(x) = x$. Then we automatically know $\int_X \phi(f) \, \mathrm{d}\mu = \int_X f \, \mathrm{d}\mu = 0$, hence we may set $C = 1$. The resulting $\mu'$ takes the form $$ \mathrm{d}\mu' = (1+ \varepsilon f) \, \mathrm{d}\mu $$ where $\varepsilon \leq \frac{1}{|f(x_0)| + \|f\|_{\text{Lip}} \operatorname{diam}(X)}$.
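A quick numerical sanity check of this final construction (a sketch, assuming $X=[0,1]$, $\mu$ the uniform measure, and the illustrative choice $f(x)=x-\tfrac12$, which satisfies $\int_X f\,\mathrm{d}\mu=0$): reweighting samples from $\mu$ by $1+\varepsilon f$ should leave the total mass at $1$ and shift the integral to $\varepsilon\|f\|_{L^2(\mu)}^2>0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# mu = uniform on [0, 1], f(x) = x - 1/2, so int_X f dmu = 0 and
# ||f||_Lip = 1.  The bound |f(x0)| + ||f||_Lip * diam(X) at x0 = 0
# gives 0.5 + 1 = 1.5, so eps = 1/1.5 keeps 1 + eps*f positive.
f = lambda x: x - 0.5
eps = 1.0 / 1.5

x = rng.random(200_000)        # samples from mu
w = 1.0 + eps * f(x)           # density of mu' with respect to mu

mass = w.mean()                # ~ mu'(X); equals 1 since int f dmu = 0
shift = (w * f(x)).mean()      # ~ int f dmu' = eps * ||f||_{L^2(mu)}^2 = eps/12
```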

Idea 2. Assume $f$ is sampled from a distribution $\mathbb{P}$ on $L^2(\mu)$ such that $\mathbb{P}(V) = 0$ for every proper closed subspace $V$ of $L^2(\mu)$. (In fact, all we need is $\mathbb{P}(V) = 0$ for every subspace of the form $V = \langle g \rangle^{\perp}$ with $g \in L^2(\mu)$ non-zero.)

Then, fixing an arbitrary non-zero $g \in L^2(\mu)$ at the start, we have $\int_X fg \, \mathrm{d}\mu \neq 0$ with probability one. If in addition $g$ is bounded, then we may define $\mu'$ by

$$ \mathrm{d}\mu' = C (1+ \varepsilon g) \, \mathrm{d}\mu, \qquad C = \frac{1}{\int_X (1+ \varepsilon g) \, \mathrm{d}\mu} $$

for any $\varepsilon \leq 1 / \|g\|_{L^{\infty}(\mu)}$. Since any non-zero $g$ will work, we may choose $g$ nice enough that $\int_X g \, \mathrm{d}\mu$ is easy to estimate, if necessary.
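A sketch of Idea 2 in the same toy setting ($X=[0,1]$, $\mu$ uniform; the choice $g(x)=\cos 2\pi x$ and the random cosine expansion of $f$ are illustrative): the fixed $g$ detects a randomly drawn $f$ with probability one, since $\int_X fg\,\mathrm{d}\mu$ is half the first cosine coefficient of $f$.

```python
import numpy as np

rng = np.random.default_rng(1)

# mu = uniform on [0, 1]; fix the bounded g(x) = cos(2 pi x), so that
# ||g||_inf = 1 and we may take eps = 1.  Note int_X g dmu = 0, which
# makes the normalizer C easy to estimate.
g = lambda x: np.cos(2 * np.pi * x)
eps = 1.0

x = rng.random(400_000)        # samples from mu
w = 1.0 + eps * g(x)           # unnormalized density of mu' w.r.t. mu
C = 1.0 / w.mean()             # ~ 1 / int_X (1 + eps*g) dmu

# A "random" f: Gaussian coefficients on a cosine basis.  With
# probability 1 it is not orthogonal to g, and int_X f g dmu = coef[0]/2.
coef = rng.standard_normal(5)
f = lambda x: sum(c * np.cos(2 * np.pi * (k + 1) * x)
                  for k, c in enumerate(coef))

before = f(x).mean()           # ~ int f dmu  = 0
after = C * (w * f(x)).mean()  # ~ int f dmu' = C * eps * int f g dmu
```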

---

I presume your $L^2(X)$ is with respect to Lebesgue measure (let's call it $m$). Let $A = \{x \in X: f(x) > 0\}$ and $B = \{x: f(x) < 0\}$. At least one of $m(A)$ and $m(B)$ is nonzero. If $m(A) \ne 0$, let $\mu' = (1-\epsilon) \mu + \epsilon I_A\; m/m(A)$ (where $I_A$ is the indicator function of $A$); or if $m(B) \ne 0$, let $\mu' = (1-\epsilon) \mu + \epsilon I_B\; m/m(B)$, taking $\epsilon > 0$ small enough to make $\mu'$ "close" to $\mu$.

The question of efficiency may be somewhat complicated; I don't know what model of computation you're using. Depending on what estimates on $f$'s smoothness you have available, it may be difficult just to find a point where $f \ne 0$, let alone determine whether $m(A)$ or $m(B)$ is nonzero.

[EDIT] OK, if probability $1$ is enough, let's try this. If $f$ is a nonzero real-valued continuous function on the compact set $X$, then $$ \mathcal F(f)(z) = \int_X e^{z \cdot x} f(x)\; dm(x) $$ is analytic and nonconstant on $\mathbb C^d$, and thus vanishes only on a set of $2d$-dimensional Lebesgue measure $0$ in $\mathbb C^d$. Choose $z = s + i t \in \mathbb C^d$ at random (with a continuous density) and $\theta \in [0,2\pi)$ at random with a continuous density; then with probability $1$ $$\text{Re}(e^{i\theta}\mathcal F(f)(z)) = \int_X \cos(\theta + t \cdot x) e^{s \cdot x} f(x)\; dm(x) \ne 0.$$ Choose the densities so that $\cos(\theta + t \cdot x) > 0$ for all $x \in X$. Then let $$\mu' = (1-\epsilon) \mu + \epsilon \frac{g\; m}{\int_X g \; dm}$$ where $g(x) = \cos(\theta + t \cdot x) e^{s \cdot x}$, and $\epsilon > 0$ is small enough to make this close to $\mu$.
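A one-dimensional numerical sketch of this randomized construction (assuming $X=[0,1]$, $\mu = m$, and the illustrative $f(x)=x-\tfrac12$; the scaling of $t$ and $\theta$ below is one way to enforce $\cos(\theta + t \cdot x)>0$ on $X$):

```python
import numpy as np

rng = np.random.default_rng(2)

# X = [0, 1] (so d = 1), m = Lebesgue measure, and f(x) = x - 1/2:
# a continuous nonzero function with int_X f dm = 0.
f = lambda x: x - 0.5

# Draw z = s + i t and theta from continuous densities, with t and theta
# scaled so that |theta| + |t| < pi/2, which forces cos(theta + t*x) > 0
# on [0, 1].
s = rng.normal()
t = 0.5 * rng.uniform(-1.0, 1.0)
theta = 0.5 * rng.uniform(-1.0, 1.0)

# Midpoint rule on a fine grid stands in for the integrals over X.
x = (np.arange(200_000) + 0.5) / 200_000
g = np.cos(theta + t * x) * np.exp(s * x)   # g > 0 on X

# Re(e^{i theta} F(f)(z)) = int_X g f dm: nonzero with probability 1.
pairing = (g * f(x)).mean()

# Shift of the integral under mu' = (1-eps) mu + eps g m / int_X g dm,
# with mu = m here:  int_X f dmu' = eps * pairing / int_X g dm.
eps = 0.1
shift = eps * pairing / g.mean()
```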