Approximating the inverse of a stochastic function


Assume that I have a stochastic function $f(x)=g(x)+\epsilon$, where $f:\mathbb{R}^D\rightarrow \mathbb{R}^D$ is composed of a deterministic function $g:\mathbb{R}^D\rightarrow \mathbb{R}^D$ and some random additive error $\epsilon$ (for example, white noise), and $x\in\mathbb{R}^D$. In my project, I am interested in emulating/approximating the inverse of the function $f$. To train the emulator, you may assume that I have $N$ samples $x\in\mathbb{R}^D$ as well as their corresponding function evaluations $f(x)$ (or $g(x)$ and the error components $\epsilon$, if you prefer).
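For concreteness, here is a minimal sketch of the training setup described above. The specific choices of $g$, the noise scale, and the sampling distribution are all illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 3        # dimensionality of x and f(x)
N = 500      # number of training samples
sigma = 0.1  # noise scale (assumption: epsilon is isotropic Gaussian white noise)

def g(x):
    """Hypothetical deterministic component g: R^D -> R^D."""
    return np.tanh(x) + 0.5 * x

X = rng.normal(size=(N, D))            # training inputs x
eps = sigma * rng.normal(size=(N, D))  # additive white-noise errors epsilon
F = g(X) + eps                         # noisy evaluations f(x) = g(x) + epsilon
```

The emulator of the inverse would then be trained on the pairs `(F, X)`, i.e., with the noisy outputs as features and the inputs as targets.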

Question: Can you recommend an algorithm/approach for emulating/approximating the inverse of multi-dimensional stochastic functions?

Here are my thoughts so far: if $f(x)$ were deterministic (i.e., $f(x)=g(x)$), one possible approach would be to build $D$ radial basis function interpolations, each taking the components of $f(x)$ as arguments and returning the corresponding component of $x$. However, this approach does not account for the fact that $f(x)$ is stochastic. If I were interested in the forward function, I could simply approximate the deterministic function $g(x)$ and add the noise after the emulation, but since I want the inverse function, this isn't an option. Do you have any recommendations on how to approach this conundrum?
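The deterministic-case idea above can be sketched with SciPy's `RBFInterpolator`, which accepts vector-valued targets, so a single fit covers all $D$ output components. The function `g`, the sampling ranges, and the `smoothing` value are illustrative assumptions; setting `smoothing > 0` turns exact interpolation into regularized RBF regression, which at least damps (though does not fully model) the noise:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

D, N, sigma = 2, 400, 0.05

def g(x):
    # Hypothetical smooth, componentwise-invertible deterministic map.
    return x + 0.2 * np.tanh(x)

X = rng.uniform(-2, 2, size=(N, D))
F = g(X) + sigma * rng.normal(size=(N, D))  # noisy forward evaluations f(x)

# Fit an RBF map from f(x) back to x.  smoothing > 0 gives regularized
# regression instead of exact interpolation, so the fit averages over the
# noise rather than chasing it exactly.
inv = RBFInterpolator(F, X, smoothing=1e-2)

x_new = rng.uniform(-1.5, 1.5, size=(5, D))
f_new = g(x_new)          # noise-free query points inside the training range
x_rec = inv(f_new)        # approximate inverse: x_rec should be close to x_new
```

Note that this treats the noisy pairs as if they came from a deterministic map, which is exactly the limitation raised above: the recovered `x_rec` is only a point estimate, with no model of the uncertainty induced by $\epsilon$.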

1 Answer

I would recommend checking out diffusion models (see https://lilianweng.github.io/lil-log/2021/07/11/diffusion-models.html), which learn the approximate inverse of a stochastic Markov process. This may not be exactly what you're looking for, but it should provide some clues (also see https://arxiv.org/pdf/2105.05233.pdf for a modern application and results of this method).
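To make the connection concrete, here is a sketch of the forward (noising) Markov chain that a diffusion model is trained to approximately invert. The linear beta schedule and the number of steps are common illustrative choices, not the only options:

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward noising chain: x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * z,
# with z standard Gaussian.  The learned "reverse" model approximately
# inverts this chain one step at a time.
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear schedule (assumption)
alphas_bar = np.cumprod(1.0 - betas)   # cumulative signal-retention factors

x0 = rng.normal(size=(8, 2))           # stand-in for "data" points

# Closed-form sample of x_t given x_0 (no need to simulate all t steps):
t = T - 1
xt = (np.sqrt(alphas_bar[t]) * x0
      + np.sqrt(1.0 - alphas_bar[t]) * rng.normal(size=x0.shape))
```

By the final step the signal coefficient `sqrt(alphas_bar[T-1])` is nearly zero, so `xt` is close to pure noise; the reverse model learns to walk back from that noise toward the data, which is the sense in which it inverts a stochastic process.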