Is there a way to exploit local redundancy in a function to speed up Monte Carlo integration?


In every Monte Carlo method I've ever seen, the integrand $f$ must be recomputed from scratch at each point that is (somehow randomly) selected to contribute to the overall integral.

However, most functions have output values that are close to each other for close input values (or maybe that's only true of smooth, differentiable functions? I'm not a mathematician). With a large enough number of sample points for an integral, it seems as though you could exploit this fact to reduce the overall computation time.

I'm not quite sure how this would be done. I know that $e^{aD}$ is the shift operator, where $D$ is the differential operator, so that $e^{a \frac{d}{dx}}f(x) = f(x+a)$. If you expand $e^{aD}$ in its power series (it's defined this way, right?), the identity relies on the infinitely many derivatives of $f$ at $x$. That means the information contained at a single point $x$ (assuming $f$ is infinitely, or at least highly, differentiable) is sufficient to calculate $f$ at a different location.
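To make that concrete, here is a small Python check (the function names are my own) that truncating the series $e^{aD}f(x) \approx \sum_{k=0}^{N} \frac{a^k}{k!} f^{(k)}(x)$ really does reproduce $f(x+a)$ for a function whose derivatives are easy to write down:

```python
import math

def taylor_shift(f_deriv, x, a, order):
    """Approximate f(x + a) via the truncated shift-operator series:
    f(x + a) ≈ sum_{k=0}^{order} a^k / k! * f^{(k)}(x)."""
    return sum(a**k / math.factorial(k) * f_deriv(k, x)
               for k in range(order + 1))

# Example: f = sin, whose k-th derivative is sin(x + k*pi/2).
def sin_deriv(k, x):
    return math.sin(x + k * math.pi / 2)

approx = taylor_shift(sin_deriv, x=1.0, a=0.3, order=10)
exact = math.sin(1.3)
```

With only eleven terms the truncation error is already far below floating-point noise for a shift of $a = 0.3$.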

In this sense, assuming the derivatives are cheap to compute, perhaps Monte Carlo integration could exploit this fact somehow?
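To illustrate what I have in mind (this is just my own sketch, not an established method): for each random point $x$, a first-order Taylor step could supply a second, approximate sample at a nearby point without a second evaluation of $f$. Wrapping the shifted point modulo 1 keeps that second sample uniformly distributed on $[0,1]$, at the cost of an $O(h^2)$ truncation bias:

```python
import math
import random

def mc_taylor_reuse(f, fprime, n, h=0.05, seed=0):
    """Estimate the integral of f over [0, 1] with 2n samples but only
    about n*(1 + h) evaluations of f. For each uniform draw x, the sample
    at y = (x + h) mod 1 is *predicted* by the first-order Taylor step
    f(x) + h*f'(x) whenever no wrap occurs; the truncation introduces
    an O(h^2) bias on those predicted samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        fx = f(x)                          # one exact evaluation
        total += fx
        if x + h <= 1.0:
            total += fx + h * fprime(x)    # cheap predicted sample at x + h
        else:
            total += f(x + h - 1.0)        # wrapped point: evaluate exactly
    return total / (2 * n)

# Example: integral of e^x over [0, 1] is e - 1.
estimate = mc_taylor_reuse(math.exp, math.exp, n=20000)
```

Of course, this only pays off when $f'$ is much cheaper than $f$ itself, which is exactly the assumption I'm asking about.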

So what I want to know is: 1) does my idea have any promise, and 2) is there a different, already-known way to exploit redundancy in a function to speed up Monte Carlo integration?