Suppose one has a differentiable function $f:\Bbb R^n\to\Bbb R$ that one can evaluate but for which one has no expression for the derivative. There exist several procedures for estimating the gradient, for example one could evaluate $$\nabla f(x)\approx\sum_i \frac{f(x+\epsilon\,e_i)-f(x-\epsilon\, e_i)}{2\epsilon} e_i,\tag{1}$$ or $$\nabla f(x) \approx \left\langle\sum_i \frac{f(x+\Delta)-f(x-\Delta)}{2\Delta_i}e_i\right\rangle_{p(\Delta)},\tag{2}$$ where the brackets denote the expectation value in which $\Delta$ is drawn from some symmetric probability distribution $p(\Delta)$ centered around $0$ (this is known as SPSA).
These methods are generic in the sense that for $\epsilon\to0$ or $\mathrm{Var}(p(\Delta))\to0$ they will always converge to the gradient of $f$. However, they may be slow: for example, $(1)$ requires $2n$ evaluations of $f$ per gradient estimate, which is expensive in high dimensions.
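For concreteness, here is a minimal sketch of both estimators, assuming $f$ is given as a black-box Python callable. The function names are mine, and for $(2)$ I use a symmetric $\pm\epsilon$ (Rademacher) distribution for $p(\Delta)$, with the expectation approximated by averaging a finite number of draws:

```python
import numpy as np

def fd_gradient(f, x, eps=1e-5):
    """Central finite differences, Eq. (1): costs 2n evaluations of f."""
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def spsa_gradient(f, x, eps=1e-3, n_samples=10, rng=None):
    """SPSA-style estimate, Eq. (2): costs 2 evaluations of f per sample.

    Delta is drawn as eps times a random +/-1 vector, one common choice
    for p(Delta); the expectation is approximated by a sample average.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    g = np.zeros(n)
    for _ in range(n_samples):
        delta = eps * rng.choice([-1.0, 1.0], size=n)
        # componentwise: (f(x+Delta) - f(x-Delta)) / (2 Delta_i)
        g += (f(x + delta) - f(x - delta)) / (2 * delta)
    return g / n_samples

# Example: f(x) = |x|^2 has gradient 2x.
f = lambda x: np.sum(x**2)
x = np.array([1.0, -2.0, 0.5])
print(fd_gradient(f, x))  # close to [2, -4, 1]
```

Note that the cost of $(1)$ grows linearly with the dimension $n$, while the cost of $(2)$ is set by the number of samples one averages over, independent of $n$, at the price of a noisy estimate.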
Sometimes one already has a candidate function $\overline{\nabla f}$ for $\nabla f$. This candidate will in general be wrong, but it may, for example, lie in the correct orthant (i.e. have the correct sign in each component). My question:
Is there a method for estimating the gradient of $f$ that uses the candidate $\overline{\nabla f}$ as a suggestion?
The expected benefit of using $\overline{\nabla f}$ would be a speed-up of the procedure. I'm sure such a thing exists, but I can't find any literature because I do not know what to search for.
I'm not sure whether Math Stack Exchange is the right place for such a question, so I would also appreciate a pointer to a venue where I am more likely to get an answer.