Approximating the Hessian matrix (or its inverse) without gradient information


I have access to a black-box function, $f(\mathbf{x})$, with $\mathbf{x}\in\mathbb{R}^n$. Is there a way to efficiently approximate the Hessian matrix at a point, $\mathbf{x_0}$, without access to the gradient information?

I know the BFGS algorithm approximates the inverse of the Hessian matrix by updating the matrix in each iteration with new gradient information. However, if gradient information is unavailable, I haven't seen any algorithm that approximates the Hessian matrix.


Best answer:

You can use a finite-difference quasi-Newton method (e.g., BFGS). The gradient at each point is approximated by finite differences of function values (O(n) function evaluations per point), and the quasi-Newton update builds an approximation of the Hessian (or its inverse) from the successive gradient estimates.
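A minimal sketch of this with SciPy: when no gradient (`jac`) is supplied, `scipy.optimize.minimize(..., method="BFGS")` estimates the gradient by finite differences and maintains an inverse-Hessian approximation, returned as `res.hess_inv`. The quadratic test function here is only an illustration (its true Hessian is known, so the approximation can be checked).

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "black box": a quadratic f(x) = 0.5 x^T A x, whose true
# Hessian is A, so we can judge the approximation afterwards.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x

# With jac=None, BFGS estimates the gradient by finite differences
# (O(n) function evaluations per point) and updates an inverse-Hessian
# approximation at each iteration.
res = minimize(f, x0=np.array([1.0, -1.0]), method="BFGS")

print(res.x)         # minimizer, approximately the zero vector
print(res.hess_inv)  # inverse-Hessian approximation (quality depends on the iterates)
print(np.linalg.inv(A))  # exact inverse Hessian, for comparison
```

Note that `res.hess_inv` is only the quasi-Newton approximation accumulated along the trajectory, not an exact inverse Hessian; its accuracy depends on how the iterates sampled the function.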

This is in contrast to approximating the Hessian directly, by taking finite differences of gradients that are themselves computed by finite differences, which costs O(n^2) function evaluations per point.