How can a conservative field constraint be efficiently implemented in a continuous optimization problem?


Suppose we have the following continuous optimization problem:

$$ \underset{x}{\mathrm{minimize}}\ f\left(x\right) $$ subject to $$ \exists X:\nabla X=\operatorname{Jac}\left(X\right)=x $$ where $f$ is a function $f:M_{n}\left(D_{v}\right)\rightarrow M_{n}\left(D_{v}\right)$, and $M_{n}\left(D_{v}\right)$ is essentially an $n$-dimensional matrix space over polynomial functions $\mathbb{R}^{v}\rightarrow\mathbb{R}$.

With care and attention to detail, the matrices can be treated as vectors for certain purposes.

In other words, function $f$ is minimized subject to the constraint that $x$ is a conservative field.

Another way to write the constraint is via the symmetry of second derivatives: since $x_{jk}=D_{k}X_{j}$, the constraint is $$ D_{i}x_{jk}=D_{k}x_{ji} $$
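To make the symmetry condition concrete, here is a small numerical sanity check (all names and the particular potential are illustrative): a polynomial potential $X:\mathbb{R}^3\to\mathbb{R}^3$ is chosen, the matrix field $x=\operatorname{Jac}(X)$ is formed by central differences, and the identity $D_{i}x_{jk}=D_{k}x_{ji}$ is verified at a sample point.

```python
import numpy as np

# Hypothetical polynomial potential X: R^3 -> R^3 (an illustrative stand-in).
def X(p):
    p1, p2, p3 = p
    return np.array([p1**2 * p2, p2 * p3, p1 + p3**2])

def jac(F, p, h=1e-6):
    """Central-difference Jacobian of a vector field F at point p."""
    J = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = h
        J[:, k] = (F(p + e) - F(p - e)) / (2 * h)
    return J

def x_field(p):
    # x = Jac(X): a matrix-valued field that is conservative by construction.
    return jac(X, p)

# Check D_i x_{jk} = D_k x_{ji} at a sample point, again by central differences.
p0 = np.array([0.3, -0.7, 1.1])
h = 1e-4
max_violation = 0.0
for i in range(3):
    for j in range(3):
        for k in range(3):
            ei = np.zeros(3); ei[i] = h
            ek = np.zeros(3); ek[k] = h
            Di_xjk = (x_field(p0 + ei)[j, k] - x_field(p0 - ei)[j, k]) / (2 * h)
            Dk_xji = (x_field(p0 + ek)[j, i] - x_field(p0 - ek)[j, i]) / (2 * h)
            max_violation = max(max_violation, abs(Di_xjk - Dk_xji))

print(max_violation)  # near zero: only finite-difference rounding noise remains
```

Since $x_{jk}=D_{k}X_{j}$ here, both sides equal the mixed partial $D_{i}D_{k}X_{j}$, so the violation is pure numerical noise.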

How can this constraint be efficiently implemented in practice, for example in the case of simple gradient descent?
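One standard approach (a sketch, not necessarily the best fit for this exact setting) is projected gradient descent: take an unconstrained gradient step, then project back onto the conservative subspace. In the simplified linear instance where the field is $x(p)=Ap$, conservativity reduces to $A=A^{\mathsf T}$, and the Euclidean projection is symmetrization, which costs only $O(n^2)$ per step. The objective $f$ below is a toy choice for illustration:

```python
import numpy as np

n = 16
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))  # hypothetical problem data

def f(A):
    # Toy objective (an assumption): squared distance to a fixed matrix B.
    return 0.5 * np.sum((A - B) ** 2)

def grad_f(A):
    return A - B

def project_conservative(A):
    # For a linear field x(p) = A p, "x is conservative" reduces to A = A^T;
    # the Euclidean projection onto that subspace is symmetrization.
    return 0.5 * (A + A.T)

A = project_conservative(rng.standard_normal((n, n)))
step = 0.5
for _ in range(200):
    A = project_conservative(A - step * grad_f(A))

# For this toy objective the projected optimum is the symmetric part of B,
# so A should converge to 0.5 * (B + B.T).
```

For higher-degree polynomial fields the same idea applies coefficient-wise: the conservative fields form a linear subspace of the coefficient space, so the projection is again a linear map that can be precomputed.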

It is assumed that the dimension $n$ is relatively high, such as $n=16$ or more, so that applying all the numerous constraints in a brute-force fashion is undesirable.

There is an equivalent way to formulate this problem with an objective function $\tilde{f}\left(X\right)$, whose implementation poses the same practical efficiency difficulties as the original problem.
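For completeness, the reparameterized form can also be sketched: optimize the potential directly, so that $x=\nabla X$ is conservative by construction and the constraint disappears. In the same simplified linear setting, a scalar potential $\phi(p)=\tfrac12 p^{\mathsf T}Sp$ induces the field matrix $A=\tfrac12(S+S^{\mathsf T})$, and plain gradient descent runs unconstrained over $S$ (the objective is again a toy choice):

```python
import numpy as np

n = 16
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))  # hypothetical problem data

def field_from_potential(S):
    # Scalar potential phi(p) = 0.5 * p^T S p induces the gradient field
    # x(p) = A p with A = 0.5 * (S + S^T): symmetric, hence conservative,
    # by construction -- no explicit constraint is needed.
    return 0.5 * (S + S.T)

def grad_S(S):
    # Chain rule for the toy objective f(A) = 0.5 * ||A - B||_F^2:
    # df/dS = 0.5 * ((A - B) + (A - B)^T).
    G = field_from_potential(S) - B
    return 0.5 * (G + G.T)

S = rng.standard_normal((n, n))
for _ in range(400):
    S = S - 0.5 * grad_S(S)

A = field_from_potential(S)
# A should converge to the symmetric part of B, the constrained optimum.
```

The trade-off is exactly the one noted above: the constraint is eliminated, but every evaluation of $\tilde{f}$ requires differentiating the potential, so the per-iteration cost moves into the objective rather than going away.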