I'm trying to parse a derivation that uses a Lagrange-multiplier Fourier technique, and can't quite grasp an intermediate step. The author sets up
$$ e_x(z,t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{+\infty}{(f(\omega) e^{- j (\omega/c_0)n(\omega)z})\ e^{j\omega t} d\omega} $$
where $n(\omega)$ is a complex function of $\omega$, $c_0$ and $z$ are real, and $f(\omega)$ is the function of interest. They then set up a functional to minimize,
$$\xi = -e_x(z,t) + \lambda W $$
and then take the gradient of $\xi$ with respect to $f(\omega)$:
$$ \nabla \xi = - \frac{1}{\sqrt{2 \pi}} \left( e^{- j (\omega/c_0)n(\omega)z} \right)^\star \ e^{-j\omega t} +\ \dots$$
where $\star$ denotes the complex conjugate.
Mathematica's VariationalD and Matlab's functionalDerivative both seem to balk at taking the gradient of that improper integral for some reason.
This is probably super straightforward, but where did that complex conjugate come from? Previous questions would seem to suggest that the gradient of a functional of $f(\omega)$ should just involve the kernel itself, with no conjugate.
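To convince myself the conjugate really does show up, here is a small numerical sanity check (in Python/NumPy rather than the Mathematica/Matlab tools above, and with an arbitrary toy kernel standing in for $e^{-j(\omega/c_0)n(\omega)z}e^{j\omega t}$). It discretizes a real-valued linear functional of a complex vector and computes its steepest-ascent direction by finite differences:

```python
import numpy as np

# Toy discretization: J[f] = Re( sum_i k_i f_i ), a real-valued linear
# functional of the complex vector f. The kernel k_i stands in for
# e^{-j (w/c0) n(w) z} e^{j w t} (arbitrary values here).
rng = np.random.default_rng(0)
N = 8
k = rng.normal(size=N) + 1j * rng.normal(size=N)   # complex kernel
f0 = rng.normal(size=N) + 1j * rng.normal(size=N)  # expansion point

def J(f):
    return np.real(np.sum(k * f))

# Finite-difference gradient with respect to the real and imaginary
# parts of f, packed back into one complex vector as g = dJ/dx + j dJ/dy.
eps = 1e-7
g = np.zeros(N, dtype=complex)
for i in range(N):
    e = np.zeros(N)
    e[i] = eps
    g[i] = (J(f0 + e) - J(f0 - e)) / (2 * eps) \
         + 1j * (J(f0 + 1j * e) - J(f0 - 1j * e)) / (2 * eps)

# The steepest-ascent direction is the *conjugate* of the kernel,
# not the kernel itself:
print(np.allclose(g, np.conj(k), atol=1e-6))  # True
```

So the conjugate appears as soon as the functional being extremized is real-valued, which is exactly the situation for a physical field.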
I still have no mathematical understanding of how this could possibly be the case, but (as discussed in D. M. Pozar, "Waveform Optimizations for Ultra-Wideband Radio Systems") this problem is exhaustively considered in L. E. Franks, Signal Theory, revised edition, Dowden & Culver, Stroudsburg, PA, 1981.
In that text, the gradient of this kind of functional is indeed given in terms of the complex conjugate of the transfer function.
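For anyone who lands here later, here is the step as I now understand it (my own sketch, not a quote from Franks): because the physical field $e_x(z,t)$ is real, it equals the real part of the analytic integral, and differentiating a real quantity with respect to $f^\star(\omega)$ conjugates the kernel. Writing $k(\omega) = e^{-j(\omega/c_0)n(\omega)z}$,

$$ e_x(z,t) = \operatorname{Re}\left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} k(\omega)\, f(\omega)\, e^{j\omega t}\, d\omega \right] = \frac{1}{2\sqrt{2\pi}} \int_{-\infty}^{+\infty} \left( k f e^{j\omega t} + k^\star f^\star e^{-j\omega t} \right) d\omega, $$

so the variation with respect to $f^\star(\omega)$ picks out only the conjugated term:

$$ \frac{\delta \xi}{\delta f^\star(\omega)} = -\frac{1}{2\sqrt{2\pi}}\, k^\star(\omega)\, e^{-j\omega t} + \lambda \frac{\delta W}{\delta f^\star(\omega)}, $$

which reproduces the conjugated kernel in the posted gradient (up to a factor of $1/2$, which conventions differ on).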
Note also that this isn't obviously the same kind of variational problem that tools like functionalDerivative are set up to solve. They seem to assume you want to extremize a functional like $-f(x) + \lambda W$ integrated over some path with well-defined endpoints (as in Feynman's lecture on the Principle of Least Action), but it's not obvious to me that this construction makes sense for the present problem. I might be grossly misunderstanding, however.