My objective is to (numerically) maximize an expectation of the form of
$$
E[\exp(g(\mathbf{X}))]
$$ where $\mathbf{X}$ is a random variable and $g$ is a non-convex function of $\mathbf{X}$. If I consider the random variable $\mathbf{Y} := g(\mathbf{X})$, it seems to me that, due to the convexity of the $\exp$ function, I can apply Jensen's inequality to write:
$$
\exp({E[\mathbf{Y}]}) \leq E[\exp({\mathbf{Y}})] \\
\exp(E[g(\mathbf{X})]) \leq E[\exp(g(\mathbf{X}))]
$$ and then simply choose to maximize the logarithm of the LHS, i.e. $E[g(\mathbf{X})]$, even though the overall objective is not convex.
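As a quick Monte Carlo sanity check of the inequality itself (with an arbitrary non-convex $g$ and a standard normal $\mathbf{X}$, both chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)       # samples of X ~ N(0, 1) (illustrative choice)
y = np.sin(x) + 0.5 * x**2         # Y = g(X) for an arbitrary non-convex g

lhs = np.exp(y.mean())             # Monte Carlo estimate of exp(E[Y])
rhs = np.exp(y).mean()             # Monte Carlo estimate of E[exp(Y)]
assert lhs <= rhs                  # Jensen's inequality holds
```

This only confirms the bound holds, not that the two sides share a maximizer.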
Is there a fault in my argument?
Also, if the above is correct, supposing that $g(\mathbf{X}) = g_1(g_2(\dots g_n(\mathbf{X})))$ where all of the $g_i$ apart from $g_n$ are convex, can I repeatedly apply the same argument to ultimately maximize $E[g_n(\mathbf{X})]$?
Thanks.
I'm assuming $\mathbf{X}$ is a parameterized random variable and you are trying to maximize with respect to the parameters. Otherwise, there is nothing to optimize: $E[\exp(g(\mathbf{X}))]$ is simply a fixed number.
That being said, although it is true (by Jensen's Inequality) that:
$$\exp(E[\mathbf{Y}]) \leq E[\exp(\mathbf{Y})]$$
maximizing the LHS is not necessarily equivalent to maximizing the RHS. Jensen's inequality only gives a lower bound, and the parameters that maximize a lower bound need not maximize the quantity being bounded. So no, you cannot do this unless you prove that maximizing the LHS is equivalent to maximizing the RHS in your particular case.
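Here is a toy counterexample (a hypothetical two-parameter family, constructed just to show the maximizers can differ). Because $\exp$ is convex, spreading the distribution of $\mathbf{Y}$ out can raise $E[\exp(\mathbf{Y})]$ even while lowering $E[\mathbf{Y}]$:

```python
import numpy as np

# Two candidate distributions for Y = g(X), indexed by theta (toy example):
#   theta = 1: Y = 0.1 with probability 1
#   theta = 2: Y = +2 or -2, each with probability 1/2
outcomes = {
    1: (np.array([0.1]), np.array([1.0])),
    2: (np.array([2.0, -2.0]), np.array([0.5, 0.5])),
}

def mean_Y(theta):
    y, p = outcomes[theta]
    return float(np.dot(p, y))            # E[Y]

def mean_expY(theta):
    y, p = outcomes[theta]
    return float(np.dot(p, np.exp(y)))    # E[exp(Y)]

best_lhs = max(outcomes, key=mean_Y)      # theta = 1: E[Y] = 0.1 beats 0
best_rhs = max(outcomes, key=mean_expY)   # theta = 2: variance inflates E[exp(Y)]
# The two criteria pick different parameters.
```

So the bound-maximizing parameter ($\theta = 1$) is not the one that maximizes the original objective ($\theta = 2$).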