Motivation
Let $X$ be a $\mathcal{N}\Big(-\frac{\sigma^2}{2},\sigma^2\Big)$ random variable, i.e. its probability density function $f(x)$ is given by \begin{equation} f(x)=\frac{1}{ \sqrt{2\pi\sigma^2} } \exp\Bigg(\frac{-\big(x+\frac{\sigma^2}{2}\big)^2}{2\sigma^2}\Bigg)\quad \text{and}\quad F(x)=\int_{-\infty}^{x}f(u)\,du. \end{equation} With this choice of mean and variance, $\mathbb{E}\big[e^X\big]=1$ for every $\sigma>0$. Assume $\mathbb{E}\big[X^2\big]\leq80$; since $\mathbb{E}\big[X^2\big]=\sigma^2+\frac{\sigma^4}{4}$, this constraint gives $0<\sigma\leq4$. Now we want to maximise $\mathbb{E}\big[(e^X-1)^+\big]$ over $F$ (or $f=F'$), i.e. to maximise the integral \begin{equation} \int_{0}^{\infty}(e^x-1)f(x)\,dx. \end{equation} It turns out this is maximised by choosing $\sigma=4$, i.e. by making the second moment as large as possible.
Now consider the generalisation of this problem in which $X$ is allowed to be any continuous random variable, so that we try to find the maximising density (or distribution function). I think this leads to a problem in the calculus of variations.
PROBLEM
Let $F:\mathbb{R}\to[0,1]$ be a non-decreasing, absolutely continuous function such that \begin{equation} \lim_{x\to-\infty}F(x)=0 \quad \text{and} \quad \lim_{x\to\infty}F(x)=1 \end{equation} and $\exists$ $f:\mathbb{R}\to [0,\infty)$ such that \begin{equation} F(x)=\int_{-\infty}^xf(u)du \end{equation} i.e. $F'(x)=f(x)$. Now consider the problem \begin{equation} \max_{F\in\mathcal{C}^1}J\big[x,F'(x)\big] \end{equation} where \begin{equation} J\big[x,F'(x)\big]:=\int_{-\infty}^\infty(e^x-1)\mathbb{1}_{(x\geq0)}F'(x)dx \end{equation} subject to \begin{equation} I_0\big[F'(x)\big]:=F'(x)\geq0 \quad \forall x \end{equation}
\begin{equation} I_1\big[x,F'(x)\big]:=\int_{-\infty}^\infty e^xF'(x)dx-1=0 \end{equation}
\begin{equation} I_2\big[F'(x)\big]:=\int_{-\infty}^\infty F'(x)dx-1=0 \end{equation}
\begin{equation} I_3\big[x,F'(x)\big]:=Q-\int_{-\infty}^\infty x^2F'(x)dx\geq0 \end{equation} for some real constant $Q>0$. Note that I defined $J$ and $I$'s to be functions of $x$ and $F'(x)$ only since they do not depend on $F(x)$ explicitly.
I am trying to follow the Euler-Lagrange Equation (ELE) method. So my Lagrangian is \begin{equation} L\big[ x,F,F'\big]= J+ \lambda_0(x) I_0+\lambda_1 I_1 +\lambda_2 I_2 +\lambda_3 I_3 \end{equation} Then the ELE reads \begin{equation} \frac{\partial L}{\partial F}-\frac{d}{dx}\Big(\frac{\partial L}{\partial F'}\Big)=0 \end{equation} which I do not know how to compute in this case (my calculations give some nonsensical results). Also, I think the Lagrangian may not be correct, since the last constraint (with $I_3$) is an inequality.
My attempt: since none of the integrands depend on $F$, only on $x$ and $F'$, we have $\frac{\partial L}{\partial F}=0$ and the ELE reduces to \begin{equation} \frac{d}{dx}\Big((e^x-1)\mathbb{1}_{(x\geq0)} + \lambda_0(x)+\lambda_1e^x + \lambda_2+ \lambda_3x^2\Big) =0 \end{equation} and thus \begin{equation} (e^x-1)\mathbb{1}_{(x\geq0)} + \lambda_0(x)+\lambda_1e^x +\lambda_2+ \lambda_3x^2=c \end{equation} for some constant $c$. If I am correct, I should now take partial derivatives of $L$ w.r.t. each $\lambda$, which would yield the constraints $I_0$, $I_1$, $I_2$, $I_3$. I guess I am doing something wrong, or I just cannot see how the equation obtained from the ELE would help me find $F$.
**Note:** I think I am having problems with this because the optimal solution may be discontinuous, which is why the ELE does not work.
I don't have a solution, but I want to make a few remarks. I still don't know whether your problem has a solution (among distributions), but here is some help to understand the problem, I hope.
First remark: in your optimization problem, it makes more sense to search for $f$ than for $F$, since $F$ only ever appears through its derivative $F'$. Your optimization problem can therefore be seen as:
$$\max\left\{\int_0^{\infty} (e^x-1)f(x)\,dx \;:\; f\geq 0,\;\; \int_{\mathbb{R}} f(x)\,dx=\int_{\mathbb{R}} e^x f(x)\,dx=1,\;\; \int_{\mathbb{R}} x^2 f(x)\,dx\leq Q\right\}.$$
Second remark: the Euler-Lagrange equation is not a magic tool that will always hand you the solution to your problem. In particular, here the constraint $f\geq 0$ is pointwise, which makes the optimization problem peculiar: it leads to a Lagrange multiplier which is a function (the one you denoted $\lambda_0$), and that function is itself an unknown of the EL equation. To help you study the EL equation, remember that when a constraint is an inequality, its Lagrange multiplier has a sign: with your notations, $\lambda_0$ and $\lambda_3$ are nonpositive. More importantly, especially for $\lambda_0$, these multipliers vanish wherever the constraint is not saturated (complementary slackness). Precisely, you have the information (if $f$ is a measure, you can give a weak formulation of this): $$\forall x, \;\;\lambda_0(x)=0\quad\textrm{or}\quad f(x)=0,$$ which in general helps a lot in studying the optimization problem; likewise $\lambda_3=0$ if $\int_{\mathbb{R}} x^2 f(x)\,dx< Q$.
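For reference, these conditions, combined with the stationarity equation from the question, can be collected formally as follows (keeping this remark's sign convention $\lambda_0\le0$, $\lambda_3\le0$; this is a formal summary of first-order conditions, not a claim that a maximizer exists):

```latex
\begin{aligned}
&(e^x-1)\mathbb{1}_{(x\geq0)} + \lambda_0(x) + \lambda_1 e^x + \lambda_2 + \lambda_3 x^2 = c
&&\text{(stationarity, for some constant } c\text{)}\\[2pt]
&\lambda_0(x)\leq 0 \;\;\forall x, \qquad \lambda_3\leq 0
&&\text{(signs of the inequality multipliers)}\\[2pt]
&\lambda_0(x)\,f(x)=0 \;\;\forall x, \qquad
\lambda_3\Big(Q-\int_{\mathbb{R}}x^2 f(x)\,dx\Big)=0
&&\text{(complementary slackness)}
\end{aligned}
```

In particular, wherever $f>0$ we must have $\lambda_0=0$, so $(e^x-1)\mathbb{1}_{(x\geq0)}+\lambda_1 e^x+\lambda_2+\lambda_3 x^2=c$ on the support of $f$; unless the multipliers degenerate, a combination of $1$, $e^x$, $x^2$ and the payoff can equal a constant only at finitely many points, which already hints at the finitely-supported candidates discussed below.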
Third remark: as @gerw noticed, you should question the existence of a maximizer. When you restrict yourself to Normal distributions, you optimize over $\sigma$, i.e. you have a one-dimensional optimization problem, which easily has a solution since you optimize a continuous function on a compact set. Here, without fixing the type of distribution, you get an optimization problem in infinite dimension, which is much trickier to handle. In particular, it seems much more natural to me to optimize among measures, as you more easily get compactness in that setting. Let me mention that, even if you could solve the EL equation, you wouldn't have a proof that its solution solves your optimization problem unless you first prove that a solution exists. In other words, solving the EL equation only tells you: "if there is a min/maximizer, then it can only be this one". However, I believe @gerw's proofs (both) of the fact that there is no solution are invalid (probably they were written before you added the constraint $f\geq 0$).
Fourth remark: as @gerw also noticed, your functional is linear, which is a very important specificity of this problem. In particular, it says that if there is a solution, then it should be an extremal point of your set of admissible functions, which probably means a sum of Dirac masses (i.e. $F$ is piecewise constant). In that case, the Euler-Lagrange equation seems quite useless, as the unknown $f$ (or $F$) does not appear in the equation (though a small part of it does implicitly, through $\lambda_0$, which tells you where $f$ vanishes). At least, this last remark explains why, if there is a solution, you should expect the second moment to be maximal, as extremal points certainly lead to $\int x^2f(x)\,dx=Q$.
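This extremal-point picture is easy to probe numerically: discretizing $x$ on a grid turns the problem into a finite linear program over the weights $p_i\approx f(x_i)\,dx$, and a vertex of that LP has at most as many nonzero weights as there are constraint rows (here three). A sketch with `scipy.optimize.linprog` (grid bounds, thresholds and $Q=80$, matching the motivating example, are my choices, not part of the question):

```python
import numpy as np
from scipy.optimize import linprog

Q = 80.0                                   # second-moment bound (sigma <= 4 above)
x = np.linspace(-10.0, 6.0, 3201)          # grid; e^x forces a modest right endpoint
payoff = np.maximum(np.exp(x) - 1.0, 0.0)  # (e^x - 1)^+

# maximize payoff @ p  <=>  minimize -payoff @ p, over weights p >= 0
A_eq = np.vstack([np.ones_like(x), np.exp(x)])  # total mass 1, E[e^X] = 1
b_eq = [1.0, 1.0]
A_ub = np.atleast_2d(x**2)                      # E[X^2] <= Q
b_ub = [Q]

res = linprog(-payoff, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")
assert res.status == 0
p = res.x
atoms = x[p > 1e-9]  # the simplex solver returns a vertex: at most 3 atoms
```

On this grid the optimal value exceeds the Gaussian optimum $2\Phi(2)-1\approx0.9545$, the support consists of a couple of atoms deep in the left tail plus one at the right end of the grid, and the second-moment constraint is (up to grid resolution) saturated. Enlarging the grid pushes the right atom further out while its weight shrinks, consistent with the suspicion that no maximizer exists among genuine probability distributions.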