I have the following maximization problem originating in stochastic control theory. However, I shall present it as a general optimization problem.
For a $U \subseteq \mathbb{R}$ (we may as well take $U = \mathbb{R}$), let us denote by $\mathcal{P}(U)$ the set of all density functions on $U$, which may also depend on $x \in \mathbb{R}$; that is, $\int_U \pi(x,u)\, du = 1$ and $\pi(x,u) \geq 0$ for all $x$ and $u.$ We are looking for a density $\pi(x,u)$ that maximizes the integral \begin{equation} \int_U \big(h(x,u) - \lambda \ln(\pi(x,u)) \big) \pi(x,u)\, du, \end{equation}
for some function $h(x,u),$ the only constraint on which is that all the integrals involved are finite.
The paper I am studying says that the optimal $\pi^*(x,u)$ is of the form
\begin{align} \pi^*(x,u) = \frac{\exp\big(\frac{1}{\lambda}h(x,u)\big)}{\int_U \exp\big(\frac{1}{\lambda}h(x,v)\big)\, dv}. \end{align}
There is no other explanation in the paper. I could work out that if we insert the optimal density into the integral and denote $K(x) = \int_U \exp\big(\frac{1}{\lambda}h(x,u)\big)\, du,$ then the integral equals $\lambda \ln(K(x)),$ so $h(x,u)$ no longer appears explicitly, although it is still hidden inside $K(x).$ But I can't see why it is obvious that this is the maximal possible value of the integral. What am I missing? Thank you for your help.
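For what it's worth, here is a quick numerical sanity check of the claim (not a proof). It discretizes $U = [-3, 3]$ on a grid, picks an arbitrary $h(x,u)$ at a fixed $x$ (the choice $h = -u^2 + \sin u$ is just an illustration, not from the paper), computes $\pi^*$ and the objective by Riemann sums, and verifies both that the objective at $\pi^*$ equals $\lambda \ln K(x)$ and that mixing $\pi^*$ with a uniform density always lowers the objective:

```python
import numpy as np

lam = 0.5
u = np.linspace(-3.0, 3.0, 2001)
du = u[1] - u[0]

def integrate(f):
    # simple Riemann-sum quadrature on the grid
    return f.sum() * du

# an arbitrary smooth h(x, u) at some fixed x (illustrative choice only)
h = -u**2 + np.sin(u)

# candidate optimal density: pi*(u) = exp(h/lam) / K, K = integral of exp(h/lam)
w = np.exp(h / lam)
K = integrate(w)
pi_star = w / K

def objective(pi):
    # the functional: integral of (h - lam * ln(pi)) * pi over U
    return integrate((h - lam * np.log(pi)) * pi)

val_star = objective(pi_star)
print(val_star, lam * np.log(K))  # these agree, as computed in the question

# perturb pi* while keeping it a density: mix with the uniform density on U
uniform = np.full_like(u, 1.0 / (u[-1] - u[0]))
for eps in (0.1, 0.3, 0.5):
    pi_eps = (1 - eps) * pi_star + eps * uniform
    assert objective(pi_eps) < val_star  # every such perturbation scores lower
```

Of course this only probes perturbations in one direction (mixtures with the uniform density), so it supports the claim without explaining *why* the maximum holds.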