Please note that I know almost nothing about the calculus of variations, and I'm familiar with analysis only at an undergraduate (i.e., Baby Rudin) level.
Let $[a, b] \subset \mathbb{R}$, and let $p: [a,b] \to \mathbb{R}_{\geq 0}$ be such that $\int_a^b p = 1$. Let $$H(p) = -\int_a^b p\log p\text{.}$$ We define $p(x)\log[p(x)] = 0$ whenever $p(x) = 0$. Consider the problem
$$\max H(p) \text{ subject to }\int_a^b p = 1\text{.}$$
(This is the uncountable-set analogue of a previous question I asked.)
According to a textbook I have, this is solved using the calculus of variations. Every time I read something about the calculus of variations, the Euler-Lagrange equations pop up. However, this problem differs slightly from most of the examples I've seen online in that it carries a constraint.
I know that the solution is apparently (and this is provided in the textbook)
$$p(x) = \dfrac{1}{b-a}\mathbf{1}_{[a, b]}(x)$$ where $\mathbf{1}_{A}(x) = 1$ for $x \in A$, and $0$ otherwise.
But I'm not sure at all how to show this.
We can think of this constrained problem as finding the stationary points of the functional
$$ -\int_a^b \left[ p(t) \log p(t) + \lambda p(t) \right] \mathrm d t, $$
where $\lambda$ is a Lagrange multiplier for the constraint that $p$ must integrate to unity on $[a,b]$.
The Euler-Lagrange equation in this instance reads
$$ \log p + \lambda +1 = 0. $$
Notice that this implies that $p$ must be constant, since $\lambda$ does not depend on $t$; and the only probability density on $[a,b]$ that is constant is the uniform one.
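To spell out the step from the Euler-Lagrange equation to the claimed solution: exponentiating gives a constant, and the normalisation constraint then pins down the value of that constant (equivalently, of $\lambda$):
$$p(t) = e^{-(1+\lambda)}, \qquad 1 = \int_a^b p(t)\,\mathrm{d}t = (b-a)\,e^{-(1+\lambda)} \quad\Longrightarrow\quad p(t) = \frac{1}{b-a}\text{,}$$
which is exactly the uniform density $\frac{1}{b-a}\mathbf{1}_{[a,b]}$ stated in the question.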
Observe also that one does not need to appeal to the Euler-Lagrange equation here: we can naively maximise the integrand pointwise in $p(t)$ and discover that pointwise maximisation gives the same answer. Explaining this to an undergraduate therefore requires only some facility with simple constrained optimisation, rather than the calculus of variations (the Lagrange multiplier method is standard material in any good undergraduate economics course).
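As a quick numerical sanity check (not part of the argument above; the interval $[0,2]$ and the comparison densities are arbitrary choices of mine), one can discretise $H(p)$ with a Riemann sum and verify that the uniform density beats a couple of other densities that also integrate to one:

```python
import numpy as np

# Interval [a, b] = [0, 2] and a fine grid for the Riemann sums.
a, b = 0.0, 2.0
n = 10_000
x = np.linspace(a, b, n)
dx = (b - a) / (n - 1)

def entropy(p):
    """Riemann-sum approximation of -∫ p log p, with 0·log 0 := 0."""
    terms = np.zeros_like(p)
    mask = p > 0
    terms[mask] = p[mask] * np.log(p[mask])
    return -np.sum(terms) * dx

def normalise(f):
    """Rescale a nonnegative function so it integrates (numerically) to 1."""
    return f / (np.sum(f) * dx)

uniform = normalise(np.ones_like(x))
# Two arbitrary competitors that also integrate to 1 on [0, 2]:
triangular = normalise(1.0 - np.abs(x - (a + b) / 2) / ((b - a) / 2))
tilted = normalise(1.0 + 0.5 * x)

h_uniform = entropy(uniform)
print(h_uniform)                        # ≈ log(b - a) = log 2 ≈ 0.693
print(entropy(triangular) < h_uniform)  # True
print(entropy(tilted) < h_uniform)      # True
```

The uniform density attains (approximately) $\log(b-a)$, the known maximum, and both competitors come out strictly below it, as the strict concavity of $-p\log p$ predicts.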