What kind of distribution do we get if we constrain the range to be the unit interval and also constrain the mean to be $\alpha$?
Consulting a table of maximum entropy distributions, we find the following two examples.
The standard normal distribution is the maximum entropy distribution with mean 0 and variance 1.
Similarly, if we restrict our attention to distributions supported on a bounded interval, then the uniform distribution on that interval is the maximum entropy distribution.
I'm writing a small library to estimate quantiles in a stream of data. I currently keep track of the sample minimum, maximum, and mean, but I am not yet using the sample mean to estimate quantiles directly.
So, what kind of distribution do we get if we constrain the range to be the unit interval and also constrain the mean to be $\alpha$?
In general, is there a good strategy for figuring out a closed form for the pdf of a maximum entropy distribution satisfying an arbitrary collection of properties?
Let $\alpha \in (0, 1)$ and let $\mathcal{A}$ denote the set of all $f \in L^1([0,1])$ satisfying the constraints
$$f \geq 0, \qquad \int_{0}^{1} f(x) \, \mathrm{d}x = 1, \qquad \int_{0}^{1} x f(x) \, \mathrm{d}x = \alpha. \tag{*} $$
We claim that the differential entropy $h(\cdot)$ is maximized over $\mathcal{A}$ by
$$f_{\max}(x) = e^{ax+b} \mathbf{1}_{[0,1]}(x)$$
for some $a, b \in \mathbb{R}$. (Note that such $a$ and $b$ are uniquely determined from $\text{(*)}$.) Indeed, for any $f \in \mathcal{A}$, the Kullback-Leibler divergence $D(f||f_{\max})$ is non-negative, and so, we get
$$ 0 \leq D(f||f_{\max}) = \int_{0}^{1} f(x) \log \left(\frac{f(x)}{e^{ax+b}}\right) \, \mathrm{d}x = \int_{0}^{1} f(x) \log f(x) \, \mathrm{d}x - \int_{0}^{1} (ax+b) f(x) \, \mathrm{d}x = -h(f) - (a\alpha + b), $$
where the last equality uses the constraints $\text{(*)}$.
On the other hand, since $f_{\max}$ itself satisfies $\text{(*)}$, a direct computation gives $h(f_{\max}) = -\int_{0}^{1} f_{\max}(x)(ax+b) \, \mathrm{d}x = -(a\alpha + b)$. So it follows that
$$ h(f) \leq h(f_{\max}) $$
for all $f \in \mathcal{A}$. This proves that $f_{\max}$ is a maximizer of $h(\cdot)$ over $\mathcal{A}$. (Together with the strict concavity of $h(\cdot)$, it is easy to show that $f_{\max}$ is the unique maximizer up to modification on null sets.)
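As a concrete sanity check, here is a small Python sketch (the function names are mine, not from any library) that solves $\text{(*)}$ for $a$ and $b$ numerically: the mean of the density $e^{ax+b}\mathbf{1}_{[0,1]}$ is strictly increasing in $a$, so bisection suffices, and $b$ then follows from the normalization constraint.

```python
import math

def trunc_exp_mean(a):
    """Mean of the density f(x) = a*e^(a*x)/(e^a - 1) on [0, 1]."""
    if abs(a) < 1e-12:
        return 0.5  # limiting (uniform) case as a -> 0
    return 1.0 / (1.0 - math.exp(-a)) - 1.0 / a

def solve_ab(alpha, lo=-50.0, hi=50.0, tol=1e-12):
    """Find (a, b) so that e^(a*x + b) on [0, 1] integrates to 1 with mean alpha.

    Assumes alpha is not too close to 0 or 1 (a stays in [-50, 50]).
    """
    # Bisection: trunc_exp_mean is strictly increasing in a.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if trunc_exp_mean(mid) < alpha:
            lo = mid
        else:
            hi = mid
    a = 0.5 * (lo + hi)
    # Normalization: e^b * (e^a - 1)/a = 1  =>  b = -log((e^a - 1)/a).
    b = -math.log(math.expm1(a) / a) if abs(a) > 1e-9 else 0.0
    return a, b

def simpson(g, n=1000):
    """Composite Simpson's rule for the integral of g over [0, 1]."""
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

alpha = 0.7
a, b = solve_ab(alpha)
f = lambda x: math.exp(a * x + b)
print(round(simpson(f), 6))                   # total mass -> 1.0
print(round(simpson(lambda x: x * f(x)), 6))  # mean -> 0.7
# Entropy matches the closed form -(a*alpha + b) from the proof above.
entropy = simpson(lambda x: -f(x) * math.log(f(x)))
print(abs(entropy + (a * alpha + b)) < 1e-8)  # True
```

Note that $\alpha = 1/2$ recovers $a = b = 0$, i.e. the uniform distribution, consistent with the bounded-support example above.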
For a reference, see Theorem 5.1 of Keith Conrad, *Probability Distributions and Maximum Entropy*.