I have a parameter $\theta$ with some probability distribution supported on the interval $[0,1]$, and I know that its first two raw moments are equal: $E[\theta^2] = E[\theta] = p$.
So $\operatorname{Var}(\theta) = E[\theta^2] - E[\theta]^2 = p - p^2 = p(1-p)$.
So it looks and walks like a Bernoulli distribution. Is it? Why?
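As a quick sanity check of the forward direction (a sketch, not part of any proof): for a Bernoulli variable, $\theta^k = \theta$ for $\theta \in \{0,1\}$, so every raw moment equals $p$ and the variance is $p(1-p)$. Exact rational arithmetic avoids floating-point noise:

```python
from fractions import Fraction

def bernoulli_moments(p):
    """First two raw moments of a Bernoulli(p) variable on {0, 1}.

    Since 0**k = 0 and 1**k = 1, every raw moment collapses to p.
    """
    e1 = Fraction(0) * (1 - p) + Fraction(1) * p        # E[theta]
    e2 = Fraction(0)**2 * (1 - p) + Fraction(1)**2 * p  # E[theta^2]
    return e1, e2

p = Fraction(3, 10)
e1, e2 = bernoulli_moments(p)
print(e1 == e2 == p)               # moments coincide
print(e2 - e1**2 == p * (1 - p))   # variance is p(1-p)
```

This only confirms that Bernoulli satisfies the moment condition; the question is the converse.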
Edit: Thanks for the hints. All I could come up with so far: arguing by contradiction, if it were not a Bernoulli distribution then there must exist a point $a \in (0,1)$ s.t. $f(a) > 0$ (where $f$ is the pdf, assuming one exists). So:
$$
E[\theta] = \int_0^1 x f(x)\,dx = p = E[\theta^2] = \int_0^1 x^2 f(x)\,dx \\
\int_0^1 x f(x)\,dx = \int_0^1 x^2 f(x)\,dx \\
\int_0^1 (x - x^2) f(x)\,dx = 0
$$
Now the factor $(x - x^2)$ is strictly positive for every $x \in (0,1)$ and vanishes only at $x=0$ and $x=1$, while $f(x)$ is non-negative by definition, since it is a pdf. For the integral of a non-negative function to be exactly $0$, that function must vanish almost everywhere. So: $$ (x - x^2) f(x) = 0 \ \text{a.e. on } [0,1] \iff f(x) = 0 \ \text{a.e. on } (0,1) $$
Is this proof valid? Could it be proved more easily? The fact that I don't know whether the distribution is discrete or continuous makes this very difficult for me.
Since $\theta \in [0,1],$ we have $$\theta - \theta^2 \ge 0.$$ On the other hand, by hypothesis, $$E(\theta - \theta^2) = E(\theta) - E(\theta^2) = 0.$$ Therefore, almost surely, $\theta - \theta^2 = 0,$ i.e. $\theta \in \{0,1\},$ i.e. $\theta$ has a Bernoulli distribution. Note that this argument needs no density: it works for any distribution on $[0,1]$, discrete, continuous, or mixed.
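To illustrate the converse direction numerically (a hedged sketch, using the standard closed-form raw moments of the Beta family as an example of distributions with mass strictly inside $(0,1)$): any such distribution has $E[\theta - \theta^2] > 0$, so the two moments cannot be equal.

```python
from fractions import Fraction as F

def beta_moments(a, b):
    """First two raw moments of a Beta(a, b) distribution (closed form):
    E[theta]   = a / (a + b)
    E[theta^2] = a(a+1) / ((a+b)(a+b+1))
    """
    m1 = F(a, a + b)
    m2 = F(a * (a + 1), (a + b) * (a + b + 1))
    return m1, m2

# Every Beta distribution puts mass inside (0, 1), so E[theta - theta^2] > 0
# and the moment condition E[theta^2] = E[theta] fails:
for a, b in [(1, 1), (2, 5), (10, 3)]:
    m1, m2 = beta_moments(a, b)
    print(a, b, m1 - m2 > 0)  # True in every case
```

This matches the argument above: equality of the first two moments forces all the mass onto $\{0,1\}$.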