My lectures discuss estimating $Var(X)$ when $E(X)$ has already been estimated, but there is something I'm not sure is a problem or not.
Suppose $Var(X) = f(E(X))$; for example, for a Bernoulli r.v. we have $Var(X) = E(X)(1-E(X))$. If I have estimated $E(X)$ to be $\hat{\theta}$, then I could use $f(\hat{\theta})$ as an estimate of $Var(X)$.
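To make this concrete, here is a tiny sketch of the plug-in idea for the Bernoulli case (my own illustration with a made-up sample, not from the lectures):

```python
import numpy as np

# Plug-in estimation for Bernoulli data:
# estimate E(X) by the sample mean, then plug it into
# f(m) = m * (1 - m) to get an estimate of Var(X).
x = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])   # hypothetical sample
theta_hat = x.mean()                    # estimate of E(X): 0.4
var_hat = theta_hat * (1 - theta_hat)   # plug-in estimate of Var(X): 0.24
print(theta_hat, var_hat)
```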
However, something in this reasoning seems unclear to me about the error in the estimate: since the estimate of $E(X)$ is never exact, the estimate of $Var(X)$ could be much less accurate, depending on the actual function $f$.
Is this method used because the estimate $\hat{\theta}$ is assumed to equal the actual value of $E(X)$? I don't have much background in statistical inference, so I don't know whether this is a real problem. Is it?
What @Ian said is true, though not complete: for special distributions (like the Bernoulli), there may be a sufficient statistic which (loosely speaking) contains all the information in the sample about the parameter(s). For a more formal discussion, see here: https://en.wikipedia.org/wiki/Sufficient_statistic#Bernoulli_distribution
So while in general you'd estimate a variance by the sample variance, in this special case you'd better use a function of the sufficient statistic. Now $\hat{\theta}(1-\hat{\theta})$ would be biased, because an easy calculation shows $\displaystyle\mathbb{E}\,\hat{\theta}(1-\hat{\theta})=\frac{n-1}n\,p(1-p)$, but the remedy is obvious: use $\displaystyle\frac{n}{n-1}\,\hat{\theta}(1-\hat{\theta})$ as the estimator instead.
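You can check both the bias factor and the correction by simulation. A minimal sketch (parameter values $p=0.3$, $n=10$ are my own choices for illustration):

```python
import numpy as np

# Monte Carlo check: the plug-in estimator theta_hat*(1 - theta_hat)
# underestimates p*(1-p) by the factor (n-1)/n, and multiplying by
# n/(n-1) removes the bias.
rng = np.random.default_rng(0)
p, n, reps = 0.3, 10, 200_000

samples = rng.binomial(1, p, size=(reps, n))
theta_hat = samples.mean(axis=1)           # estimate of E(X) per replication
plug_in = theta_hat * (1 - theta_hat)      # biased plug-in estimator
corrected = n / (n - 1) * plug_in          # bias-corrected estimator

print(p * (1 - p))        # true Var(X) = 0.21
print(plug_in.mean())     # ≈ (n-1)/n * 0.21 = 0.189
print(corrected.mean())   # ≈ 0.21
```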