Consider a random variable $X$ whose pdf is $f(x;\theta)=\theta x^{\theta-1}$ for $0<x<1$ and zero otherwise.
i) Show that this is a density function.
ii) Determine the moment estimate of $\theta$ on the basis of a random sample $x_1,\ldots,x_n$.
For part ii), does this mean I should use the method of moments to find the estimator?
I think Andre Nicholas adequately answered your question regarding $f>0$, so I will address the moment estimate.
The idea of a moment estimate is to express a moment of the distribution in terms of the desired parameter, then equate it to the corresponding sample moment and solve. Let's try the expected value of $X$:
$E[X] = \int\limits_{0}^1 x(\theta x^{\theta-1}) dx = \int\limits_{0}^1 \theta x^{\theta} dx = [\frac{\theta x^{\theta+1}}{\theta+1}]_{x=0}^{x=1} = \frac{\theta}{1+\theta}$
Therefore, setting $\frac{\hat\theta}{1+\hat\theta}= \bar x$ and solving gives $\hat\theta = \frac{\bar x}{1-\bar x}$. That is the moment estimate. Contrast this with the maximum likelihood estimate:
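As a quick sanity check (my own sketch, not part of the original answer), we can simulate from $f$ and apply the moment estimate. Since the CDF is $F(x)=x^\theta$ on $(0,1)$, inverse-CDF sampling gives $X = U^{1/\theta}$ for $U\sim\mathrm{Uniform}(0,1)$; the variable names here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from f(x; theta) = theta * x^(theta - 1) by inverse-CDF sampling:
# F(x) = x^theta, so X = U^(1/theta) with U ~ Uniform(0, 1).
theta_true = 3.0
x = rng.uniform(size=100_000) ** (1.0 / theta_true)

# Method-of-moments estimate: solve x_bar = theta / (1 + theta)
x_bar = x.mean()
theta_mom = x_bar / (1.0 - x_bar)
print(theta_mom)  # should land near 3 for a sample this large
```

With a large sample the estimate sits close to the true value; for small samples it is noisier, as the simulation at the end shows.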
The likelihood of a sample of values $x=\{x_1,\ldots,x_N\}$ is $L(x;\theta)=\theta^N\prod\limits_{i=1}^N x_i^{\theta-1}$. Taking the logarithm, we get the log-likelihood function, which is easier to optimize:
$\mathcal{L}(x;\theta) = N\ln(\theta) + (\theta-1)\sum\limits_{i=1}^N \ln(x_i)$
Take the derivative with respect to $\theta$ and set it equal to $0$: $\frac{N}{\theta} + \sum\limits_{i=1}^N \ln(x_i) = 0$. Solving for $\theta$, we get:
$\hat\theta_{mle} = \frac{-N}{\sum\limits_{i=1}^N \ln(x_i)}$ which will in general give different estimates from the method of moments.
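The closed-form MLE is just as easy to compute on a sample; a minimal sketch (again my own code, drawing from the same density as above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same inverse-CDF sampler as before: X = U^(1/theta)
theta_true = 3.0
x = rng.uniform(size=100_000) ** (1.0 / theta_true)

# Maximum likelihood estimate: theta_hat = -N / sum(log(x_i))
theta_mle = -len(x) / np.log(x).sum()
print(theta_mle)  # should land near 3 for a sample this large
```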
Sometimes the MLE is a better (i.e., more efficient) estimator. I tested this by simulating from your density with $\theta = 3$ and looking at the sampling distribution of each estimator for a sample size of 20. See below:
As you can see, the two sampling distributions are almost identical, so neither estimator is clearly preferred here.
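The experiment described above can be sketched as follows (my own code, not the original simulation; it prints summary statistics of the two sampling distributions rather than plotting them):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n, reps = 3.0, 20, 5_000

# Repeatedly draw samples of size 20 and compute both estimators.
mom = np.empty(reps)
mle = np.empty(reps)
for r in range(reps):
    x = rng.uniform(size=n) ** (1.0 / theta_true)  # inverse-CDF sampling
    x_bar = x.mean()
    mom[r] = x_bar / (1.0 - x_bar)   # method of moments
    mle[r] = -n / np.log(x).sum()    # maximum likelihood

for name, est in [("MoM", mom), ("MLE", mle)]:
    print(f"{name}: mean = {est.mean():.3f}, sd = {est.std():.3f}")
```

Both sampling distributions center near $\theta = 3$ (each with some upward small-sample bias) and have similar spread at $n = 20$, which matches the conclusion above.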