I'm having trouble estimating parameters $N_1$ and $N_2$ for the distribution below:
$$\Pr[X=k] = \frac{k}{\varphi(N_2)-\varphi(N_1-1)} \quad \text{for } k = N_1, N_1+1, \ldots, N_2,$$ where $\varphi(n) = \frac{n(n+1)}{2}$.
The problem with the MLE approach is that when I take derivatives of the log-likelihood function, the sample values disappear altogether.
The likelihood function in this case is:
$$L(N_1,N_2) = \frac{\prod\limits_{i=1}^{n} x_i}{[\varphi(N_2)-\varphi(N_1-1)]^n}$$
Giving the log likelihood function as:
$$l(N_1,N_2)=-n\log\big(\varphi(N_2)-\varphi(N_1-1)\big)+\sum_{i=1}^{n}\log(x_i)$$
The second term is a constant, thus when differentiating w.r.t. $N_1$ (or $N_2$) it will disappear.
How is it possible that the derivative of the log likelihood does not depend on the sample values?
Thanks in advance.
You know that $N_2$ must be at least $\max_i x_i$ and $N_1$ at most $\min_i x_i$; otherwise the likelihood is zero, because some $x_i$ would fall outside the support. It's not hard to see that the likelihood is maximized by setting $N_1=\min_i x_i$ and $N_2=\max_i x_i$: the denominator $\varphi(N_2)-\varphi(N_1-1)$ is increasing in $N_2$ and decreasing in $N_1$, so shrinking the support as far as the data allow makes the denominator as small as possible. This also explains why differentiation fails: the sample enters only through the support constraints, and the maximum sits on the boundary of the feasible region, so the derivative of the smooth part of the log-likelihood never sees the data.
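As a sanity check, here is a small sketch (with a made-up sample and hypothetical helper names) that grid-searches the log-likelihood over feasible $(N_1, N_2)$ pairs and confirms the maximizer is $(\min_i x_i, \max_i x_i)$:

```python
import itertools
import math
import random

def log_likelihood(sample, n1, n2):
    """l(N1, N2) = -n*log(phi(N2) - phi(N1-1)) + sum(log(x_i)),
    valid only when the whole sample lies in {N1, ..., N2}."""
    if min(sample) < n1 or max(sample) > n2:
        return float("-inf")  # some x_i outside the support => likelihood 0
    phi = lambda n: n * (n + 1) // 2
    norm = phi(n2) - phi(n1 - 1)
    return -len(sample) * math.log(norm) + sum(math.log(x) for x in sample)

# A made-up sample drawn with the triangular weights Pr[X=k] ∝ k
# on the (assumed) true support {5, ..., 20}.
random.seed(0)
support = list(range(5, 21))
sample = random.choices(support, weights=support, k=50)

# Grid search over all feasible (N1, N2) pairs around the data.
lo, hi = min(sample), max(sample)
candidates = itertools.product(range(1, lo + 1), range(hi, hi + 10))
best = max(candidates, key=lambda p: log_likelihood(sample, *p))
print(best, (lo, hi))  # the grid maximizer coincides with (min, max)
```

Widening the support in either direction strictly increases the normalizing constant $\varphi(N_2)-\varphi(N_1-1)$, so every candidate other than $(\min_i x_i, \max_i x_i)$ has strictly smaller log-likelihood.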