I'm given the following exercise:
Let $H:(0,\infty)\to[0,\infty)$ be an integrable function and $f(x;\theta)$ the PDF defined by $$f(x;\theta)= \left\{ \begin{array}{ll} \alpha(\theta)\cdot H(x) & 0<x<\theta \\ 0 & x\notin(0,\theta)\\ \end{array} \right. $$ Given a random sample $\underline{X}=X_1, X_2, \ldots, X_n$ from $f(x;\theta)$, find a sufficient statistic for $\theta$. The result seems pretty straightforward here (at least I think so): I need to show that the joint PDF can be written in the form $g\Bigl(\underline{T}(\underline{x});\theta\Bigr)\cdot h(\underline{x})$ (where the underlined $\underline{x}$ means I'm talking about the whole sample). So I can write $$f(\underline{x};\theta)=f(\underline{X}=\underline{x};\theta)=f(X_1=x_1, X_2=x_2, \ldots, X_n=x_n;\theta)$$ $$=f(X_1=x_1;\theta)\cdot f(X_2=x_2;\theta)\cdots f(X_n=x_n;\theta)$$ (due to independence) $$=\prod_{i=1}^{n}f(X_i=x_i;\theta)=\prod_{i=1}^{n}[\alpha(\theta)\cdot H(x_i)]=(\alpha(\theta))^n\cdot\prod_{i=1}^{n}H(x_i),$$ which is indeed of the above form, $g\Bigl(\prod_{i=1}^{n}H(x_i);\theta\Bigr)\cdot h(\underline{x})$, with $g(t;\theta)=(\alpha(\theta))^n \cdot t$ and $h(\underline{x})=1$. (I hope the above is indeed correct.)
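(A side check of my own, not part of the exercise: the normalizing constant $\alpha(\theta)$ is pinned down by the requirement that $f$ integrates to $1$, i.e. $$\alpha(\theta)=\left(\int_0^{\theta}H(x)\,dx\right)^{-1},$$ so for instance the hypothetical choice $H\equiv 1$ gives $\alpha(\theta)=1/\theta$, the Uniform$(0,\theta)$ density, and the joint PDF above becomes $\theta^{-n}$ on the support.)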
Now, where I'm a bit confused is when I move on to the next exercise, where I'm given the PDF $$f(x;\theta_1,\theta_2)= \left\{ \begin{array}{ll} \alpha(\theta_1,\theta_2)\cdot H(x) & \theta_1<x<\theta_2 \\ 0 & x\notin(\theta_1,\theta_2)\\ \end{array} \right. $$ and I am asked to find a sufficient statistic in three cases:
- for the parameter $(\theta_1,\theta_2)$
- for $\theta_1$, when $\theta_2$ is considered known
- for $\theta_2$, when $\theta_1$ is considered known
My question is: how do this new information and the three separate cases change my result above (assuming that it, and the reasoning behind it, were correct)? If the two functions $\alpha$ and $H$ were known explicitly, maybe I could manipulate the result into a slightly "cleaner", more elegant form, for example with $h(\underline{x})$ not simply equal to $1$. But in my case, how is the procedure going to change and give a different result? Or won't it?
Thanks in advance, and sorry for the long question, at least I hope the matter is clear.
While manipulating the form of the joint PDF, I would need the condition $0<x_i<\theta$ for every $i$ for the PDF to be positive, and would eventually get to $$f(\underline{x};\theta)= \left\{ \begin{array}{ll} (\alpha(\theta))^n\cdot\prod_{i=1}^{n}H(x_i) & 0<x_i<\theta \text{ for all } i \\ 0 & \text{otherwise}\\ \end{array} \right.$$ In order to build the domain into the function, I could still use the indicator $I$, not for every $x_i$ separately, but for the minimum and maximum of them: $I_{(0,\infty)}(x_{(1)}) \cdot I_{(-\infty,\theta)}(x_{(n)})$. This product evaluates to zero if the minimum of the sample is at most $0$ or if the maximum is at least $\theta$, and to $1$ if all observations lie in $(0,\theta)$.
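The claim that the two order-statistic indicators reproduce the per-observation constraints can be checked mechanically. A minimal sketch (the function names are my own):

```python
import math

def ind(a, b, x):
    """Indicator of the open interval (a, b)."""
    return 1 if a < x < b else 0

def support_via_order_stats(xs, theta):
    # I_(0,inf)(x_(1)) * I_(-inf,theta)(x_(n)), as in the post.
    return ind(0, math.inf, min(xs)) * ind(-math.inf, theta, max(xs))

def support_per_observation(xs, theta):
    # prod_i I_(0,theta)(x_i): the condition written out observation by observation.
    prod = 1
    for x in xs:
        prod *= ind(0, theta, x)
    return prod

# The two expressions agree on a handful of samples and thetas.
samples = [[0.5, 1.2, 2.9], [0.5, 3.2], [-0.1, 1.0], [1.0], [2.99, 0.01]]
for xs in samples:
    for theta in [0.5, 1.0, 3.0, 4.0]:
        assert support_via_order_stats(xs, theta) == support_per_observation(xs, theta)
```

Both factors equal $1$ exactly when $x_{(1)}>0$ and $x_{(n)}<\theta$, which is equivalent to every $x_i$ lying in $(0,\theta)$.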
So, in total, the joint PDF would come into the form $$f(\underline{x};\theta)= (\alpha(\theta))^n\cdot\prod_{i=1}^{n}H(x_i) \cdot I_{(0,\infty)}(x_{(1)}) \cdot I_{(-\infty,\theta)}(x_{(n)})$$ and my sufficient statistic would be $\underline{T}(\underline{x})=\bigl(\prod_{i=1}^{n}H(x_i),\, x_{(1)},\, x_{(n)} \bigr)$, since the PDF has come into the form $g\bigg(\prod_{i=1}^{n}H(x_i),\, x_{(1)},\, x_{(n)} ; \theta \bigg) \cdot h(\underline{x})$, with $g(t_1, t_2, t_3 ; \theta) = (\alpha(\theta))^n \cdot t_1 \cdot I_{(0,\infty)}(t_2) \cdot I_{(-\infty,\theta)}(t_3)$ and $h(\underline{x})=1$.
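As a numerical sanity check of this factorization (my own illustration, with the hypothetical choice $H\equiv 1$, so $\alpha(\theta)=1/\theta$ and $f$ is the Uniform$(0,\theta)$ density): the joint likelihood then depends on the sample only through its size and the statistic above, so two different samples of the same size sharing the same maximum give identical likelihoods for every $\theta$:

```python
def joint_pdf(xs, theta):
    """(alpha(theta))^n * prod H(x_i) * I_(0,inf)(x_(1)) * I_(-inf,theta)(x_(n)),
    specialized to the illustrative choice H(x) = 1, i.e. alpha(theta) = 1/theta."""
    n = len(xs)
    support = 1 if (min(xs) > 0 and max(xs) < theta) else 0
    return theta ** (-n) * support

# Two different samples with the same size n and the same maximum x_(n):
a = [0.3, 1.7, 2.4]
b = [0.9, 2.2, 2.4]
for theta in [1.0, 2.0, 2.5, 3.0, 10.0]:
    assert joint_pdf(a, theta) == joint_pdf(b, theta)
```

This is only a consistency check for one particular $H$, of course, not a proof of sufficiency.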