I am going through an exercise on sufficient statistics and the factorisation theorem, which states that the statistic $\mathbf{U} = h(\mathbf{Y})$ is a sufficient statistic for the parameter $\theta$ if and only if there exist functions $b$ and $c$ such that:
$$f_{\mathbf{Y}}(\mathbf{y};\theta) = b(h(\mathbf{y}), \theta)\,c(\mathbf{y})$$
In the following example the book gives a factorisation, and I don't see how it fits the form specified above, where one factor is a function of $h(\mathbf{y})$ and $\theta$, and the other is a function of $\mathbf{y}$ alone.
Example given:
A random sample $\mathbf{Y} = (Y_{1},\dots, Y_{n})$, where the $Y_{i}$ are i.i.d. $N(0,\sigma^{2})$.
The joint density function of the sample can be factorised as follows:
$$f_{\mathbf{Y}}(\mathbf{y} ; \sigma^{2}) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi \sigma^{2}}} \exp \left[ -\frac{y_{i}^{2}}{2\sigma^{2}}\right] = \left(\frac{1}{\sqrt{2\pi\sigma^{2}}}\right)^{n}\exp\left[ -\frac{\sum_{i=1}^{n}y_{i}^{2}}{2\sigma^{2}}\right]$$
This apparently tells us that $\sum_{i=1}^{n}Y_{i}^{2}$ is sufficient for $\sigma^{2}$. But does this form really fit what is specified by the theorem above? What am I missing?
In this case the factorisation is degenerate: take $c(\mathbf{y}) = 1$ and let $b$ absorb the entire density,
$$b(u, \sigma^{2}) = \left(2\pi\sigma^{2}\right)^{-n/2}\exp\left[-\frac{u}{2\sigma^{2}}\right],$$
evaluated at $u = h(\mathbf{y}) = \sum_{i=1}^{n} y_{i}^{2}$. The density depends on $\mathbf{y}$ only through that sum of squares, so the factorisation holds and $h(\mathbf{Y}) = \sum_{i=1}^n Y_i^2$ is sufficient for $\sigma^2$.
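To see that the degenerate factorisation really does hold, here is a minimal numeric sketch (the sample values and $\sigma^2$ below are arbitrary, chosen for illustration): it computes the joint density as a product of $N(0,\sigma^2)$ densities, then recomputes it as $b(h(\mathbf{y}),\sigma^2)\cdot c(\mathbf{y})$ with $c(\mathbf{y})=1$, and confirms the two agree.

```python
import math

def joint_density(y, sigma2):
    """Product of the individual N(0, sigma^2) densities."""
    return math.prod(
        math.exp(-yi**2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
        for yi in y
    )

def b(u, sigma2, n):
    """b(h(y), sigma^2): depends on y only through u = sum of squares."""
    return (2 * math.pi * sigma2) ** (-n / 2) * math.exp(-u / (2 * sigma2))

# Arbitrary illustrative sample and variance (assumptions, not from the book).
y = [0.3, -1.2, 0.7, 2.1]
sigma2 = 1.5

u = sum(yi**2 for yi in y)          # h(y) = sum of squares
lhs = joint_density(y, sigma2)      # density computed directly
rhs = b(u, sigma2, len(y)) * 1      # b(h(y), sigma^2) * c(y), with c(y) = 1
assert math.isclose(lhs, rhs)
```

The assertion passing for any choice of `y` and `sigma2` reflects the point of the answer: nothing about $\mathbf{y}$ beyond $\sum y_i^2$ is needed to evaluate the density.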