Suppose I have $X_i \sim \operatorname{Unif}(a,b)$. I have that the joint distribution is given by $$\frac{1}{\left(b-a\right)^n}\prod_{i=1}^n I(x_i \in (a,b)) = \frac{1}{\left(b-a\right)^n}I(\min(x_i) \in (a,b))I(\max(x_i)\in (a,b)).$$
Now, my question is: why does this satisfy the factorization theorem? Don't $I(\min(x_i) \in (a,b))$ and $I(\max(x_i)\in (a,b))$ still depend on $a$ and $b$? And if they don't, then doesn't $\prod_{i=1}^n I(x_i \in (a,b))$ also not depend on $a$ or $b$, so that we could factor the original joint density as required without any sufficient statistic?
I think I am misunderstanding something about sufficiency here.
I think you may be confused about the factorization theorem: if you can factor the joint density as
$$f(x_1,\ldots,x_n ; \theta) = \phi(T;\theta)\cdot h(x_1,\ldots,x_n)$$
then $T$ is sufficient for $\theta$. The idea is that you can factor the density into two pieces:

- one that depends on the data only through the statistic $T$ (and may also depend on the parameter);
- one that depends only on the data and not on the parameter.
The factor $\phi(T;\theta)$ is *allowed* to depend on $\theta$; what matters is that it depends on the data only through $T$. For your example, $h = 1$, which does not involve $\theta = (a,b)$, and $\phi$ depends on the data only through $\max\{x_i\}$ and $\min\{x_i\}$. So yes, the indicators depend on $a$ and $b$, but that is exactly what the $\phi$ factor is permitted to do, and it shows $T = (\min\{x_i\}, \max\{x_i\})$ is sufficient.
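A quick numerical way to see this: two samples that share the same minimum and maximum have identical likelihoods for every choice of $(a,b)$, since the density depends on the data only through those two values. Below is a minimal sketch (the helper name `uniform_loglik` is my own, not from the question):

```python
import numpy as np

def uniform_loglik(x, a, b):
    # Log joint density of an i.i.d. Unif(a, b) sample:
    # -n*log(b - a) if all observations lie in (a, b), else -inf.
    # Note it uses x only through min(x) and max(x).
    x = np.asarray(x, dtype=float)
    if a < x.min() and x.max() < b:
        return -len(x) * np.log(b - a)
    return -np.inf

# Two different samples with the same min (0.2) and max (0.9)
x1 = [0.2, 0.5, 0.9]
x2 = [0.2, 0.7, 0.9]

# The likelihoods agree for every (a, b), including ones that
# exclude the sample (where both are -inf).
for a, b in [(0.0, 1.0), (0.1, 1.5), (-1.0, 2.0), (0.3, 1.0)]:
    assert uniform_loglik(x1, a, b) == uniform_loglik(x2, a, b)
```

This is precisely what sufficiency of $(\min\{x_i\}, \max\{x_i\})$ means: once you know those two numbers, the interior data points carry no further information about $(a,b)$.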