Given the uniform distribution on $(0, \theta)$, we know the complete sufficient statistic is $X_{(n)}$. We can show it is complete directly from the definition of completeness.
Now consider the uniform distribution on $(\theta, 2\theta)$. The sufficient statistic is $(X_{(1)}, X_{(n)})$; it is not complete, however. The uniform $(\theta, 2\theta)$ is a scale family, whose standard pdf $f(z)$ is that of a uniform $(1,2)$. So if $z_1,\dots,z_n$ is a random sample from a uniform $(1,2)$, then $x_1=\theta z_1,\dots, x_n = \theta z_n$ is a random sample from a uniform $(\theta, 2\theta)$. Consequently the distribution of the ratio $X_{(1)}/X_{(n)}$ does not depend on $\theta$, which produces a non-trivial function of $(X_{(1)}, X_{(n)})$ with expectation $0$ for every $\theta$. The statistic is thus not complete.
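A quick Monte Carlo sketch (assuming NumPy; the sample size $n=5$, number of replications, and seed are arbitrary choices) illustrates that the distribution of $X_{(1)}/X_{(n)}$ under $U(\theta, 2\theta)$ is the same for very different values of $\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_samples(theta, n=5, reps=200_000):
    # Draw `reps` independent samples of size n from U(theta, 2*theta)
    # and return the ratio X_(1)/X_(n) for each sample.
    x = rng.uniform(theta, 2 * theta, size=(reps, n))
    return x.min(axis=1) / x.max(axis=1)

r_a = ratio_samples(1.0)
r_b = ratio_samples(7.0)

# The scale theta cancels in the ratio, so the empirical means
# (and indeed the whole empirical distributions) agree closely.
print(r_a.mean(), r_b.mean())
```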
My question: why can we not say the uniform distribution on $(0, \theta)$ is a scale family with standard distribution uniform $(0,1)$?
You can say $U(0,\theta)$ is a scale family of distributions with scale parameter $\theta$, and say that the standard version of this is $U(0,1)$ when $\theta=1$. This does not affect the obvious completeness of $X_{(n)}$ as a one-dimensional statistic, since there is no non-trivial function of it, not depending on $\theta$, which has expectation $0$ for all $\theta$.
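For reference, the standard completeness argument the question alludes to can be sketched as follows. The density of $X_{(n)}$ is $f(x) = n x^{n-1}/\theta^n$ on $(0,\theta)$, so if $E_\theta\, g(X_{(n)}) = 0$ for all $\theta > 0$, then

$$\int_0^\theta g(x)\, n x^{n-1}\, dx = 0 \quad \text{for all } \theta > 0.$$

Differentiating with respect to $\theta$ gives $g(\theta)\, n\theta^{n-1} = 0$, hence $g = 0$ almost everywhere, which is exactly completeness.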
The point about $U(\theta,2\theta)$ being a scale family is that it implies that the statistic $(X_{(1)}, X_{(n)})$ will not be a complete statistic: for example, the distribution of $\frac{X_{(1)}}{X_{(n)}}$ does not depend on $\theta$ since the scale cancels, so subtracting a suitable constant (depending on $n$) and taking the expectation gives $0$ for every $\theta$.
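To make that concrete, one can estimate the constant $c_n = E\!\left[Z_{(1)}/Z_{(n)}\right]$ once under the standard $U(1,2)$ member of the family, and then check that $E_\theta\!\left[X_{(1)}/X_{(n)} - c_n\right] \approx 0$ whatever $\theta$ is (a Monte Carlo sketch assuming NumPy; the sample size, replication count, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_ratio(theta, n=5, reps=200_000):
    # Monte Carlo estimate of E[X_(1)/X_(n)] under U(theta, 2*theta).
    x = rng.uniform(theta, 2 * theta, size=(reps, n))
    return (x.min(axis=1) / x.max(axis=1)).mean()

# Estimate c_n from the standard member U(1, 2) of the scale family.
c_n = mean_ratio(1.0)

# g(X_(1), X_(n)) = X_(1)/X_(n) - c_n is a non-zero function, yet its
# expectation is (approximately) 0 for every theta: incompleteness.
for theta in (0.5, 3.0, 10.0):
    print(theta, mean_ratio(theta) - c_n)
```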
Going back to $U(0,\theta)$, the same argument shows that $(X_{(1)}, X_{(n)})$ is not a complete statistic, but then nobody ever thought it might be.
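Indeed, for $U(0,\theta)$ the same ratio works and the constant even has a closed form: conditionally on $X_{(n)}$, the remaining $n-1$ observations are i.i.d. $U(0, X_{(n)})$, so $X_{(1)}/X_{(n)}$ is distributed as the minimum of $n-1$ i.i.d. $U(0,1)$ variables and $E\!\left[X_{(1)}/X_{(n)}\right] = 1/n$ for every $\theta$. A quick check (assuming NumPy; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

n, reps = 5, 200_000
for theta in (1.0, 4.0):
    x = rng.uniform(0, theta, size=(reps, n))
    ratio = x.min(axis=1) / x.max(axis=1)
    # E[X_(1)/X_(n)] should be close to 1/n = 0.2 regardless of theta,
    # so X_(1)/X_(n) - 1/n is a non-zero unbiased estimator of zero.
    print(theta, ratio.mean())
```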