Let's say that we have a sample $X = (X_1,\dots,X_n)$ from the distribution defined by: (or rather "drawn from" or maybe "given by"? How should I phrase it correctly in English? I would be grateful for advice in the comments :-))
$$f(x) = \frac{1}{\sigma}\exp\left\{-\frac{x-m}{\sigma} \right \}\mathbf{1}_{(m,\infty)}(x)$$
From this, we can write the density of the whole sample:
$$f(X) = \frac{1}{\sigma^n}\exp \left \{{\frac{mn}{\sigma}}\right\}\exp\left\{-\frac{n \overline{X}}{\sigma} \right \}\mathbf{1}_{(m,\infty)}(X_{1:n}),$$
where $X_{1:n} = X_{(1)} = \min\{X_1,\dots,X_n\}$ (not sure how it's usually denoted in English literature).
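To spell out the step, the joint density is just the product of the individual densities:
$$\prod_{i=1}^n f(X_i) = \frac{1}{\sigma^n}\exp\left\{-\frac{1}{\sigma}\sum_{i=1}^n (X_i-m)\right\}\prod_{i=1}^n \mathbf{1}_{(m,\infty)}(X_i),$$
where $\sum_{i=1}^n (X_i-m) = n\overline{X}-nm$ gives the two exponential factors, and the product of indicators collapses to $\mathbf{1}_{(m,\infty)}(X_{1:n})$, since all $X_i > m$ if and only if $\min_i X_i > m$.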
From this (by the Factorization Theorem) we can see that a sufficient statistic for the parameter $\theta=(m,\sigma)$ needs both $X_{1:n}$ and, for example, $\overline{X}$. However, how do we write it?
Is it $T(X) = (X_{1:n},\overline{X})$, with $X_{1:n}$ first because it would be a sufficient statistic for $m$ if $\sigma$ were known, matching the order of the components of $\theta=(m,\sigma)$? Or does the order not matter, so that we are free to write $T(X) = (\overline{X},X_{1:n})$? Or is there perhaps a reason to prefer the second form?
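As a side note, the factorization itself is easy to sanity-check numerically. The sketch below (assuming NumPy; the names `direct` and `factorized` are mine) compares the joint density computed as a product of marginals with the factorized form from above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, n = 2.0, 1.5, 10
x = m + rng.exponential(sigma, size=n)  # sample from the shifted exponential

# Joint density computed directly as a product of the marginal densities
direct = np.prod(np.where(x > m, np.exp(-(x - m) / sigma) / sigma, 0.0))

# Joint density via the factorized form, depending on the data only
# through T(x) = (min(x), mean(x))
x_min, x_bar = x.min(), x.mean()
factorized = (sigma ** -n) * np.exp(m * n / sigma) * np.exp(-n * x_bar / sigma) * (x_min > m)

print(np.isclose(direct, factorized))  # the two expressions agree
```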
Extra question:
If we consider the ratio $\frac{f(X)}{f(Y)}$ for two samples $X$ and $Y$, we quickly find that this ratio is constant in $\theta$ if and only if $T(X)=T(Y)$, so $T$ is a minimal sufficient statistic. Can we somehow "quickly" tell whether it is a complete statistic, as we can with certain types of exponential families (where we can apply the Lehmann–Scheffé theorem)?
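The ratio argument can also be illustrated numerically. In this sketch (assuming NumPy; `joint_density` is a hypothetical helper of mine), a permuted sample leaves $T$ unchanged and the ratio equals $1$ for every $\theta$, while a genuinely different sample makes the ratio vary with $\theta$:

```python
import numpy as np

def joint_density(x, m, sigma):
    # Joint density of an iid shifted-exponential sample
    if x.min() <= m:
        return 0.0
    return np.prod(np.exp(-(x - m) / sigma) / sigma)

rng = np.random.default_rng(1)
x = 2.0 + rng.exponential(1.5, size=5)
y_perm = rng.permutation(x)   # same T: same minimum, same mean
y_other = x + 0.1             # different T

thetas = [(0.5, 1.0), (1.0, 2.0), (1.5, 0.7)]  # all with m < min(x)
ratios_perm = [joint_density(x, m, s) / joint_density(y_perm, m, s) for m, s in thetas]
ratios_other = [joint_density(x, m, s) / joint_density(y_other, m, s) for m, s in thetas]

print(ratios_perm)   # all equal to 1: the ratio is free of theta
print(ratios_other)  # varies with theta
```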
Are there some nice "tricks" when it comes to showing that there is no completeness for distributions like this one, i.e. when the support of the density depends on the parameter?
Sufficient statistics are not unique. For a given parametric distribution, there are infinitely many different sufficient statistics.
For example, the iid sample $\boldsymbol x = (x_1, \ldots, x_n)$ itself is always trivially sufficient, although it achieves no data reduction. Any permutation of the observations in the sample is also sufficient. Other sufficient statistics achieve some data reduction, but not the maximum possible.
Minimal sufficient statistics, i.e. those achieving the maximum possible data reduction, are again not unique: any bijective (measurable) function of a minimal sufficient statistic is also minimal sufficient, and this includes vector-valued statistics. In particular, a permutation of the components is realized by multiplication by an (invertible) permutation matrix, so the order of the components does not matter.
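Concretely, swapping the two components of $T(X)$ in your example corresponds to multiplying by a $2\times 2$ permutation matrix:
$$\begin{pmatrix}\overline{X}\\ X_{1:n}\end{pmatrix} = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}X_{1:n}\\ \overline{X}\end{pmatrix},$$
which is invertible, so $(\overline{X}, X_{1:n})$ carries exactly the same information as $(X_{1:n}, \overline{X})$ and is equally minimal sufficient.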