It is common to distinguish between descriptive and inferential statistics.
However, it seems to me that any descriptive statistic can be immediately used to generate realizations of the data. In the simplest (often uncomputable) case this can be done through uniform sampling of all possible data samples which exactly match the value of the statistic. Such a simple approach, while reminiscent of Monte Carlo simulations or maximum-entropy models, actually has no parameters or mechanistic specifications (at least not in an explicitly defined sense).
But the approach above still seems to be a generative model of the data -- albeit one whose likelihood is hard, or perhaps impossible, to compute. Is this correct? And if so, does this imply that there is no fundamental distinction between descriptive and inferential statistics?
Here is a simple example. Suppose I have a data sample of $m$ real numbers, each in the range $[0, b]$, where $m$ and $b$ represent known constraints of the system in question.
I also know that the mean of my data sample is $\bar{x}$, but I don't know anything else about the distribution of these numbers. In other words, I can make no assumptions that the numbers are distributed uniformly or in any other way.
I can construct a generative model which samples uniformly all sets $S=\{a_1, a_2, ..., a_m\}$, such that all $a_i \in [0, b]$ and $\frac{1}{m}\sum_{a_i \in S} a_i = \bar{x}$. This model will, by definition, match the mean of my data but may otherwise be a poor representation of its distribution.
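Such a sampler can actually be implemented. A minimal sketch (in Python; the Dirichlet-plus-rejection construction and all names are my own choices, not from the question): a flat Dirichlet scaled to the target sum is uniform on the simplex $\{a_i \ge 0,\ \sum a_i = m\bar{x}\}$, and rejecting draws that violate $a_i \le b$ leaves a uniform sample on the intersection with the box $[0, b]^m$.

```python
import numpy as np

def sample_fixed_mean(m, b, xbar, rng):
    """Draw one set a_1..a_m uniformly from {a_i in [0, b], mean(a) = xbar}.

    A flat Dirichlet scaled by the target sum m*xbar is uniform on the
    simplex {a_i >= 0, sum(a) = m*xbar}; rejecting draws with any a_i > b
    leaves a uniform sample on the intersection with [0, b]^m.
    Assumes 0 <= xbar <= b (otherwise the constraint set is empty); the
    rejection step can be slow when xbar is close to b and m is large.
    """
    s = m * xbar
    while True:
        a = rng.dirichlet(np.ones(m)) * s
        if np.all(a <= b):
            return a

rng = np.random.default_rng(42)
a = sample_fixed_mean(15, 10.0, 3.0, rng)
# a.mean() equals xbar up to floating-point error, and all a_i lie in [0, b]
```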
Now suppose I gradually acquire additional arbitrary descriptive statistics about my data (e.g. the first quartile, the kurtosis, or any other statistic). I can incorporate these statistics into my generative model in exactly the same way as above (i.e. through uniform sampling of all sets that match these statistics). With more descriptive statistics, my model becomes a more accurate representation of the data.
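One practical caveat: matching several continuous statistics *exactly* is an event of probability zero under naive sampling, so in practice one matches them within a tolerance, as in approximate Bayesian computation. A hedged sketch of that relaxation (the statistics chosen, the tolerance `eps`, and all names are illustrative assumptions):

```python
import numpy as np

def sample_matching_stats(m, b, targets, eps, rng, max_tries=100_000):
    """Rejection sampling: draw candidate sets uniformly from [0, b]^m and
    keep the first whose mean and first quartile both lie within eps of the
    observed values. Exact matching has probability zero for continuous
    statistics, so a tolerance is required (as in approximate Bayesian
    computation); tighter eps or more statistics means more rejections.
    """
    t_mean, t_q1 = targets
    for _ in range(max_tries):
        a = rng.uniform(0.0, b, size=m)
        if (abs(a.mean() - t_mean) < eps
                and abs(np.quantile(a, 0.25) - t_q1) < eps):
            return a
    raise RuntimeError("no match within tolerance; increase eps or max_tries")

rng = np.random.default_rng(0)
a = sample_matching_stats(15, 10.0, targets=(5.0, 2.5), eps=0.5, rng=rng)
```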
Here, again, is my question. It seems that the descriptive statistics (together with the initial constraints of the system) allow me to construct an arbitrarily precise generative model of my data without performing any inference. Is this correct? And if so, does this imply that there is no fundamental distinction between descriptive and inferential statistics?
Any pointers to the relevant literature would also be greatly appreciated.
You are hinting at some interesting ideas related to 're-sampling'. But I will need to make some minor changes in your example in order to explore them.
(1) Suppose you have $n$ observations $X_1, \dots, X_n$ sampled at random from $\mathsf{Unif}(0,b),$ with sample mean $\bar X.$ If you generate $n$ new random variables $Y_i$ from $\mathsf{Unif}(0, 2\bar X)$ you will get roughly what you want. But $2\bar X$ might turn out to be somewhat below $b$ or somewhat above $b$, so you could re-scale them as $Z_i = (\frac{b}{2\bar X})Y_i \sim \mathsf{Unif}(0,b).$
A brief demonstration in R statistical software illustrates this for $n = 15$ and $b = 10.$
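An analogous sketch (in Python rather than R), using the same $n = 15$ and $b = 10$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, b = 15, 10.0                           # as in the demonstration above

x = rng.uniform(0.0, b, size=n)           # original sample from Unif(0, b)
xbar = x.mean()

y = rng.uniform(0.0, 2 * xbar, size=n)    # new draws from Unif(0, 2*xbar)
z = (b / (2 * xbar)) * y                  # rescaled: z_i ~ Unif(0, b)
```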
(2) A related idea is the nonparametric bootstrap. Suppose you don't know the population distribution (except that you know it has a mean $\mu$), but you have $n$ observations and their sample mean. Then you can bootstrap your sample (by re-sampling with replacement) to get a confidence interval for the population mean $\mu$.
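A minimal percentile-bootstrap sketch (in Python; the sample, seed, and number of resamples $B$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 10.0, size=15)       # observed sample (illustrative)

B = 10_000
boot_means = np.array([
    rng.choice(x, size=x.size, replace=True).mean()   # resample with replacement
    for _ in range(B)
])

# 95% percentile interval for the population mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```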
You can read about 'nonparametric bootstrap' on Wikipedia or other Internet sites. Also, you can take a look at my related Answer to another Question.