Suppose I have an unknown Gaussian distribution $\mathcal{N}(\mu, \sigma)$ and $N$ values generated by this distribution: $X_1$, $X_2$, $\cdots$, $X_N$. We can of course compute the sample mean and variance to infer the "most likely" distribution. However, in principle any other Gaussian distribution could have produced these observations, just with smaller probability.
Considering all possible pairs $(\mu, \sigma)$, can we somehow quantify the relative likelihood of each pair? In other words, can we compute the distribution over all univariate Gaussian distributions given $X_1$, $X_2$, $\cdots$, $X_N$?
I'd be grateful for the answer, or for suggestions as to what area of maths/statistics deals with this question.
The likelihood function gives the probability density, given $(\mu,\sigma)$, of obtaining the sample $X_1,\ldots, X_N$: $$ L(X_1,\ldots,X_N\mid \mu,\sigma) = f_{\mu,\sigma}(X_1)\cdots f_{\mu,\sigma}(X_N) = \dfrac{1}{(2\pi)^{N/2}\sigma^N} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^N (X_i-\mu)^2\right). $$ As I understand it, you want to find which values of $(\mu,\sigma)$ are more likely given the sample $X_1,\ldots, X_N$. This question can only be formulated within the Bayesian framework, and to state it we need to specify a prior distribution on $(\mu,\sigma)$.
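As a quick numerical illustration of the likelihood (a sketch using NumPy, with an arbitrary synthetic sample; the function name and sample parameters are my own choices), the log-likelihood is larger near the parameters that actually generated the data than far from them:

```python
import numpy as np

def log_likelihood(x, mu, sigma):
    """Log of the Gaussian likelihood L(x | mu, sigma) for a sample x."""
    n = len(x)
    return (-n * np.log(sigma)
            - n / 2 * np.log(2 * np.pi)
            - np.sum((x - mu) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=100)  # sample from N(mu=5, sigma=2)

# The true parameters score higher than a distant candidate pair:
print(log_likelihood(x, 5.0, 2.0) > log_likelihood(x, 0.0, 1.0))
```

Working in log space avoids the numerical underflow you would get from multiplying $N$ small densities directly.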
If $\mu,\sigma$ are random variables with a prior distribution having pdf $p_{\mu,\sigma}(x,s)$, then the posterior pdf of $(\mu,\sigma)$ given the sample $X_1,\ldots,X_N$ is
$$ p_{\mu,\sigma\mid X_1,\ldots,X_N}(x,s \mid x_1,\ldots, x_N) \propto {L(x_1,\ldots,x_N\mid x, s)\cdot p_{\mu,\sigma}(x,s)} $$ See also https://en.wikipedia.org/wiki/Normal_distribution#Bayesian_analysis_of_the_normal_distribution
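To make the posterior concrete, here is a sketch that evaluates it numerically on a grid of $(\mu,\sigma)$ pairs. I assume a flat prior (so the posterior is proportional to the likelihood) and grid ranges I picked to cover the sample; these are illustrative choices, not part of the general method:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, size=50)  # synthetic sample from N(mu=5, sigma=2)

# Grid of candidate (mu, sigma) pairs; ranges chosen by hand to cover the sample
mus = np.linspace(x.mean() - 3, x.mean() + 3, 200)
sigmas = np.linspace(0.5, 5.0, 200)
M, S = np.meshgrid(mus, sigmas)

# Log-likelihood at every grid point (the (2*pi)^{-N/2} constant cancels on
# normalization). With a flat prior, posterior ∝ likelihood.
n = len(x)
loglik = -n * np.log(S) - ((x[:, None, None] - M) ** 2).sum(axis=0) / (2 * S ** 2)

# Subtract the max before exponentiating for numerical stability,
# then normalize to a discrete posterior over the grid.
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior mean of mu lands near the sample mean, as expected
mu_post_mean = (M * post).sum()
print(abs(mu_post_mean - x.mean()) < 0.1)
```

The same grid lets you read off credible regions or marginal posteriors for $\mu$ and $\sigma$ by summing `post` over the other axis. For a closed-form posterior, the conjugate normal-inverse-gamma prior discussed in the linked Wikipedia section avoids the grid entirely.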