Let $X_1, \ldots, X_n$ be independent and identically distributed random variables with mean $\mu \in \mathbb R$ and variance $\sigma^2 > 0$. Define the sample mean by $\overline{X}_n := \frac{1}{n}\sum_{j = 1}^n X_j$ and the empirical variance by $S_n^2 := \frac{1}{n-1}\sum_{j = 1}^n (X_j - \overline{X}_n)^2$. It is well known that if $X_1, \ldots, X_n$ follow a normal distribution, then $\frac{n-1}{\sigma^2}S_n^2 \sim \chi^2_{n-1}$.
Let $\mu_0 \in \mathbb R$ be given. We assume that $\mu$ and $\sigma^2$ are unknown. Now we want to test the hypothesis $H_0 : \mu = \mu_0$ against the alternative $H_1 : \mu \ne \mu_0$. If the distribution of the $X_i$'s were normal, one would simply use the $t$-test statistic $T_n = \sqrt{n}\frac{\overline{X}_n - \mu_0}{S_n}$, which is known to follow a $t_{n-1}$-distribution under $H_0$. On Wikipedia it is written that in my case, where the distribution of the $X_i$'s is unknown, one can use the fact that $T_n$ is approximately $t_{n-1}$-distributed because of the central limit theorem. How does this work in detail? I know that $\sqrt{n}\frac{\overline{X}_n - \mu}{\sigma} \xrightarrow{d} N(0,1)$, but what happens with the denominator of the $t$-statistic?
Formally, write
$$T_n = \sqrt{n}\,\frac{\overline{X}_n - \mu_0}{S_n} = \sqrt{n}\,\frac{\overline{X}_n - \mu_0}{\sigma}\cdot\frac{\sigma}{S_n}.$$
Under $H_0$, the first factor converges in distribution to $N(0,1)$ by the usual central limit theorem. For the denominator: the law of large numbers gives $S_n^2 \xrightarrow{p} \sigma^2$, hence $\sigma/S_n \xrightarrow{p} 1$ by the continuous mapping theorem, and Slutsky's theorem then yields $T_n \xrightarrow{d} N(0,1)$. At the same time, the $t_{n-1}$ distribution itself converges to $N(0,1)$ as $n \to \infty$: a $t_{n-1}$ variable is $Z/\sqrt{\chi^2_{n-1}/(n-1)}$ with $Z \sim N(0,1)$, and the factor under the root tends to $1$ in probability. So in terms of the approximation in the limit, if you have no exact distribution available, for large $n$ it does not matter whether you take $N(0,1)$ quantiles or $t_{n-1}$ quantiles.
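You can see this numerically with a small Monte Carlo sketch (my own illustration, not from any particular reference): simulate non-normal data under $H_0$, compute $T_n$ many times, and check that the rejection rate of a two-sided level-$5\%$ test is close to nominal whether you use $t_{n-1}$ or $N(0,1)$ critical values. Here the data are uniform on $[0,2]$, so the true mean is $\mu_0 = 1$; the sample size $n = 100$ and repetition count are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 100, 20000
mu0 = 1.0  # true mean of Uniform(0, 2), so H0 holds

# Draw non-normal samples and compute the t-statistic for each replication.
x = rng.uniform(0.0, 2.0, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)          # S_n with the 1/(n-1) normalization
t = np.sqrt(n) * (xbar - mu0) / s

# Two-sided 5% critical values from t_{n-1} and from N(0,1).
q_t = stats.t.ppf(0.975, df=n - 1)
q_z = stats.norm.ppf(0.975)

rate_t = np.mean(np.abs(t) > q_t)  # empirical rejection rate, t quantile
rate_z = np.mean(np.abs(t) > q_z)  # empirical rejection rate, normal quantile
print(rate_t, rate_z)
```

Both rates should come out near $0.05$, and the difference between them is small because $q_t \approx 1.984$ and $q_z \approx 1.960$ are already close at $n = 100$.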