Say $\{X_n\}$ is a sequence of normal random variables with means $0$ and variances $\sigma_n^2$. Also suppose that $X_n\to X$ (everywhere) and $\sigma_n^2\to \sigma^2.$ Then, using characteristic functions and the uniqueness theorem, it follows that $X$ is normal with mean $0$ and variance $\sigma^2$.
My question is simply this: is there another, more elementary way of reaching the same conclusion, without using the uniqueness theorem for characteristic functions?
(Of course, I wouldn't consider the highfalutin notions of weak convergence, or convergence in distribution, to be "more elementary.")
An elementary proof need not be easier; it just requires fewer theorems and techniques. Here is one:
$$X_n \sim N(0, \sigma_n^2) \Rightarrow P(X_n \in A) = \int_A \frac{1}{(2 \pi \sigma_n^2)^{1/2}} \exp\bigg\{-\frac{x^2}{2 \sigma_n^2}\bigg\}\, dx$$
therefore, since $X_n \to X$ everywhere, you have that $P_n \Rightarrow \bar{P}$, where $P_n(A) := P(X_n \in A)$, $\bar{P}(A) := P(X \in A)$, and $\Rightarrow$ denotes convergence in distribution. Now, for $0 < a < b$, you can use the bounded convergence theorem on $A = (a, b)$ and $-A = (-b, -a)$, since on sets bounded away from $0$ the densities are uniformly bounded in $n$. You can't use the bounded convergence theorem on a set containing $0$, because if $\sigma_n \to 0$ then $\frac{1}{(2 \pi \sigma_n^2)^{1/2}} \exp\bigg\{-\frac{0^2}{2 \sigma_n^2}\bigg\} \to \infty$.
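To see the boundedness claim concretely, here is a quick numerical check (just an illustration, not part of the proof; the grid of $\sigma$ values and the point $a = 0.5$ are my own choices):

```python
import math

def normal_density(x, sigma):
    """Density of N(0, sigma^2) at x."""
    return math.exp(-x**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

sigmas = [10.0 ** (-k) for k in range(1, 8)]  # sigma_n -> 0

# At a fixed point a > 0 the densities stay bounded (in fact they tend
# to 0), which is why bounded convergence works on (a, b) with a > 0 ...
at_a = [normal_density(0.5, s) for s in sigmas]
print(max(at_a))

# ... but at x = 0 the densities blow up as sigma -> 0.
at_zero = [normal_density(0.0, s) for s in sigmas]
print(at_zero[-1])
```

(For fixed $a > 0$, the density at $a$, viewed as a function of $\sigma$, is maximized at $\sigma = a$, so it is uniformly bounded in $n$ no matter how $\sigma_n$ behaves.)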
Now you conclude that:
$$\bar{P}(\,(a,b)\,) = \lim_n\int_a^b \frac{1}{(2 \pi \sigma_n^2)^{1/2}}\exp\bigg\{-\frac{x^2}{2 \sigma_n^2}\bigg\}\, dx = \int_a^b \frac{1}{(2 \pi \sigma^2)^{1/2}}\exp\bigg\{-\frac{x^2}{2 \sigma^2}\bigg\}\, dx$$
(assuming $\sigma > 0$; if $\sigma = 0$, then $X_n \to 0$ in probability, so $X = 0$ almost surely and the conclusion holds trivially).
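As a numerical sanity check of this limit (again only an illustration; the sequence $\sigma_n = \sigma + 1/n$, the interval $(0.5, 2)$, and the midpoint rule are my own choices):

```python
import math

def normal_density(x, sigma):
    """Density of N(0, sigma^2) at x."""
    return math.exp(-x**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def mass(a, b, sigma, steps=10_000):
    """Midpoint-rule approximation of P(a < Z < b) for Z ~ N(0, sigma^2)."""
    h = (b - a) / steps
    return h * sum(normal_density(a + (i + 0.5) * h, sigma) for i in range(steps))

sigma, a, b = 1.5, 0.5, 2.0
for n in (1, 10, 100, 1000):
    print(n, mass(a, b, sigma + 1.0 / n))  # approaches the limiting value
print("limit:", mass(a, b, sigma))
```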
since $\bar{P}(\,(-\infty,-\epsilon)\,)=\bar{P}(\,(\epsilon, \infty)\,)$ by symmetry, and $\bar{P}(\,(\epsilon, \infty)\,)\xrightarrow[\epsilon \to 0]{}1/2$, we conclude that $$\bar{P}(\{0\}) = 1 - \lim_{\epsilon \to 0} \big(\bar{P}(\,(-\infty,-\epsilon)\,)+\bar{P}(\,(\epsilon, \infty)\,)\big) = 0$$
therefore
$$P(\,X \in (-\infty, x]\,)=\bar{P}(\,(-\infty, x]\,) = \int_{-\infty}^x \frac{1}{(2 \pi \sigma^2)^{1/2}}\exp\bigg\{-\frac{t^2}{2 \sigma^2}\bigg\}\, dt,$$ which allows us to conclude that $X \sim N(0, \sigma^2)$.
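Finally, the whole statement can be sanity-checked by simulation for one concrete realization of the hypotheses, namely $X_n = \sigma_n Z$ with a fixed $Z \sim N(0,1)$, so that $X_n \to \sigma Z$ everywhere (the particular $\sigma$, $\sigma_n$, and sample size below are arbitrary choices of mine):

```python
import math
import random

random.seed(0)

def normal_cdf(x, sigma):
    """CDF of N(0, sigma^2)."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

sigma = 2.0
sigma_n = sigma + 1.0 / 10_000          # a late term of the sequence
# Samples of X_n = sigma_n * Z, i.e. X_n ~ N(0, sigma_n^2)
samples = [sigma_n * random.gauss(0.0, 1.0) for _ in range(200_000)]

# Empirical CDF of X_n vs. the claimed limiting CDF of N(0, sigma^2)
errors = []
for x in (-2.0, 0.0, 1.0, 3.0):
    emp = sum(s <= x for s in samples) / len(samples)
    errors.append(abs(emp - normal_cdf(x, sigma)))
    print(x, round(emp, 3), round(normal_cdf(x, sigma), 3))
```

The empirical CDF of a late $X_n$ matches the $N(0, \sigma^2)$ CDF to within Monte Carlo error, as the argument above predicts.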