I am fairly new to statistics and have managed to completely confuse myself; any input would be appreciated. Suppose we have centered random vectors $X_n$, each with variance-covariance matrix $\Sigma_n$. Does it follow from Cramér-Wold that $$\Sigma_n^{-1/2} X_n \rightarrow \mathcal{N}(0, I) \iff \frac{c^TX_n}{\sqrt{c^T\Sigma_n c}} \rightarrow \mathcal{N}(0, 1) $$ for every non-zero vector $c$, or is there an easy way to show it? Usually I see Cramér-Wold applied with a common limit matrix $\Sigma$; an article I came across, however, does not even mention such a limit, though I suppose it should exist? The idea is that we have a sequence of estimators, each with a variance matrix that depends on $n$ (as it depends on the $n$-th estimator), and we are interested in its behaviour relative to its variance.
Furthermore, apparently the same holds for any linear transformation, i.e., for a matrix $A$, $$ (A\Sigma_n A^T)^{-1/2} A X_n \rightarrow \mathcal{N}(0, I). $$
My thoughts: for any non-zero vector $c$, by the Cramér-Wold theorem, $$\Sigma_n^{-1/2} X_n \rightarrow \mathcal{N}(0, I) \iff \operatorname{Var}(c^T \Sigma_n^{-1/2} X_n)^{-1/2}\, c^T \Sigma_n^{-1/2} X_n \rightarrow \mathcal{N}(0, 1), $$ which did not really help me.
It does not follow from Cramér-Wold; the reverse implication is false in general:
Let $X_1,X_2$ be a pair of uncorrelated standard normal variables which are not jointly normal (see here for an example). Consider the following sequence of bivariate random vectors: $X_n=\begin{bmatrix} X_1 \\ \frac{1}{\sqrt{n}} X_2 \end{bmatrix}$. Then $\Sigma_n=\begin{bmatrix} 1 & 0\\ 0 & \frac{1}{n} \end{bmatrix}$. For any non-zero vector $c=\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$ we have $$\frac{c^TX_n}{\sqrt{c^T\Sigma_n c}} \overset{p}\rightarrow \begin{cases}\text{sgn}(c_1)X_1 &\text{if } c_1\neq 0, \\ \text{sgn}(c_2)X_2 &\text{if } c_1=0, \end{cases}$$ so every standardized linear combination converges to a standard normal variable. However, $\Sigma_n^{-1/2} X_n = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ for each $n$, and this vector is not bivariate normal, so it cannot converge to $\mathcal{N}(0, I)$.
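As a numerical sanity check of this counterexample, here is a short simulation (an illustrative sketch: it uses the classical construction $X_2 = S X_1$, with $S$ an independent random sign, as one concrete example of uncorrelated standard normals that are not jointly normal):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# X1, X2: uncorrelated standard normals that are NOT jointly normal.
# Classical construction: X2 = S * X1 with S an independent random sign.
x1 = rng.standard_normal(N)
s = rng.choice([-1.0, 1.0], size=N)
x2 = s * x1

n = 10_000                        # a large index in the sequence X_n = (X1, X2/sqrt(n))
c = np.array([1.0, 1.0])          # any c with c1 != 0
Sigma_n = np.diag([1.0, 1.0 / n])

# The standardized linear combination is approximately sgn(c1) * X1 ~ N(0, 1):
y = (c[0] * x1 + c[1] * x2 / np.sqrt(n)) / np.sqrt(c @ Sigma_n @ c)
print(round(y.mean(), 3), round(y.var(), 3))   # close to 0 and 1

# But Sigma_n^{-1/2} X_n = (X1, X2) is not bivariate normal:
# (X1 + X2)/sqrt(2) = X1 * (1 + S)/sqrt(2) puts mass 1/2 at exactly 0.
z = (x1 + x2) / np.sqrt(2.0)
print(round(np.mean(z == 0.0), 3))             # close to 0.5
```

The atom of mass $1/2$ at zero shows that $(X_1 + X_2)/\sqrt{2}$ is not normal, so $(X_1, X_2)$ is not jointly normal.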
Added: The implication
$$ \Sigma_n^{-1/2} X_n \rightarrow \mathcal{N}(0, I) \implies \frac{c^TX_n}{\sqrt{c^T\Sigma_n c}} \rightarrow \mathcal{N}(0, 1)$$
for all non-zero $c$ is indeed true. To see this note that we can write
$$ Y_n := \frac{c^TX_n}{\sqrt{c^T\Sigma_n c}} = c_n^T \Sigma_n^{-1/2} X_n, \qquad c_n = \frac{\Sigma_n^{1/2}c}{\sqrt{c^T\Sigma_n c}},$$
with $\|c_n\|=1$ for each $n$.
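This identity and the normalization $\|c_n\| = 1$ are easy to verify numerically (a minimal sketch with arbitrary stand-ins for $\Sigma_n$, $X_n$ and $c$; the symmetric square root is computed via an eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_sqrt(M):
    """Symmetric positive-definite square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

# Arbitrary positive-definite Sigma and arbitrary x, c
# (stand-ins for Sigma_n, X_n, c at one fixed n).
B = rng.standard_normal((3, 3))
Sigma = B @ B.T + np.eye(3)
x = rng.standard_normal(3)
c = rng.standard_normal(3)

root = sym_sqrt(Sigma)
denom = np.sqrt(c @ Sigma @ c)

lhs = (c @ x) / denom                    # c^T X_n / sqrt(c^T Sigma_n c)
c_n = (root @ c) / denom                 # Sigma_n^{1/2} c / sqrt(c^T Sigma_n c)
rhs = c_n @ np.linalg.solve(root, x)     # c_n^T Sigma_n^{-1/2} X_n

print(np.isclose(lhs, rhs), np.isclose(np.linalg.norm(c_n), 1.0))
```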
Now, let $(n')$ be an arbitrary subsequence. Since $\|c_{n'}\|=1$, the sequence is bounded, so we can extract a further subsequence $(n'')$ along which $c_{n''}$ converges to some $b$ with $\|b\|=1$. Writing $Z_n := \Sigma_n^{-1/2} X_n$, we have $Y_{n''} = b^T Z_{n''} + (c_{n''}-b)^T Z_{n''}$: the first term converges in distribution to $b^T Z$ with $Z \sim \mathcal{N}(0, I)$, which is $\mathcal{N}(0, \|b\|^2) = \mathcal{N}(0, 1)$, while the second term converges to $0$ in probability. By Slutsky's theorem, $Y_{n''} \rightarrow \mathcal{N}(0, 1)$.
We have shown that every subsequence of $Y_n$ contains a further subsequence converging in distribution to a standard normal variable. This is sufficient to show that $ Y_{n} \rightarrow \mathcal{N}(0, 1)$ (see here).
The same argument can be used to show the implication
$$\Sigma_n^{-1/2} X_n \rightarrow \mathcal{N}(0, I) \implies (A \Sigma_n A^T)^{-1/2} A X_n \rightarrow \mathcal{N}(0, I)$$
for every matrix $A$ of full row rank (so that $A \Sigma_n A^T$ is invertible).
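As a quick numerical illustration of the matrix version, the following sketch uses an exactly Gaussian $X$ (a stand-in for $X_n$ at one fixed $n$, so the standardized vector is exactly $\mathcal{N}(0, I)$) and an arbitrary $2\times 3$ matrix $A$ for which $A\Sigma A^T$ is invertible; the sample covariance of $(A\Sigma A^T)^{-1/2} A X$ should be close to the identity:

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_inv_sqrt(M):
    """Symmetric inverse square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.T

# Exactly Gaussian X, so the standardized vector is exactly N(0, I).
B = rng.standard_normal((3, 3))
Sigma = B @ B.T + np.eye(3)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=100_000)  # rows are draws

A = rng.standard_normal((2, 3))     # 2x3; A Sigma A^T is invertible a.s.
W = sym_inv_sqrt(A @ Sigma @ A.T)   # (A Sigma A^T)^{-1/2}, symmetric
Y = X @ A.T @ W                     # each row is (A Sigma A^T)^{-1/2} A x

print(np.round(np.cov(Y.T), 2))     # close to the 2x2 identity
```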