According to my understanding of the Universal Approximation Theorem (one of the mathematical foundations of neural networks), when applied to functions $f:\mathbb{R} \rightarrow \mathbb{C}$, it says that any function $f(t)$ can be approximated as:
$$f(t) = \sum_j c_j\phi(a_jt+b_j) $$
where $\phi$ can be any non-polynomial function and $(a_j)$, $(b_j)$, $(c_j)$ are finite sequences of real numbers.
If we choose $\phi(t)=e^{it}$, we obtain:
$$f(t) = \sum_j c_je^{i\left(a_jt+b_j\right)} $$
Comparing with the inverse Fourier transform:
$$f(t) = \int F(w)e^{iwt}dw $$
Both expressions agree if:
$$ F(w) = \sum_j c_j e^{ib_j}\delta(a_j-w) $$
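(As a quick consistency check, which I add here using the sifting property of the Dirac delta, substituting this $F(w)$ back into the inverse transform does recover the sum:)
$$\int \sum_j c_j e^{ib_j}\,\delta(a_j-w)\,e^{iwt}\,dw = \sum_j c_j e^{ib_j} e^{ia_jt} = \sum_j c_j e^{i\left(a_jt+b_j\right)}$$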
This would mean that every function has a discrete spectrum, which is not true.
I do not know whether my interpretation of the Universal Approximation Theorem is incorrect and/or this calculation contains an error.
I think I found the gap that makes your argument invalid.
According to Wikipedia, the universal approximation theorem says that $f$ can be approximated (arbitrarily well) on every compact subset of $\Bbb R$.
This is not the same as $f(t) = \sum_j c_j\phi(a_jt+b_j) $ for all $t\in\Bbb R$!
In the case of $\phi(t)=e^{it}$, one can show that such an approximation is possible on each closed interval (using theory for Fourier Series), but in general, one cannot make such a claim for all $t\in\Bbb R$ at once.
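To make this concrete, here is a small numerical sketch (the choice $f(t)=t$, the truncation $N=50$, and all names are my own, not from the theorem): a truncated Fourier series is exactly a finite sum $\sum_j c_j e^{i(a_jt+b_j)}$, and it approximates $f(t)=t$ well on the compact interval $[-\pi,\pi]$ but fails badly outside, because the finite sum is $2\pi$-periodic while $f$ is not.

```python
import numpy as np

# Approximate f(t) = t by a finite sum of the form
# sum_j c_j * exp(i*(a_j*t + b_j)), realized here as the truncated
# Fourier series of t on the compact interval [-pi, pi].
N = 50  # number of terms (arbitrary; any moderate N shows the effect)

def approx(t):
    # Fourier series of f(t) = t on [-pi, pi]:
    #   t = sum_{n>=1} 2*(-1)^(n+1)/n * sin(n*t)
    # Each sin(n*t) = (e^{int} - e^{-int}) / (2i), so this is indeed a
    # finite sum of terms c_j * e^{i(a_j t + b_j)} with a_j = +-n.
    n = np.arange(1, N + 1)
    return np.sum(2 * (-1) ** (n + 1) / n * np.sin(n * t))

inside = 1.0                # a point inside [-pi, pi]
outside = 2 * np.pi + 1.0   # a point outside the interval

print(abs(approx(inside) - inside))    # small: good approximation inside
print(abs(approx(outside) - outside))  # large: the sum is 2*pi-periodic,
                                       # so approx(outside) == approx(1.0)
```

The failure outside $[-\pi,\pi]$ is not an artifact of truncation: every finite (or even infinite) sum of these exponentials with $a_j\in\mathbb{Z}$ is $2\pi$-periodic, so it cannot track the non-periodic $f(t)=t$ on all of $\mathbb{R}$ at once, in line with the "on every compact subset" restriction above.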