Theorem: Let $X$ be a real random variable such that $\phi_X \in L^1$, i.e. $\int_{\mathbb{R}} \vert \phi_X(t) \vert \, dt < \infty$. Then $X$ has a density $f_X \in C_b(\mathbb{R})$ given by $$f_X(x) = \frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt$$
(Where $\phi_X$ denotes the characteristic function of $X$)
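Before the proof, here is a quick numerical sanity check of the inversion formula (not part of the argument). It uses a Laplace random variable, whose characteristic function $\phi_X(t) = \frac{1}{1+t^2}$ is integrable, with known density $f_X(x) = \frac{1}{2}e^{-\vert x \vert}$; the truncation level `T` and the grid size are arbitrary choices:

```python
import numpy as np

# Sanity check of f_X(x) = (1/2pi) * int phi_X(t) e^{-itx} dt
# for the Laplace(0,1) distribution: phi_X(t) = 1/(1+t^2),
# known density f_X(x) = 0.5 * exp(-|x|).

def phi(t):
    return 1.0 / (1.0 + t ** 2)

def f_inverted(x, T=200.0, n=200001):
    # Riemann sum approximating the inversion integral truncated to [-T, T].
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    integrand = phi(t) * np.exp(-1j * t * x)
    # The imaginary part is negligible here since phi is real and even.
    return float((integrand.sum() * dt).real / (2 * np.pi))

for x in [0.0, 0.5, 2.0]:
    print(f"x={x}: inversion ~ {f_inverted(x):.4f}, exact = {0.5 * np.exp(-abs(x)):.4f}")
```

Truncating at $\vert t \vert \leq T$ discards a tail of order $1/T$, which is why the agreement is only to a few decimal places.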
The proof proceeds in two parts: in the first we assume we already know that $X$ has a density $f$ and prove the identity above; in the second we do not know that such an $f$ exists, and we reduce to the first case using the trick of taking $X+\epsilon N$ where $N \sim N(0,1)$.
The sketch of the first part of the proof, which is where my problems are, is the following: we consider $g \geq 0$, $g \in C_{b}(\mathbb{R})$, such that $g = 0$ outside a compact set, i.e. $g \in C_{c}(\mathbb{R})$, and we apply the isometry lemma to $f_X$ and $g$, which leads us (using Fubini-Tonelli and the fact that $\frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt$ is real) to
$$\int g(x)f_X(x)\,dx = \mathbb{E}[g(X)] = \int g(x)\left[\frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt\right]dx$$
This concludes the proof for $g$ as assumed at the beginning. But how does one extend the result to merely continuous and bounded functions?
In the course of this extension the following three observations are made:
1) $\frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt$ is continuous in $x$ (and I was able to prove it as a consequence of the dominated convergence theorem, proving continuity via sequences).
2) $\frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt \geq 0$
3) $\exists \, g_n \uparrow 1$ with $g_n \in C_c(\mathbb{R})$
I was able to prove step 2) in the following way, with a hint which I was unable to prove (so this should be the way to prove it). The proof goes like this: if the function were strictly negative at some point, by continuity (proved in the first observation) there exists an open set $U$ on which it is still strictly negative; I can then find $g \geq 0$, $g \in C_c(\mathbb{R})$, supported in $U$ and strictly positive somewhere on $U$, and conclude by contradiction, since in that case we would have $0 \leq \mathbb{E}[g(X)] = \int g(x)\left[\frac{1}{2\pi}\int_{\mathbb{R}}\phi_X(t)e^{-itx}\,dt\right]dx < 0$.
As far as the third point is concerned, I don't know where to start; I thought it could be useful to use a simplified version of Urysohn's lemma, but unsuccessfully. I was unable to prove the highlighted statements: is there any way to exhibit such functions explicitly? Any direct proof, explicit or not, would be appreciated, as would some references on the use of the Fourier transform in probability, since these seemed like nice tricks to know.
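As for exhibiting the functions in observation 3) explicitly: one standard choice (a sketch of one possibility, not necessarily the hinted one) is the piecewise-linear "tent" family
$$ g_n(x) = \begin{cases} 1, & \vert x \vert \leq n, \\ n+1-\vert x \vert, & n < \vert x \vert \leq n+1, \\ 0, & \vert x \vert > n+1. \end{cases} $$
Each $g_n$ is continuous, takes values in $[0,1]$, and vanishes outside the compact set $[-(n+1),n+1]$, so $g_n \in C_c(\mathbb{R}) \subset C_b(\mathbb{R})$, and $g_n \uparrow 1$ pointwise. Monotone convergence then gives $\int f_X(x)\,dx = \lim_n \int g_n(x) f_X(x)\,dx = \lim_n \mathbb{E}[g_n(X)] = 1$.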
You do not need to assume that $g$ has compact support in the first part, I believe.
You can use a trick very similar to what you propose in the beginning to show that \begin{align} \lim_{n\to\infty}f_{X + \frac{1}{n}N}(t) = \frac{1}{2\pi}\int_{\mathbb{R}}e^{-\text{i}ts}\varphi_{X}(s)\,\text{d}s = f(t) \end{align} where $N\sim N(0,1)$. It follows that $f$ is nonnegative and measurable, and we can therefore consider the measure $\nu$ with density $f$.
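A sketch of why that limit holds, written with the $\frac{1}{2\pi}$ normalization of the theorem statement: $X + \frac{1}{n}N$ always has a density (it is a convolution with a Gaussian), and $\varphi_{X+\frac{1}{n}N}(t) = \varphi_X(t)e^{-t^2/(2n^2)} \in L^1$, so the first part of the proof applies to it and yields
$$ f_{X+\frac{1}{n}N}(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,e^{-t^2/(2n^2)}\,dt \xrightarrow[n \to \infty]{} \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt, $$
the limit being dominated convergence with dominating function $\vert\varphi_X\vert \in L^1$.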
Now, if we can show that \begin{align} \int_{\mathbb{R}}\psi \,\text{d}\nu = \int_{\mathbb{R}}\psi \,\text{d}\mathbb{P}_{X}, \quad \psi \in C_{c}(\mathbb{R}) \end{align} then $\nu = \mathbb{P}_{X}$, which would imply that $f = f_{X}$, as desired. But showing that the integrals are equal is simply a matter of applying dominated convergence to the integrals with respect to $\mathbb{P}_{X+\frac{1}{n}N}$.
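In slightly more detail, both sides are limits of the same sequence: on the one hand
$$ \int_{\mathbb{R}} \psi \,\text{d}\mathbb{P}_{X+\frac{1}{n}N} = \int_{\mathbb{R}} \psi(x)\, f_{X+\frac{1}{n}N}(x)\,\text{d}x \longrightarrow \int_{\mathbb{R}} \psi \,\text{d}\nu, $$
by dominated convergence, since $0 \leq f_{X+\frac{1}{n}N} \leq \frac{1}{2\pi}\Vert\varphi_X\Vert_{L^1}$ uniformly in $n$ and $\psi \in C_c(\mathbb{R})$ is integrable; on the other hand $\int_{\mathbb{R}} \psi \,\text{d}\mathbb{P}_{X+\frac{1}{n}N} = \mathbb{E}[\psi(X+\tfrac{1}{n}N)] \to \mathbb{E}[\psi(X)] = \int_{\mathbb{R}} \psi \,\text{d}\mathbb{P}_{X}$, again by dominated convergence, using that $\psi$ is continuous and bounded.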