I have some doubts about the proof of this theorem. A proof of it can essentially be found in the book "Real Analysis: Modern Techniques and Their Applications" by Folland, but I also followed "Linear Functional Analysis" by J. Cerda. I changed one point of the proof, following a lemma from another book. To present the proof more clearly, let me first make a series of recalls.
Main notations and preliminary results:
$\mathcal{E}(\mathbb{R}^n)$ is the space of $C^\infty$-functions.
$\mathcal{E}'(\mathbb{R}^n)$ is the space of distributions with compact support.
$\mathcal{S}'(\mathbb{R}^n)$ is the space of tempered distributions.
$H^s(\mathbb{R}^n):=\lbrace u \in \mathcal{S}'(\mathbb{R}^n) : \Lambda^s u \in L^2(\mathbb{R}^n) \rbrace$, where $\Lambda^s u = \mathcal{F}^{-1}(\omega_s \widehat{u})$ for all $u \in \mathcal{S}'(\mathbb{R}^n)$, is the Hilbert-Sobolev space of order $s \in \mathbb{R}$.
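(Here I am assuming $\omega_s$ denotes the usual Bessel weight $\omega_s(\xi)=(1+|\xi|^2)^{s/2}$, which is the convention in Folland; with this choice, up to the normalization of the Fourier transform, \begin{align*} \left\| u \right\|_{H^s}^2 = \int_{\mathbb{R}^n} (1+|\xi|^2)^{s}\,|\widehat{u}(\xi)|^2 \, d\xi. \end{align*} If your source uses a different weight, the spaces are the same with equivalent norms.)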
$H_{loc}^s(\Omega):=\lbrace u \in \mathcal{D}'(\Omega) : \psi u \in H^s(\mathbb{R}^n), \forall \psi \in \mathcal{D}(\Omega)\rbrace$
$Theorem (1)$. If $s-k > n/2$, then we have the continuous inclusion $H^s(\mathbb{R}^n) \hookrightarrow \mathcal{E}^k(\mathbb{R}^n)$. In particular $\bigcap_{s \in \mathbb{R}} H^s(\mathbb{R}^n) \subset \mathcal{E}(\mathbb{R}^n)$.
$Theorem (2)$. If $m -k > n/2$, then we have the continuous inclusion $H^m(\Omega) \hookrightarrow \mathcal{E}^k(\Omega)$.
$Theorem (3)$. If $k \in \mathbb{N}$ and $s \in \mathbb{R}$, then $H^s(\mathbb{R}^n)= \lbrace u \in \mathcal{S}'(\mathbb{R}^n) : D^\alpha u \in H^{s-k}(\mathbb{R}^n), \forall |\alpha| \leq k \rbrace$ and $\left \| u \right \|_{H^s}$, $\sum_{|\alpha|\leq k} \left \| D^\alpha u \right \|_{H^{s-k}}$ are two equivalent norms.
$Lemma (2)$. Each distribution with compact support $u \in \mathcal{E}'(\mathbb{R}^n)$ is an element of $H^s(\mathbb{R}^n)$ for some $s \in \mathbb{R}$, i.e. $\mathcal{E}'(\mathbb{R}^n) \subset \bigcup_{s \in \mathbb{R}} H^s(\mathbb{R}^n)$.
$Lemma (3)$. A differential operator with constant coefficients $P(D)=\sum_{|\alpha| \leq m} a_\alpha D^\alpha$ of order $m \in \mathbb{N}$ is elliptic if and only if there exist constants $C, R > 0$ such that $|P(\xi)| \geq C |\xi|^m$ for all $|\xi| \geq R$.
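(As a quick sanity check of $Lemma (3)$: for the Laplacian $L=\Delta=\sum_{j=1}^n \partial_j^2$ the symbol is $P(\xi)=-|\xi|^2$, so \begin{align*} |P(\xi)| = |\xi|^2 \geq C|\xi|^2 \quad \forall\, |\xi| \geq R, \end{align*} with $C=1$ and any $R>0$, hence $\Delta$ is elliptic of order $m=2$. By contrast, the heat operator $\partial_t - \Delta_x$ on $\mathbb{R}^{1+n}$ has symbol $i\tau + |\xi|^2$, and on the line $\xi = 0$ the estimate $|\tau| \geq C(\tau^2+|\xi|^2)$ fails for large $|\tau|$, so it is not elliptic.)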
$Lemma (4)$. Let $P(D)$ be an elliptic operator with constant coefficients of order $m \in \mathbb{N}$. If $u \in H^s(\mathbb{R}^n)$ and $P(D)u \in H^s(\mathbb{R}^n)$, then $u \in H^{s+m}(\mathbb{R}^n)$.
$Elliptic-Regularity-Theorem$. Let $L=P(D)$ be an elliptic operator with constant coefficients of order $m \in \mathbb{N}$, and let $u \in \mathcal{D}'(\Omega)$ be a distribution. If there exists $s \in \mathbb{R}$ such that $Lu \in H_{loc}^s(\Omega)$, then $u \in H^{s+m}_{loc}(\Omega)$. If $Lu \in \mathcal{E}(\Omega)$, then $u \in \mathcal{E}(\Omega)$.
$Proof$. Note first that if $Lu \in \mathcal{E}(\Omega)$, then $\varphi Lu \in H^s(\mathbb{R}^n)$ for all $\varphi \in \mathcal{D}(\Omega)$ and all $s \in \mathbb{R}$, i.e. $Lu \in H_{loc}^s(\Omega)$ for every $s$. Hence, once the first statement is proved, we get $u \in H^{s+m}_{loc}(\Omega)$ for every $s$, and an application of $theorem(2)$ gives $\varphi u \in \mathcal{E}(\Omega)$ for all $\varphi \in \mathcal{D}(\Omega)$, so that $u \in \mathcal{E}(\Omega)$. It therefore suffices to prove the first statement: since $L$ is of order $m \in \mathbb{N}$, we want to apply $lemma (4)$ to show that $\varphi u \in H^{s+m}(\mathbb{R}^n)$ for all $\varphi \in \mathcal{D}(\Omega)$, which means $u \in H^{s+m}_{loc}(\Omega)$ by definition.
Let $\mathrm{supp}(\varphi) \subset U$, where $U$ is an open set with compact closure $\overline{U} \subset \Omega$. By the smooth version of Urysohn's lemma there is $\psi \in \mathcal{D}(\Omega)$ such that $\overline{U} \prec \psi \prec \Omega$ (this notation means that $\psi(x)=1$ for all $x \in \overline{U}$, $0\leq \psi \leq 1$, and $\mathrm{supp}(\psi) \subset \Omega$). Therefore $\psi u \in \mathcal{E}'(\mathbb{R}^n)$ is a distribution with compact support, and by $lemma (2)$ there exists $\sigma \in \mathbb{R}$ such that $\psi u \in H^{\sigma}(\mathbb{R}^n)$.
By decreasing $\sigma$ if necessary, we may assume that $s+m-\sigma=k \in \mathbb{N}$. (Why this assumption? Is it in the sense of $theorem(1)$ or $theorem(2)$?)
Consider $\psi_0=\psi$, $\psi_k=\varphi$ and define $\psi_1,\dots,\psi_{k-1}$ recursively, so that \begin{align*} (\star) \quad \mathrm{supp}(\psi_{j+1}) \prec \psi_j \prec U_j \subset \lbrace \psi_{j-1} =1 \rbrace \end{align*} It then suffices to prove that $\psi_j u \in H^{\sigma +j}(\mathbb{R}^n)$ for $j=0,\dots,k$, since for $j=k$ we obtain $\varphi u = \psi_k u \in H^{\sigma + k} (\mathbb{R}^n) = H^{s+m}(\mathbb{R}^n)$.
By the choice of the Urysohn functions in $(\star)$ (is this statement correct?), it suffices to prove the following inductive step: if $\varphi, \psi \in \mathcal{D}(\Omega)$ satisfy $\mathrm{supp}(\varphi) \prec \psi$ and $\psi u \in H^\sigma(\mathbb{R}^n)$, then $\varphi u \in H^{\sigma + 1}(\mathbb{R}^n)$. Now, by induction on $|\alpha|$ (via the Leibniz rule) one can prove that the commutator \begin{align*} [L, \varphi]u:=L(\varphi u) - \varphi Lu \end{align*} is a differential operator of order at most $m-1$ applied to $u$; by $theorem(3)$, if $f \in H^{\sigma}(\mathbb{R}^n)$ and $|\alpha| \leq m-1$, then $D^\alpha f \in H^{\sigma-(m-1)}(\mathbb{R}^n)$. Since the coefficients of $[L,\varphi]$ are supported in $\mathrm{supp}(\varphi)$, where $\psi = 1$, we have \begin{align*} [L,\varphi]u=[L, \varphi](\psi u) \in H^{\sigma-(m-1)}(\mathbb{R}^n) \end{align*} Moreover, by hypothesis $\varphi L u \in H^{s}(\mathbb{R}^n) \subset H^{\sigma-(m-1)}(\mathbb{R}^n)$ (note that $\sigma + 1 \leq s+m$ during the induction, i.e. $\sigma-(m-1) \leq s$), and then we can conclude that $L(\varphi u)=[L,\varphi](\psi u) + \varphi Lu \in H^{\sigma-(m-1)}(\mathbb{R}^n)$. Since also $\varphi u = \varphi \psi u \in H^{\sigma}(\mathbb{R}^n) \subset H^{\sigma-(m-1)}(\mathbb{R}^n)$, it follows from $lemma(4)$ that $\varphi u \in H^{\sigma + 1}(\mathbb{R}^n)$.
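(To make the commutator step concrete, take $L = \Delta$, so $m=2$: the Leibniz rule gives \begin{align*} [\Delta, \varphi]u = \Delta(\varphi u) - \varphi \Delta u = (\Delta \varphi)\,u + 2\,\nabla\varphi \cdot \nabla u, \end{align*} a differential operator of order $m-1=1$ applied to $u$, with smooth coefficients supported in $\mathrm{supp}(\varphi)$. Since $\psi = 1$ on $\mathrm{supp}(\varphi)$, applying this operator to $u$ or to $\psi u$ gives the same distribution, which is the key point of the inductive step.)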
Thanks for any help or suggestions.
About the doubts which I had: we can assume that $s+m-\sigma= k \in \mathbb{N}$ essentially to set up an inductive argument, and if $\sigma$ decreases then $-\sigma$ increases, so we can arrange $k > 0$.
Concerning the second question: by the way we chose our Urysohn functions, $\mathrm{supp}(\varphi) \prec \psi$ (which means $\psi(x)=1$ for all $x \in \mathrm{supp}(\varphi)=\mathrm{supp}(\psi_k)$, by definition) and $\psi u \in H^{\sigma}(\mathbb{R}^n)$; it therefore suffices to prove that $\varphi u = \psi_k u \in H^{\sigma + 1}(\mathbb{R}^n)$, because by $(\star)$ we then obtain $\psi_j u \in H^{\sigma + j}(\mathbb{R}^n)$ for every $j=1,\dots,k$.
It could be said better, but the idea is to pass from one form of the inductive argument to another, thanks to the particular choice of the Urysohn functions.