It is not difficult to prove the following theorem.
Theorem 1. Let $f_n\in C^1[a,b]$ be such that $f_n\rightrightarrows f$ and $f'_n\rightrightarrows g$. Then $f$ is differentiable and $g=f'$ on $[a,b]$. Here $\rightrightarrows$ denotes uniform convergence.
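As a side remark, the hypothesis $f'_n\rightrightarrows g$ in Theorem 1 cannot be dropped. A classic example (my own illustration, not from the question) is $f_n(x)=\sin(n^2x)/n$: it converges uniformly to $0$, but the derivatives $f'_n(x)=n\cos(n^2x)$ blow up. A quick numerical check:

```python
import numpy as np

# f_n(x) = sin(n^2 x)/n converges uniformly to 0 on [0, 1]
# (since sup |f_n| <= 1/n), but f'_n(x) = n cos(n^2 x) does not
# converge at all: sup |f'_n| = n grows without bound.
x = np.linspace(0.0, 1.0, 10_001)
for n in (10, 100, 1000):
    f_n = np.sin(n**2 * x) / n      # uniformly small
    df_n = n * np.cos(n**2 * x)     # unbounded as n grows
    print(n, np.abs(f_n).max(), np.abs(df_n).max())
```

So uniform convergence of the functions alone says nothing about the derivatives; this is exactly why the proof below keeps track of the derivative bounds $2^{-j}$.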
This theorem helps us to prove that $C^k[a,b]$ is a Banach space with norm $\lVert f \rVert=\sum_{i=0}^{k}\max\limits_{x\in [a,b]}|f^{(i)}(x)|$.
But my question is actually different: I want to prove the following theorem.
Theorem 2. Let $A$ be a closed subset of $\mathbb{R}^2$. Then there exists $f\in C^1(\mathbb{R}^2;\mathbb{R})$ such that $\{(x,y)\in \mathbb{R}^2: f(x,y)=0\}=A$. Here $C^1(\mathbb{R}^2;\mathbb{R})$ denotes the space of continuously differentiable functions $f:\mathbb{R}^2\to \mathbb{R}$.
Proof. If $A=\mathbb{R}^2$, take $f\equiv 0$. If $A\subsetneq \mathbb{R}^2$, then $U:=\mathbb{R}^2\setminus A$ is open and nonempty, and since $\mathbb{R}^2$ is a separable metric space it is not difficult to show that $U=\bigcup_{j=1}^{\infty} B_j$, where the $B_j$ are open balls with rational centers and rational radii.
For each $j\geq 1$, we can construct $\phi_j\in C^1(\mathbb{R}^2;\mathbb{R})$ such that $$\phi_j(x,y) \begin{cases} >0, & \text{if } (x,y)\in B_j, \\ =0, & \text{if } (x,y)\notin B_j, \end{cases}$$ and $0\leq \phi_j\leq \frac{1}{2^j}$, $\left|\frac{\partial \phi_j}{\partial x}\right|\leq \frac{1}{2^j}$, and $\left|\frac{\partial \phi_j}{\partial y}\right|\leq \frac{1}{2^j}$.
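One explicit choice of such a $\phi_j$ (a standard construction, spelled out here for completeness) is the following. Writing $z=(x,y)$ and $B_j=B(z_j,r_j)$, set
$$\psi_j(z):=\bigl(\max\{0,\;r_j^2-|z-z_j|^2\}\bigr)^2,$$
which is $C^1$ on $\mathbb{R}^2$ (the square of a $\max\{0,\cdot\}$ of a smooth function), positive exactly on $B_j$, and satisfies $\max\psi_j=r_j^4$ and $|\nabla\psi_j|\leq 4r_j^3$. Then
$$\phi_j:=\frac{2^{-j}}{1+r_j^4+4r_j^3}\,\psi_j$$
has the same sign pattern and obeys all three bounds $\phi_j,\;\bigl|\tfrac{\partial\phi_j}{\partial x}\bigr|,\;\bigl|\tfrac{\partial\phi_j}{\partial y}\bigr|\leq 2^{-j}$.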
One can define $f:\mathbb{R}^2\to \mathbb{R}$ as follows: $$f(x,y):=\sum_{j=1}^{\infty}\phi_j(x,y).$$ For any $(x,y)\in \mathbb{R}^2$, the series $\sum_{j=1}^{\infty}\phi_j(x,y)$ converges since $0\leq \phi_j\leq 2^{-j}$. It is also easy to show that $f(x,y)=0$ if and only if $(x,y)\in A$. It remains to show that $f\in C^1(\mathbb{R}^2;\mathbb{R})$. I can justify that intuitively as follows: by the Weierstrass $M$-test, the estimates give $\sum_{j=1}^{n}\phi_j(x,y)\rightrightarrows f(x,y)$, $\sum_{j=1}^{n}\frac{\partial \phi_j}{\partial x}(x,y)\rightrightarrows f_1(x,y)$, and $\sum_{j=1}^{n}\frac{\partial \phi_j}{\partial y}(x,y)\rightrightarrows f_2(x,y)$ for some functions $f_1$ and $f_2$, which are continuous as uniform limits of continuous functions. Then some variation of Theorem 1 should imply that $\frac{\partial f}{\partial x}=f_1$ and $\frac{\partial f}{\partial y}=f_2$, and hence $f\in C^1(\mathbb{R}^2;\mathbb{R})$.
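Here is a small numerical sketch of this construction (the particular balls and test points are made up for the demo, and only finitely many balls are used), with $\phi_j(z)=c_j\max\{0,r_j^2-|z-z_j|^2\}^2$ scaled so that $\phi_j$ and its partials are bounded by $2^{-j}$:

```python
# Finitely many balls standing in for the cover of U = R^2 \ A.
balls = [((0.0, 0.0), 1.0), ((2.0, 0.0), 0.5)]

def phi(z, center, r, scale):
    # C^1 bump: positive exactly inside B(center, r), zero outside.
    t = max(0.0, r**2 - (z[0] - center[0])**2 - (z[1] - center[1])**2)
    return scale * t**2

def f(z):
    total = 0.0
    for j, (c, r) in enumerate(balls, start=1):
        # sup psi_j = r^4 and sup |grad psi_j| <= 4 r^3, so this scale
        # enforces the 2^{-j} bounds on phi_j and its first partials.
        scale = 2.0**(-j) / (1.0 + r**4 + 4.0 * r**3)
        total += phi(z, c, r, scale)
    return total

print(f((0.0, 0.0)))   # inside B_1 -> positive
print(f((2.1, 0.0)))   # inside B_2 -> positive
print(f((5.0, 5.0)))   # in A (outside every ball) -> exactly 0
```

The key point the demo illustrates is that $f$ vanishes *exactly* on the complement of the balls, since each $\phi_j$ is identically zero there rather than merely small.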
I am a bit confused by the last step of the proof, where we need to apply something like Theorem 1. Can anyone explain that step, please?
Thank you!
Addendum: After some thought I realized that there is no need to prove a variation of Theorem 1. Our goal is to show that $\frac{\partial f}{\partial x}(x_0,y_0)=f_1(x_0,y_0)$ for any $(x_0,y_0)\in \mathbb{R}^2$. Fix $(x_0,y_0)$ and consider the interval $[a,b]:=[x_0-1,x_0+1]$. Define $F_n:[a,b]\to \mathbb{R}$ by $F_n(x)=\sum_{j=1}^{n}\phi_j(x,y_0)$. Since $\sum_{j=1}^{n}\phi_j(x,y) \rightrightarrows f(x,y)$, it follows that $F_n(x)\rightrightarrows f(x,y_0)$ on $[a,b]$. Moreover, one can verify that $F_n\in C^1[a,b]$ and $F'_n(x)\rightrightarrows f_1(x,y_0)$. So by Theorem 1, $\frac{d}{dx}f(x,y_0)=f_1(x,y_0)$ for every $x\in [a,b]$. But the LHS of this equality is exactly $\frac{\partial f}{\partial x}(x,y_0)$, i.e., $\frac{\partial f}{\partial x}(x,y_0)=f_1(x,y_0)$ for every $x\in [a,b]$. In particular, $\frac{\partial f}{\partial x}(x_0,y_0)=f_1(x_0,y_0)$. Is this reasoning correct?