I'm reading [1] and I saw this part.
...

Briefly speaking, the authors claim that, by function approximation, a function $V: K_x \times \mathbb{R}^{+} \to \mathbb{R}$ that is at least continuously differentiable with respect to its first argument can be represented by a linearly parametrised function, where the parameter $w(s)$ is itself a continuous function of $s$.
As I understand it, function approximation theory only guarantees that for a continuous function $f$ on a compact set $K \subset \mathbb{R}^n$, there exists a sequence of approximations $f_n$ that converges uniformly to $f$ as $n \to \infty$. Applying this pointwise in $s$, for every $s \in \mathbb{R}^{+}$ we can find parameters $w_i(s)$ such that $V(\cdot, s) \approx \sum_i w_i(s)\, \phi_i(\cdot)$. My questions are: how can we show that the mappings $s \mapsto w_i(s)$ are continuous? What is the "uniform convergence theorem" invoked in the paper? How can we show that $w_i \in C^{0}(\mathbb{R}^{+})$? Is there any related theory?
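To make the question concrete, here is one construction I considered (my own assumption, not necessarily what the paper does): fix the basis $\phi_1, \dots, \phi_N$ and define $w(s)$ as the $L^2(K_x)$-projection coefficients of $V(\cdot, s)$ onto $\operatorname{span}\{\phi_i\}$:

```latex
% Hypothetical construction: L^2-projection coefficients (not taken from the paper)
w(s) = G^{-1} b(s), \qquad
G_{ij} = \int_{K_x} \phi_i(x)\,\phi_j(x)\,dx, \qquad
b_i(s) = \int_{K_x} V(x,s)\,\phi_i(x)\,dx .
```

In this special case, $s \mapsto b(s)$ is continuous (since $V$ is continuous and $K_x$ is compact, this follows from the dominated convergence theorem), and $G$ is a fixed invertible Gram matrix, so $w \in C^{0}(\mathbb{R}^{+})$. But I don't see how to obtain continuity for a general approximation scheme, where the parameters $w_i(s)$ need not arise from a projection.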
[1] T. Bian and Z.-P. Jiang, “Reinforcement Learning and Adaptive Optimal Control for Continuous-Time Nonlinear Systems: A Value Iteration Approach,” IEEE Trans. Neural Netw. Learning Syst., pp. 1–10, 2021, doi: 10.1109/TNNLS.2020.3045087.