Approximating a general state-space system by a neural network


I need validation or falsification for the following idea:

Suppose I have a general, discrete-time state-space system on a compact manifold $M$: $$\phi:M\rightarrow M$$ $$y:M\rightarrow \mathbb{R}^m$$ where $\phi$ is a diffeomorphism and $y$ is continuous and invertible.

Now I have an observation of the system at time $t$: $$x_t=y(p_t), \quad p_t\in M$$

and describe the evolution of the system by $$\tilde{y}=y\circ\phi\circ y^{-1}$$ which maps $x_t$ to $x_{t+1}$: $$x_{t+1}=\tilde{y}(x_t)$$

Since $M$ is compact and $y$ is continuous, $I=y(M)$ is a compact subset of $\mathbb{R}^m$ as well, with $x_t, x_{t+1} \in I$. Note that $\tilde{y}$ is continuous: a continuous bijection from a compact space to a Hausdorff space is a homeomorphism onto its image, so $y^{-1}$ is continuous on $I$. Now, a universal approximator $f$ can approximate any continuous function on a compact subset of $\mathbb{R}^m$ to arbitrary accuracy, which means that I could achieve $$\sup_{x\in I}||\tilde{y}(x)-f(x)||^2<\epsilon$$ for example via a neural network, trained on observation pairs $(x_i, x_{i+1})$, $i \in \mathbb{Z}$.
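To make this concrete, here is a minimal numerical sketch of the regression step on an assumed toy instance (not from the question): $M$ is the unit circle, $\phi$ is rotation by a fixed angle, and $y$ embeds $M$ into $\mathbb{R}^2$, so $\tilde{y}$ is just the rotation acting on the compact image $I$. Instead of gradient-trained weights, it uses random tanh features with a closed-form linear readout, which is itself a universal approximator on compact sets:

```python
import numpy as np

# Toy instance (illustrative choices, not from the question):
# M = unit circle, phi = rotation by a fixed angle, y = embedding into R^2,
# so tilde_y(x) = R @ x on the compact image I.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples x_t in I
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
Y = X @ R.T                                            # x_{t+1} = tilde_y(x_t)

# One-hidden-layer network with random tanh features and a least-squares
# readout -- a simple stand-in for a trained neural network.
W = rng.normal(scale=3.0, size=(2, 200))
b = rng.uniform(-np.pi, np.pi, size=200)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

pred = np.tanh(X @ W + b) @ beta
mse = float(np.mean((pred - Y) ** 2))
```

On this example the fit is essentially exact, since the target map is smooth on a compact set; a gradient-trained network would play the same role.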

I also assume that I can still approximate the conditional expectation $\mathbb{E}[x_{t+1} \mid x_t]$ if $\phi$ is stochastic, since the evolution still takes place on a compact manifold, but I struggle to come up with a proof.

So my question: Is my logic right or are there any flaws?

Thanks a lot, Sarem