Assume that $$ V(x)=\{x\in\Bbb R^n \mid F(x)=0\}, $$ where $F(x)$ is a set of polynomials in $n$ variables (i.e. $F(x)=0$ is a system of polynomial equations), is the solution set - a cloud of points in the $n$-dimensional vector space $\Bbb R^n$.
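To fix ideas, here is a minimal numerical sketch (my own toy example, not part of the setup above): membership in $V(x)$ when $F$ consists of the single polynomial $F(x_1,x_2)=x_1^2+x_2^2-1$, so that $V(x)$ is the unit circle in $\Bbb R^2$.

```python
import numpy as np

# Toy example (my own choice, not from the question): V as the zero set
# of one polynomial F(x1, x2) = x1^2 + x2^2 - 1, i.e. the unit circle.
def F(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

on_V = np.array([1.0, 0.0])   # lies on the circle: F vanishes
off_V = np.array([1.0, 1.0])  # off the circle: F is nonzero
print(abs(F(on_V)) < 1e-12, abs(F(off_V)) < 1e-12)  # True False
```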
Assume further that there is a neural network layer of $n$ neurons (and hence an $n$-dimensional vector of neuron activations), and that the solution set $V(x)$ can carry some meaning: for example, if the vector of neuron activations belongs to $V(x)$, then the input picture to the neural network contains a dog, and if the vector of neuron activations lies outside $V(x)$, then the input picture contains a cat.
Let us go further - the $n$-dimensional neural layer is connected to an $m$-dimensional neural layer by the usual neural transformation $$ y=\sigma(Wx+b), $$ where $x\in\Bbb R^n$, $y\in\Bbb R^m$, $W$ is a matrix of weights, $b$ is a vector of biases, and $\sigma$ is applied componentwise (i.e. $\sigma$ is actually a tuple of $m$ functions, usually nonlinear).
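The layer map can be sketched as follows (the dimensions $n=3$, $m=2$ and the choice $\sigma=\tanh$ are my own illustrative assumptions, not part of the question):

```python
import numpy as np

# Minimal sketch of the layer map y = sigma(W x + b).
n, m = 3, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((m, n))  # weight matrix, shape m x n
b = rng.standard_normal(m)       # bias vector in R^m

def layer(x):
    # affine map followed by the componentwise nonlinearity
    return np.tanh(W @ x + b)

x = np.ones(n)   # a point in R^n
y = layer(x)     # its image in R^m
print(y.shape)   # (2,)
```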
Finally, let us assume that the neural transformation $y=\sigma(\ldots)$ is applied to $V(x)$ and a new set $T(y)\subseteq\Bbb R^m$ is obtained. One may then ask whether $T(y)$ is the solution set of another system of polynomial equations $G(y)=0$ (in general the image of a real variety under such a map is only a semialgebraic set, by the Tarski-Seidenberg theorem, so this can at best hold approximately or under additional assumptions).
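To see the subtlety numerically, here is a toy computation (the choices of $V$, $W$, $b$ and the identity activation are my own illustrative assumptions): even a purely linear layer maps the unit circle onto an interval, which is semialgebraic but not the zero set of a nonzero polynomial in one variable.

```python
import numpy as np

# Toy illustration: the unit circle {x1^2 + x2^2 = 1} mapped by
# y = x1 + x2 (W = [1, 1], b = 0, identity activation). The image is
# the interval [-sqrt(2), sqrt(2)], a semialgebraic set rather than a
# real algebraic variety.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # samples of the circle

W = np.array([[1.0, 1.0]])   # 1 x 2 weight matrix, bias b = 0
y = (V @ W.T).ravel()        # image of the sampled circle in R^1

print(y.min(), y.max())  # approximately -sqrt(2) and sqrt(2)
```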
My question is: is there some mathematical theory - results, theorems - that describes the properties of $T(y)$ with respect to $V(x)$ under neural transformations, or that describes the properties of $G(y)$ with respect to $F(x)$?
I have heard that solution sets can be generalized to schemes and that each affine scheme corresponds to a particular ring. So my question could equally be asked about the behavior of these rings under neural transformations.
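For polynomial activations $\sigma$ (an assumption on my part - the common sigmoid and $\tanh$ activations are not polynomial), the ring-theoretic question can at least be stated precisely: the layer map $\varphi(x)=\sigma(Wx+b)$ is then a polynomial map, and pulling back coordinates gives a ring homomorphism $$ \varphi^{*}:\Bbb R[y_1,\dots,y_m]\to\Bbb R[x_1,\dots,x_n],\qquad y_j\mapsto\sigma_j\!\Bigl(\sum_{k=1}^{n}W_{jk}x_k+b_j\Bigr). $$ This descends to a homomorphism of quotient rings $\Bbb R[y]/(G)\to\Bbb R[x]/(F)$ whenever each $\varphi^{*}(G_i)$ lies in the ideal generated by the polynomials $F$, which is the ring-theoretic counterpart of the inclusion $\varphi(V(x))\subseteq\{G=0\}$.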
There are far deeper questions as well, for example: under which circumstances does the collection of solution sets form a particular category (a subcategory of the category of sets) that is cartesian closed? It is known that such categories can serve as interpretations of type systems/logics, so in such circumstances the collection of solution sets would form an interpretation of some logic; i.e., such a collection of solution sets could encode some meaning. This deeper connection is the motivation for my current question.
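The cartesian-closed structure in question is exactly what makes currying possible; as a purely illustrative sketch (my own, not part of the question), here is the adjunction $\mathrm{hom}(A\times B,C)\cong\mathrm{hom}(A,\mathrm{hom}(B,C))$ expressed with Python functions:

```python
# Sketch of the cartesian-closed adjunction hom(A x B, C) ~ hom(A, hom(B, C))
# (currying / uncurrying), purely illustrative.
def curry(f):
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
print(uncurry(curry(add))(2, 3))  # 5, same as add(2, 3)
```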
I have found some efforts in this direction (https://arxiv.org/abs/1805.07091), but I would like to know more about what is already known about the behavior of solution sets (and their systems of polynomials) under neural transformations.
My question is about the neural transformations themselves, i.e. about the inference direction. But similar questions can be asked about the backpropagation direction as well - i.e. about the transformations that happen during the training of neural networks.
Maybe one can also consider the algebraic-topological properties of the solution sets (e.g. their cochain complexes and cohomology groups) and see how these transform under neural transformations?
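A toy computation in this spirit (the choices $V$ = unit circle, $W=I$, $b=0$ and the polynomial activation $\sigma(t)=t^2$ are my own assumptions): the componentwise square maps the circle onto the segment $\{(u,1-u):0\le u\le1\}$, so $G(y)=y_1+y_2-1$ vanishes on the image, while topologically the loop (with $H^1\cong\Bbb Z$) collapses to a contractible set (with $H^1=0$).

```python
import numpy as np

# Toy example: unit circle pushed through sigma(t) = t^2 componentwise.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
image = circle ** 2  # componentwise sigma(t) = t^2

# Algebraically: G(y) = y1 + y2 - 1 vanishes on the image,
# since cos^2 + sin^2 = 1.
print(np.max(np.abs(image[:, 0] + image[:, 1] - 1.0)))  # ~ 0

# Topologically: the loop is flattened onto the segment
# {(u, 1 - u) : 0 <= u <= 1}, which is contractible.
print(image[:, 0].min(), image[:, 0].max())  # ~ 0 and ~ 1
```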