Can every shape in computer vision, and every motion in robotics, be represented by algebraic-geometric objects, with motions represented as mappings between them?
Motivation for my question: I am going through https://web.math.princeton.edu/~jkileel/thesis.pdf, and it seems to me that this is possible, though it may be impractical. Algebraic topology and homotopy theory could then serve as tools for a more coarse-grained analysis, with homological algebra providing a still coarser, linearized view of homotopy. These would be the right tools for abstracting and generalizing shapes and motion patterns, so the whole universe of shapes and motions, together with their abstractions, could be represented by mathematical tools.

The elaborate paper https://arxiv.org/abs/2106.14587 describes the composition and learning of (deep) neural networks in categorical language. If mathematics can represent every shape or motion, then that categorical language fully describes any possible deep neural network, and the search for optimal architectures and optimal learning algorithms reduces to manipulating the categories and finding distinguished (optimal) objects in them. If a machine learning task consists of mathematically definable objects (e.g., shapes belonging to certain classes), then translating this information into the categorical framework of the cited article could yield the deep architecture and learning algorithm that is optimal for the given class of shapes or motion patterns.
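To make the premise of the question concrete: one standard instance of "motions as algebraic-geometric objects" is the rotation group SO(3), which is a real algebraic variety in the 3x3 matrices cut out by the polynomial equations R^T R = I and det(R) = 1, with composition of motions given by a polynomial map (matrix multiplication). A minimal numerical sketch (the helper names here are my own, not from the cited thesis):

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis by angle theta (a point on SO(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def on_so3(R, tol=1e-10):
    """Check the polynomial equations cutting out SO(3) as a real
    algebraic variety in R^(3x3): R^T R - I = 0 and det(R) - 1 = 0."""
    orthogonality_residual = np.max(np.abs(R.T @ R - np.eye(3)))
    determinant_residual = abs(np.linalg.det(R) - 1.0)
    return orthogonality_residual < tol and determinant_residual < tol

R1 = rotation_z(0.3)
R2 = rotation_z(1.1)
assert on_so3(R1) and on_so3(R2)
# Composition of motions is matrix multiplication, a polynomial map,
# so composed rotations remain on the variety.
assert on_so3(R1 @ R2)
```

This only checks polynomial membership numerically, but it illustrates the sense in which rigid motions live on an algebraic variety and compose via algebraic maps, which is the starting point of the question above.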