Several general sources (https://en.wikipedia.org/wiki/Category_of_topological_spaces, https://en.wikibooks.org/wiki/Topology/Manifolds/Categories_of_Manifolds) mention the forgetful functor from the category of manifolds M (or differentiable manifolds Diff) to the category of topological spaces Top: Diff -> Top.
My question is about an inverse (or perhaps adjoint?) functor Top -> Diff. Does mathematics say anything about it; are there theories about it, or algorithms that allow one to enumerate such manifolds? Any hints about what is known about this functor would help.
https://golem.ph.utexas.edu/category/2008/05/convenient_categories_of_smoot.html is one discussion that mentions the problems of defining categories of manifolds and only briefly touches on the inverse functor in which I am interested.
I have neural-network applications in mind. On the one hand, there are topological approaches to model theory (e.g. sheaves for HOL by Awodey), so the mapping theory <-> topological spaces has been taken care of. On the other hand, all the empirical data about the functioning of a neural network are point clouds, and the first efforts to build an analytical theory of neural networks work with manifolds and functions. So, if we could connect manifolds (a concrete instance) with topological spaces (a concrete instance), then we could proceed all the way from empirical data about the functioning of neural networks up to theories and mappings between theories. While the resulting structure may be hopelessly formidable, the methods of algebraic topology (computing homology and cohomology groups of neural manifolds and topological spaces) can tame this structure and give some general results. So, my question is about this missing link.
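As a toy illustration of the algebraic-topology step mentioned above (computing homology groups), here is a minimal, self-contained sketch that computes Betti numbers of a finite simplicial complex over GF(2) by Gaussian elimination on boundary matrices. Real pipelines build such complexes from point-cloud data (e.g. via persistent homology); the linear algebra below is only the simplest version of that step, and all names here are my own, not from any library.

```python
# Toy sketch: Betti numbers of a finite simplicial complex over GF(2).
# The complex below is a hollow triangle (a triangulated circle S^1),
# so we expect b0 = 1 (one connected component) and b1 = 1 (one loop).
from itertools import combinations

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2); each row is given as an int bitmask."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot                 # lowest set bit = pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def betti_numbers(simplices):
    """simplices: list of vertex-id tuples, closed under taking faces."""
    by_dim = {}
    for s in simplices:
        by_dim.setdefault(len(s) - 1, []).append(tuple(sorted(s)))
    top = max(by_dim)
    index = {k: {s: i for i, s in enumerate(v)} for k, v in by_dim.items()}
    ranks = {}                               # rank of boundary map d_k: C_k -> C_{k-1}
    for k in range(1, top + 1):
        rows = []
        for s in by_dim[k]:
            mask = 0
            for face in combinations(s, k):  # all (k-1)-dimensional faces of s
                mask |= 1 << index[k - 1][face]
            rows.append(mask)
        ranks[k] = rank_gf2(rows)
    # Over a field: b_k = dim C_k - rank d_k - rank d_{k+1}.
    return [len(by_dim[k]) - ranks.get(k, 0) - ranks.get(k + 1, 0)
            for k in range(top + 1)]

circle = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]
print(betti_numbers(circle))                 # -> [1, 1]
```

Filling in the 2-simplex `(0, 1, 2)` kills the loop, giving Betti numbers `[1, 0, 0]`, which is the kind of qualitative distinction (a cloud sampled from a circle vs. from a disk) these invariants are meant to detect.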
Here is an abbreviated answer.
Consider two categories: $Diff$, the category of smooth manifolds, and $Top$, the category of topological manifolds. (There is one more important intermediate category, namely $PL$, the category of piecewise-linear manifolds, which I will ignore for the sake of brevity.)
There is an obvious forgetful functor $Diff\to Top$, and one of the fundamental questions analyzed by many (some of the best) geometric topologists over the last 100 or so years is: "to what extent is this functor an isomorphism?" There are several issues one needs to address in order to get a meaningful question out of this, since there are vastly more continuous maps than smooth maps between smooth manifolds. On the other hand, every continuous map between smooth manifolds is a limit of a sequence of smooth maps. One arrives at the following set of questions:
Is every topological manifold smoothable? I.e. does every topological manifold admit a smooth atlas?
If a topological manifold does admit a smooth structure, is it unique (up to diffeomorphism)? A slightly more refined version of this question is: suppose that $M_1, M_2$ are smooth manifolds and $h: M_1\to M_2$ is a homeomorphism. Is $h$ isotopic to a diffeomorphism? (An isotopy here means a 1-parameter family of homeomorphisms which starts with the given homeomorphism and terminates at a diffeomorphism.)
It turns out that the answers to these questions depend on the dimension: if the manifolds in question have dimension $\le 3$, then both questions have a positive answer. In contrast, starting in dimension 4, the answers to both questions are negative. This does not mean that the story ends there. For instance, there is a series of invariants of algebraic-topological nature which are responsible for telling whether a given topological manifold is smoothable, or whether a given homeomorphism is isotopic to a diffeomorphism.
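To make "series of invariants" slightly more concrete (a standard fact from smoothing theory, stated here without proof): for a topological manifold $M$ of dimension $\ge 5$ there is the Kirby–Siebenmann class
$$\kappa(M) \in H^4(M;\ \mathbb{Z}/2\mathbb{Z}),$$
which vanishes if and only if $M$ admits a $PL$ structure; once a $PL$ structure exists, the further obstructions to smoothing live in cohomology with coefficients in the finite groups $\Theta_n$ of exotic spheres.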
Much more can be said here, but, IMHO, all of this is irrelevant if one is interested in data analysis and, more specifically, neural networks. My guess is that your question stems from what is known as the "manifold hypothesis," which boldly proclaims that "interesting"/"natural"/"frequently occurring" data sets in higher-dimensional Euclidean spaces $E^n$ should be well-approximated by (much) lower-dimensional smooth submanifolds. Personally, I do not believe the manifold hypothesis, but if one takes it seriously, then a natural question arises as to which submanifolds should serve as approximants and how to find them. (One should keep in mind that in the area of data analysis people frequently confuse and conflate submanifolds of $E^n$ and smooth maps of compact manifolds $M$ into $E^n$. However, if the dimension of $M$ is $<n/2$, then one can always perturb a smooth map a bit to get an embedded submanifold as the image.)

Now, if one assumes that a (finite) data cloud $C$ in $E^n$ is a finite approximation of some compact subset $K\subset E^n$ (which may or may not be a submanifold), then one can treat the problem of finding an optimal submanifold approximating $C$ as the problem of finding a kind of adjoint to the forgetful functor, namely one going from the category of compact subsets of $E^n$ back to the category of compact smooth submanifolds of $E^n$. (Which connects us to your original question.) There are papers claiming to give algorithms for finding such optimal submanifolds, but, as I said earlier, I simply disagree with the manifold hypothesis.

My personal take is that, besides smooth submanifolds, as Mandelbrot liked to say, "fractals are everywhere," and fractals should be treated as such and not approximated by compact low-dimensional submanifolds. How this is actually supposed to be done (finding self-similar patterns in large data sets), I do not know...
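For what the very simplest instance of "find an approximating submanifold" looks like in practice, here is a sketch restricted to *linear* submanifolds (affine subspaces): estimate the intrinsic dimension of a cloud in $E^3$ by PCA, implemented from scratch with power iteration and deflation. This is only an illustration of the idea, not any particular paper's algorithm; real manifold-learning methods handle curved submanifolds, and the threshold below is an arbitrary choice of mine.

```python
# Sketch: estimate the dimension of the affine subspace best approximating
# a point cloud in E^3, via PCA (eigenvalues of the sample covariance).
import random

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def top_eigenpair(A, iters=200):
    """Power iteration for the dominant eigenpair of a symmetric matrix."""
    v = normalize([1.0] * len(A))
    for _ in range(iters):
        v = normalize(mat_vec(A, v))
    lam = sum(x * y for x, y in zip(v, mat_vec(A, v)))
    return lam, v

def pca_eigenvalues(points):
    """All eigenvalues of the sample covariance, via deflation."""
    d = len(points[0])
    mean = [sum(p[i] for p in points) / len(points) for i in range(d)]
    X = [[p[i] - mean[i] for i in range(d)] for p in points]
    C = [[sum(x[i] * x[j] for x in X) / len(X) for j in range(d)]
         for i in range(d)]
    eigs = []
    for _ in range(d):
        lam, v = top_eigenpair(C)
        eigs.append(lam)
        # deflate: C <- C - lam * v v^T, removing the found component
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(d)] for i in range(d)]
    return eigs

# A noisy 1-dimensional cloud in E^3: points near the line t * (1, 2, 3).
random.seed(0)
cloud = [[t * a + random.gauss(0, 0.01) for a in (1.0, 2.0, 3.0)]
         for t in (i / 100 for i in range(100))]

eigs = pca_eigenvalues(cloud)
# Count directions carrying non-negligible variance (threshold is arbitrary).
est_dim = sum(1 for lam in eigs if lam > 0.01 * max(eigs))
print(est_dim)   # the cloud is well-approximated by a 1-dimensional subspace
```

The sketch recovers dimension 1 for this cloud; on a genuinely fractal set, by contrast, the spectrum decays without a clean gap, which is one way to see the objection to the manifold hypothesis raised above.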