There is ongoing work on algorithmic approximations of uncomputable functions: http://archive.cone.informatik.uni-freiburg.de/pubs/real-world-halting.pdf (and https://ip.ios.semcs.net/articles/computability/com190250, a more recent result in a top computability journal). The first article, for example, constructs approximations to the halting problem; in fact, I am not aware of any other work along these lines. I am also only starting to read https://link.springer.com/book/10.1007/978-3-319-43669-2, and I am a bit wary of that text, since it may be more esoteric than the approximation efforts in the article above.
Neural networks (even shallow ones) are known to be universal approximators of continuous functions (the universal approximation theorem), a result often discussed in connection with the Kolmogorov–Arnold representation theorem: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold_representation_theorem
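To make the expressivity claim concrete, here is a minimal sketch (my own illustration, not part of the theorem statement): a one-hidden-layer ReLU network with just two units represents |x| exactly, since |x| = relu(x) + relu(-x).

```python
def relu(z):
    # Rectified linear unit, the standard neural-network activation.
    return max(0.0, z)

def shallow_net(x):
    # One hidden layer with two units: hidden weights [1, -1],
    # output weights [1, 1], all biases zero.
    return 1.0 * relu(1.0 * x) + 1.0 * relu(-1.0 * x)

for x in (-2.5, 0.0, 3.0):
    print(shallow_net(x), abs(x))  # the two columns agree exactly
```

For general continuous functions the approximation is not exact like this, but the theorem guarantees it can be made arbitrarily good on a compact set by adding hidden units.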
My question is: is there a Kolmogorov–Arnold-style approximation result connecting neural networks and uncomputable functions?
Of course, such results could be obtained in two steps: first derive a symbolic approximation algorithm, and then approximate that algorithm itself with a neural network ("neuralize" it).
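The symbolic first step can itself be sketched very simply. The following is my own toy illustration (not the cited article's construction): a step-bounded approximation to the halting problem, where a program is modeled abstractly by a transition function, an initial state, and a halting predicate. All names here are hypothetical.

```python
def bounded_halting_guess(step, state, is_halted, max_steps=1000):
    """Approximate the halting predicate by bounded simulation.

    Returns True ("halts") if the program reaches a halting state
    within max_steps transitions, and False ("probably loops")
    otherwise.  The answer is wrong only on programs that halt after
    more than max_steps steps, so the error set shrinks as the
    budget grows -- an approximation, never an exact decision.
    """
    for _ in range(max_steps):
        if is_halted(state):
            return True
        state = step(state)
    return False

# A program that counts down from 50 to zero: it halts.
halts = bounded_halting_guess(lambda n: n - 1, 50, lambda n: n == 0)

# A program that counts up forever: it never halts.
loops = bounded_halting_guess(lambda n: n + 1, 0, lambda n: n < 0)

print(halts, loops)  # True False
```

A network trained to imitate this function on encoded program/state pairs would then be the "neuralization" of the symbolic approximation, which is exactly the indirect route the question hopes to avoid.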
But maybe some results can be obtained for neural networks directly, without the requirement that a symbolic (white-box, non-neural) algorithm be devised first?