I believe I understand the definition of a noncomputable problem from an introductory computer science class, but I don't understand what it really means.
One of my hypotheses was that a noncomputable problem really means you would need an ever-increasing program size to compute all instances of the problem up to size $n$ within some finite time bound $T$. Since the definition of a TM requires a finite program, the problem would then be noncomputable. Is there a theorem along those lines?
Do we have orders of growth for the size of programs that solve all instances of a problem up to size $n$? (I.e., for a noncomputable problem it seems to me it would have to be anything above constant.)
This seems to suggest further computability classes based on that growth rate: constant (computable), $\ln n$, $n$, $\exp(n)$, ...
Am I making any sense here?
Any particular instance of the problem has an answer (even if we don't happen to know it). The same for any finite collection of instances. If you're using a finite alphabet, there are only finitely many instances up to size $n$, and so there's certainly a "lookup table" style program that computes the answers to all these. So saying there's a program that solves all instances up to size $n$ in finite time isn't really saying anything useful.
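To make the lookup-table point concrete, here is a minimal sketch in Python. The problem and its answers are entirely made up; the point is only that for any fixed $n$ the finitely many answers can be hard-coded, so *some* finite program decides all instances up to size $n$, even if no single program decides every instance.

```python
# Hypothetical hard-coded answers for a toy yes/no problem over the
# alphabet {0, 1}, for every instance of size <= 2. How these answers
# were obtained is irrelevant -- only that a finite table exists.
ANSWERS_UP_TO_N = {
    "": True,
    "0": False, "1": True,
    "00": False, "01": True, "10": True, "11": False,
}

def solve(instance: str) -> bool:
    """Decide any instance of size <= n by pure table lookup."""
    return ANSWERS_UP_TO_N[instance]

print(solve("01"))  # True
```

Of course the table (and hence the program) grows with $n$, which is exactly the size issue raised in the question.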
EDIT: Similarly, since there are only finitely many programs of size $\le N$, if for every $n$ there is a program of size $\le N$ that solves all instances of size $\le n$, then by pigeonhole some single one of those programs must work for arbitrarily large $n$, and that program therefore solves all instances.
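The pigeonhole step above can be illustrated with a toy simulation. The assignment `chosen[n]` (which program of size $\le N$ handles instances up to size $n$) is made up here; the point is only that with finitely many programs, one index must recur for at least a constant fraction of the sampled $n$.

```python
from collections import Counter

NUM_PROGRAMS = 4  # finitely many programs of size <= N

# Hypothetical assignment: for each n, the index of some program of
# size <= N that solves all instances of size <= n.
chosen = [n % NUM_PROGRAMS for n in range(1000)]

# Pigeonhole: some index is chosen for at least 1000 / NUM_PROGRAMS
# of the sampled n, hence (in the real argument) for arbitrarily
# large n.
index, count = Counter(chosen).most_common(1)[0]
assert count >= 1000 // NUM_PROGRAMS
print(count)  # 250
```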