In response to the comments, I'd like to put this in an abstract way, hoping it makes things clearer: f is a well-defined function of countably many inputs: f(a1, ..., an, ...). For a set of n objects {a1, a2, ..., an}, we define a sequence of functions f[n](a1, ..., an) = f(a1, ..., an, 0, 0, ...), and consider the relationship between the computational complexity of algorithms for the f[n] and the computability of f.
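The truncation f[n] can be made concrete by representing an infinite input as a callable i ↦ a_i and padding the finite arguments with zeros. This is a minimal sketch (the representation of infinite sequences as callables is my assumption, not something fixed in the question):

```python
def restrict(f, n):
    """Return f[n](a1, ..., an) = f(a1, ..., an, 0, 0, ...),
    where f takes an infinite sequence given as a callable i -> a_i."""
    def f_n(*args):
        assert len(args) == n
        # pad the finite tuple with zeros to get an infinite sequence
        seq = lambda i: args[i] if i < n else 0
        return f(seq)
    return f_n

# hypothetical example f that reads only terms 0 and 2 of its input
f = lambda seq: seq(0) + seq(2)
f2 = restrict(f, 2)   # f[2](a1, a2) = f(a1, a2, 0, 0, ...)
```

Here f2(4, 9) evaluates f on the sequence (4, 9, 0, 0, ...), so the third term is read as 0.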
To illustrate my point, consider the following packing problem from discrete mathematics: given n squares, determine the smallest (infimum) area of a rectangle that contains all of them without any overlap.
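For n = 2 the problem admits a closed form, which already shows that each instance is a genuine finite computation. Assuming axis-aligned placements, any enclosing rectangle of two squares with sides a ≥ b must have both dimensions at least a, and the L-shaped region left beside the larger square must admit a b-by-b square, which forces one dimension to be at least a + b; placing the squares side by side attains this, giving area a(a + b). A sketch under those assumptions (the case analysis is mine, not from the question):

```python
def enclosing_area_two_squares(s1, s2):
    """Smallest area of an axis-aligned rectangle containing two
    non-overlapping axis-aligned squares of sides s1 and s2."""
    a, b = max(s1, s2), min(s1, s2)
    # side-by-side placement: rectangle of dimensions (a + b) x a
    return a * (a + b)
```

For example, squares of sides 3 and 2 need a 5-by-3 rectangle of area 15. For general n no such formula is known, and exact algorithms take time growing with n, which is the point of the question.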
Suppose the problem cannot be solved in O(1) time; that is, on an input of n squares the complexity A(n) is not O(1), and hence unbounded. Then as n tends to infinity, A(n) tends to infinity, so the algorithm "diverges".
Now suppose we are given an infinite sequence of squares and ask for a procedure that, for any such sequence, determines the area defined above. (This area is finite as long as the sum of the areas of the squares converges.) Is it true that no such procedure can exist — in other words, that the problem for infinitely many squares is non-computable?
(If it is computable, how can the complexity of its algorithm diverge as n tends to infinity?)
If the above is true, can we conclude that whenever a problem whose input involves infinitely many objects is computable, its corresponding finite problem must be solvable in O(1)?
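One way to make "computable on infinite input" precise — this is an assumption in the spirit of oracle / Type-2 computability, since the question does not fix a machine model — is to present the sequence as a callable and require the algorithm to halt after querying only finitely many terms. Under that reading, a computable f can depend on only finitely many inputs per run, which is the intuition behind the O(1) conjecture above. A minimal sketch with a hypothetical f:

```python
def f(seq):
    """A function of an infinite sequence (given as a callable i -> a_i)
    that is trivially computable: it halts after reading two terms,
    so its running time is O(1) regardless of the sequence."""
    return seq(0) + seq(1)

# an infinite sequence that is eventually zero
seq = lambda i: (5, 7)[i] if i < 2 else 0
```

Here f(seq) returns 12 after exactly two queries; any algorithm that halts on every infinite input can likewise inspect only a finite prefix before answering, although which prefix suffices may vary with the input.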