Complexity classes like P, NP, and #P are defined to have a time complexity bounded by a polynomial. But a polynomial in what? Let's call it $n$. Then $n$ is defined as:
- For integers, the number of digits (as in, for example, addition)
- For fixed-point numbers, the number of digits of precision (as in, for example, the $\exp$ function)
- For containers, the number of entries (as in, for example, a sorting algorithm)
- For square matrices, the dimension (as in, for example, matrix multiplication)
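To make the first and third measures concrete, here is a small sketch (the helper names `digitCount` and `entryCount` are mine, not standard): for an integer input, $n$ is the digit (or bit) count, not the value itself, while for a container $n$ is the number of entries.

```haskell
-- Hypothetical helpers illustrating the size measures above.
-- For an integer input, n is the digit count, not the value itself:
digitCount :: Integer -> Int
digitCount = length . show . abs

-- For a container, n is the number of entries:
entryCount :: [a] -> Int
entryCount = length
```

For example, `digitCount 1000000` is 7 even though the value is a million, and `entryCount [3,1,2]` is 3.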
Is there a general definition of $n$ from which these cases derive? I assumed it was the number of bits encoding the input/output, and asked this question, but it looks like that's wrong.
Moreover, I've encoded all computable real numbers in Haskell like this:
```haskell
data Computable = Exact Rational | Inexact (Word -> Integer)
```
The function wrapped by Inexact is defined, for the real number $x$ it represents, as (in lambda-calculus notation):
$$ \text{Inexact}^{-1}(x) = \lambda k. \lfloor10^k \times x \rceil $$
So Inexact labels every Cauchy sequence whose limit is a computable number, where the sequence's entries are multiplied by $10^k$. This makes the number of bits encoded by Computable... well... infinite.
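To see the representation in action, here is a sketch (not part of the question's code; `approx`, `sqrt2`, and `isqrt` are names I'm introducing) that builds $\sqrt 2$ as an Inexact value and queries it at any precision:

```haskell
import Data.Ratio ((%))

-- The question's type, repeated so this sketch is self-contained.
data Computable = Exact Rational | Inexact (Word -> Integer)

-- Query an approximation: approx x k is roughly 10^k * x, rounded to an integer.
approx :: Computable -> Word -> Integer
approx (Exact q)   k = round (q * 10 ^ k)
approx (Inexact f) k = f k

-- sqrt 2 as an Inexact value: the integer square root of 2 * 10^(2k)
-- is floor (10^k * sqrt 2), which agrees with the rounding in the
-- definition above except when the fractional part is at least 1/2.
sqrt2 :: Computable
sqrt2 = Inexact (\k -> isqrt (2 * 10 ^ (2 * k)))
  where
    -- Integer square root by Newton's method.
    isqrt :: Integer -> Integer
    isqrt n
      | n < 2     = n
      | otherwise = go n
      where
        go x =
          let y = (x + n `div` x) `div` 2
          in if y >= x then x else go y
```

For instance, `approx sqrt2 3` yields 1414 (i.e. 1.414), and `approx (Exact (1 % 3)) 2` yields 33.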
So how is $n$ really defined?
Edit: A more elaborate restatement of the question: in the addition algorithm, $n$ is defined as the number of bits of the inputs, rather than the inputs' values themselves. I expected this to be the general definition, but the Computable type above clearly contradicts this idea.
Usually $n$ is the number of input elements, where an input element is atomic in the sense of the computational model, i.e. for which constant-time operations are available.
For general complexity evaluations, $n$ can be chosen freely. For instance, you might discuss the complexity of a sorting algorithm in terms of the initial number of inversions.
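As a sketch of that parameter (the name `inversions` and the naive $O(n^2)$ counter are mine, for illustration only): an inversion is a pair of positions $i < j$ whose elements are out of order, and insertion sort's running time can be expressed as $O(n + \text{inversions})$.

```haskell
-- Hypothetical inversions counter: the number of out-of-order pairs
-- in a list. Naive O(n^2) enumeration, fine for illustration.
inversions :: Ord a => [a] -> Int
inversions xs =
  length [ () | (i, x) <- zip [0 ..] xs
              , (j, y) <- zip [0 ..] xs
              , i < (j :: Int)
              , x > y ]
```

A sorted list has 0 inversions; `[3,1,2]` has 2 (the pairs (3,1) and (3,2)).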
Note that the parameter(s) need not even be counts of something. For instance, you could evaluate the cost of a hashing strategy in terms of the probability of a collision.