Let $L$ be any Turing-complete language.
Let $d_L(n)$ be the number of distinct algorithms expressible in $L$ using at most $n$ bits.
I'm not sure how to properly define "distinct algorithm", but broadly it's what you would expect: multiple halting programs returning the same value are considered equivalent to one another, as are cyclic programs stepping through the same sequence, as are runaway programs exhibiting identical growth.
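To make the counting concrete, here is a minimal sketch of computing $d_L(n)$ by brute force in a toy language of my own invention (decidedly not Turing-complete — every program halts, so "distinct algorithm" collapses to "distinct output", sidestepping the equivalence problem above):

```python
from itertools import product

def run(bits):
    # Toy "language": every bitstring is a halting program over one counter.
    # 0 -> increment the counter, 1 -> double it. Behavior = final value.
    x = 0
    for b in bits:
        x = x + 1 if b == 0 else 2 * x
    return x

def d(n):
    # d(n): number of distinct behaviors over all programs of length <= n,
    # including the empty program (which leaves the counter at 0).
    behaviors = set()
    for length in range(n + 1):
        for bits in product((0, 1), repeat=length):
            behaviors.add(run(bits))
    return len(behaviors)
```

For a genuinely Turing-complete $L$ the analogous count is of course not computable, since behavioral equivalence is undecidable; this is only meant to pin down what $d_L(n)$ is counting.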
I assert that $$\frac{d_L(n)}{D(n)} \to 1\qquad\text{as }n \to \infty,$$
where $D$ is some unknown but very real function. I would imagine $D$ is nearly or completely uncomputable, and perhaps monotonically increasing with $n$. Most importantly, I assert this holds regardless of choice of $L$, in contrast to $\Omega_L$ and related functions.
The short version of my reasoning for this claim is that it seems to be an immediate consequence of the no-free-lunch theorem, which (as I understand it) states that over a uniform distribution of all possible binary strings, no compression algorithm outperforms any other. My understanding is that this is equivalent or similar to the principle that there is no such thing as an ideal optimizing compiler (the "full-employment theorem").
In this case, since any candidate language $L$ is aggregating its results over all possible binary strings up to length $n$, no one language should be able to come out ahead in terms of economy of representation. This should also be apparent by a simple counting argument.
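For what it's worth, the counting argument gives the same ceiling for every language, since each distinct behavior needs at least one description among the binary strings of length at most $n$:

```latex
d_L(n) \;\le\; \sum_{k=0}^{n} 2^k \;=\; 2^{n+1} - 1
\qquad \text{for every choice of } L.
```

This bounds every language identically, but of course says nothing yet about the counts agreeing with each other.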
This means that, in general, any choice of $L$ will encode the same total number of bits after running it over all $2^{n+1}-1$ binary strings of length at most $n$. However, I suspect this only holds exactly when the encoding is perfectly efficient, as for example with universal prefix-free languages.
Even if I've gotten that wrong, thanks to Kolmogorov invariance there will be some constant $c$ (given a finite set of languages) such that $$d_{L_1}(n-c) \leq d_{L_2}(n) \leq d_{L_1}(n+c)$$ for any choice of $L_1,L_2$: any behavior describable in at most $n$ bits in one language is describable in at most $n+c$ bits in the other. This should provide a bound of $$2^{-c} \leq \frac{d_{L_2}(n)}{d_{L_1}(n)} \leq 2^c.$$ Since $c$ is fixed while $n$ grows without bound, I would expect its effect to become negligible and the ratio to converge to $1$ in the long run, which would imply that the distinct-description counts of different languages do indeed all approach some single curve $D(n)$ in the limit.
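To spell out how I get from the sandwich to the ratio bound (this step is an assumption on my part, not something invariance gives directly): suppose each extra bit at most doubles the count, i.e. $d_L(m+1) \le 2\,d_L(m)$, which holds whenever the count of new behaviors per bit never exceeds the count already accumulated. Iterating $c$ times gives

```latex
d_{L_2}(n) \;\le\; d_{L_1}(n+c) \;\le\; 2^{c}\, d_{L_1}(n),
\qquad
d_{L_2}(n) \;\ge\; d_{L_1}(n-c) \;\ge\; 2^{-c}\, d_{L_1}(n),
```

which together yield $2^{-c} \le d_{L_2}(n)/d_{L_1}(n) \le 2^c$.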
While I can't yet back this argument up formally, I see only one potential issue with it: $D$ may depend on the set of languages being compared. I'm not clear on whether that is likely to be true, but even if it is, so long as the set of languages doesn't grow without bound, the distinct-description proportions should still converge to a single curve.
My question: Does this sound generally right, or am I off track?
If right, I assume it's a known result and would appreciate a keyword to hunt for. If not, where did I go awry or what is unclear?