Algorithm related to partitions


I hope this is not a foolish question...

Define $T(n,k)$ = the number of partitions of $n$ whose greatest part is $k$, for $1 \le k \le n$. The triangle for $n = 1, \dots, 10$ looks like:

1: [1]

2: [1, 1]

3: [1, 1, 1]

4: [1, 2, 1, 1]

5: [1, 2, 2, 1, 1]

6: [1, 3, 3, 2, 1, 1]

7: [1, 3, 4, 3, 2, 1, 1]

8: [1, 4, 5, 5, 3, 2, 1, 1]

9: [1, 4, 7, 6, 5, 3, 2, 1, 1]

10: [1, 5, 8, 9, 7, 5, 3, 2, 1, 1]

(See oeis.org/A008284.)

I have coded an algorithm that builds the triangle above in ascending order. I have searched for a similar algorithm without success: everything I find uses recursion to build the triangle in descending order. I have also looked at Jerome Kelleher's ruleAsc algorithm, which enumerates all partitions of $n$ in ascending order, but my algorithm does not require such enumeration. Is anyone aware of another algorithm that builds this triangle in ascending order?

Best answer:

The definition of "algorithm" is not completely precise, so there is sometimes ambiguity as to whether two computer programs (or other processes) implement the same algorithm or related but distinct ones. For this reason, I think your question is rather hard to pin down, and I'm not sure that a precise answer can be given. I will attempt to explain some general background, hoping to pitch it at the right level so that it answers the question behind the question and gives you the insight you want.

Every recursive function can be written as an iterative loop, and every iterative loop can be written as a recursive function. (If you've studied extremely simple models of computation, this is evident from the equivalence of the Turing machine and the lambda calculus). The most general transformation from a recursive function to an iterative loop is to use a stack data structure to essentially simulate the call stack. However, there is another style of transformation from recursive function to iterative loop called dynamic programming. The canonical illustration is with the Fibonacci numbers. We start with a recursive definition:

fib(n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}

This runs extremely slowly: calling fib(1000) calls fib(999) (which calls fib(998), ...) and then calls fib(998) again, so the total number of calls is exponential in n.
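To make the blow-up concrete, here is a small Python translation of the pseudocode above (my sketch, not part of the original answer) instrumented with a call counter:

```python
calls = 0

def fib(n):
    # Naive recursion, as in the pseudocode; count every invocation.
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(calls)  # 177 invocations just for n = 10; the count itself grows like fib(n)
```

The invocation count satisfies calls(n) = calls(n-1) + calls(n-2) + 1, which is why it explodes at the same exponential rate as the Fibonacci numbers themselves.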

An intermediate step is to memoise the function, or cache its results.

fibcache = { 0 => 0 , 1 => 1 };

fib(n) {
    if (n in fibcache) return fibcache[n];

    result = fib(n - 1) + fib(n - 2);
    fibcache[n] = result;
    return result;
}

This now performs each sum only once, because thereafter the result is cached. That is an exponential speedup, to time linear in n. The dynamic programming step is to invert the call sequence and build the cache from the bottom up.

fib(n) {
    fibcache = { 0 => 0 , 1 => 1 };
    i = 2;
    while (i <= n) {
        fibcache[i] = fibcache[i - 1] + fibcache[i - 2];
        i = i + 1;
    }
    return fibcache[n];
}

Obviously with the Fibonacci numbers you can go further: ditch the cache and just keep the previous two values; or use a matrix power to get a further exponential speed-up. But the dynamic programming step is the point of this particular exposition.
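The first of those refinements might look like this in Python (again my sketch, assuming the usual fib(0) = 0, fib(1) = 1 convention):

```python
def fib(n):
    # Keep only the previous two values instead of a full cache:
    # same bottom-up order, O(1) space instead of O(n).
    a, b = 0, 1  # fib(0), fib(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```

The loop body is exactly the cache update from the dynamic programming version; we have merely observed that entries older than two steps are never read again.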

What you appear to be describing is precisely a dynamic programming iterative formulation of the recursive function

T(n, k) {
    if (k < 1 or k > n) return 0;
    if (k == 1 or k == n) return 1;
    return T(n - 1, k - 1) + T(n - k, k);
}
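For concreteness, here is one possible bottom-up rendering of that recurrence in Python. The function name and array layout are mine, not anything stated in the question; it fills each row from the rows before it, which is the dynamic programming inversion described above:

```python
def partition_triangle(n_max):
    # T[n][k] = number of partitions of n whose greatest part is k.
    # 1-indexed; unused entries (k > n) stay 0, which also serves as
    # the out-of-range base case of the recurrence.
    T = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            if k == 1 or k == n:
                T[n][k] = 1
            else:
                # Partitions with greatest part k split into those whose
                # second part is also k (remove one k: T(n-k, k)) and the
                # rest (subtract 1 from every part: T(n-1, k-1)).
                T[n][k] = T[n - 1][k - 1] + T[n - k][k]
    return T
```

Calling `partition_triangle(10)` reproduces the rows listed in the question, e.g. row 10 is [1, 5, 8, 9, 7, 5, 3, 2, 1, 1].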

If your question is whether anyone has previously implemented such a thing, the answer is yes. For example, I could point you at an implementation I wrote a bit over three years ago, in a language which you've almost certainly not heard of and probably shouldn't bother learning. But the recurrence has been known for a long time (it is presented as folklore in Hansraj Gupta, "Partitions - A Survey", Journal of Research of the National Bureau of Standards - B. Mathematical Sciences, Vol. 74B, No. 1, January-March 1970; see section 4), and it was probably implemented by dynamic programming on paper before computers even existed. I wouldn't bet against Euler having already derived and applied it, again in "descending order".