Is there a better way to do this?
The question, as it was asked of me, was to create an algorithm that counts the total number of ways an integer N can be partitioned into parts of size 6 or less. The order of the parts matters, so [6, 6, 3] is counted in addition to [6, 3, 6].
I am uneducated in the field, but I eventually realized that the problem was more about permutations than about combinations. I assumed there is no rule about the total number of parts k, and that each part must be positive. My idea was to pick a first part, find the permutations of the remainder, subtract 1, and repeat.
[4]
[3, 1]
[2, 2] [2, 1, 1]
[1, 3] [1, 2, 1] [1, 1, 2] [1, 1, 1, 1]
$f(4) = f(0) + f(1) + f(2) + f(3)$
Sweet. I assume a more experienced mathematician would have predicted the $f(n)=2f(n-1)$ pattern, though. Somewhat awesomely, this means the total number of ordered partitions of $n$ is $2^{n-1}$. This only works up to the part size limit of 6, though:
- At $f(7,6)$, [7] must be dropped.
- At $f(8,6)$, [8] and [7, *] must be dropped, plus now the loss from $f(7)$.
I wrote out the pattern.
f(6,6) = f(0) + f(1) + f(2) + f(3) + f(4) + f(5)
f(7,6) = f(1) + f(2) + f(3) + f(4) + f(5) + f(6)
f(8,6) = f(2) + f(3) + f(4) + f(5) + f(6) + f(7,6)
f(9,6) = f(3) + f(4) + f(5) + f(6) + f(7,6) + f(8,6)
Once $N>m$, then $f(N,m)=2f(N-1,m)-f(N-m-1,m)$.
In case it helps explain my problem, a working implementation of this is:
```javascript
const N = 9 // target integer
const M = 6 // maximum part size

// P[i] holds f(i, M): the number of ordered partitions of i into parts of size at most M.
const P = [1]
for (let i = 1; i <= N; i++) {
  P[i] = i > M
    ? 2 * P[i - 1] - P[i - M - 1] // the recurrence derived above
    : Math.pow(2, i - 1)          // below the limit, f(i) = 2^(i-1)
}
console.log(P[N]) // 248
```
Is there a way to perform this calculation without the recursion/iteration? Web searches for this specific problem have turned up nothing, and I do not know enough about combinatorics to apply it. This is my first question here--I hope I was eloquent enough.
Let $k$ be the fixed maximum part size.
Let $a_n$ be the number of ways to write $n$ as a sum of positive integers at most $k$ (the order matters).
As you noted we can get a recursion for $a_n$ by classifying with respect to the size of the first summand.
We get $a_n=a_{n-1}+\dots+a_{n-k}$ (we let $a_0=1$ and $a_j=0$ for negative $j$). This recursion evidently gives us a method to calculate $a_n$ in time $\mathcal O(nk)$.
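As a sketch (the function name is mine, not from the question), the $\mathcal O(nk)$ recursion can be implemented directly:

```javascript
// Count ordered partitions (compositions) of n into parts of size at most k,
// using a_n = a_{n-1} + ... + a_{n-k}, with a_0 = 1 and a_j = 0 for j < 0.
function compositionsAtMost(n, k) {
  const a = [1]
  for (let i = 1; i <= n; i++) {
    let sum = 0
    // Sum the previous k values (fewer when i < k).
    for (let j = 1; j <= k && j <= i; j++) {
      sum += a[i - j]
    }
    a[i] = sum
  }
  return a[n]
}
```

For the question's parameters this gives `compositionsAtMost(9, 6) === 248`, matching the simpler recurrence from the question.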
We can make this faster. Notice that the recursion matrix is the $k\times k$ companion matrix (first row all ones, then a shifted identity below) $$A=\begin{pmatrix} 1 & 1 & 1 & \dots & 1 & 1\\ 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0\\ \vdots& \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \\ \end{pmatrix}$$
We clearly have that $A\begin{pmatrix} a_j\\ a_{j-1}\\ a_{j-2}\\ \vdots \\ a_{j-k+1} \\ \end{pmatrix}=\begin{pmatrix} a_{j+1}\\ a_{j}\\ a_{j-1}\\ \vdots \\ a_{j-k+2} \\ \end{pmatrix}$.
Hence we have $A^n\begin{pmatrix} 1\\ 0\\ 0\\ \vdots \\ 0 \\ \end{pmatrix}=\begin{pmatrix} a_n\\ a_{n-1}\\ a_{n-2}\\ \vdots \\ a_{n-k+1} \\ \end{pmatrix}$.
This gives us a method to calculate $a_n$ in time $\mathcal O(\log(n)\,k^3)$ by using logarithmic exponentiation (we can replace the exponent $3$ with a smaller constant by multiplying the matrices more efficiently).
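A sketch of the logarithmic-exponentiation approach, with the companion matrix built as described above (function names are my own):

```javascript
// Multiply two k-by-k matrices of plain numbers.
function matMul(X, Y) {
  const k = X.length
  const Z = Array.from({ length: k }, () => new Array(k).fill(0))
  for (let i = 0; i < k; i++)
    for (let l = 0; l < k; l++)
      for (let j = 0; j < k; j++)
        Z[i][j] += X[i][l] * Y[l][j]
  return Z
}

// Raise a square matrix to the n-th power by repeated squaring.
function matPow(A, n) {
  const k = A.length
  // Start from the identity matrix.
  let R = Array.from({ length: k }, (_, i) =>
    Array.from({ length: k }, (_, j) => (i === j ? 1 : 0)))
  let B = A
  while (n > 0) {
    if (n & 1) R = matMul(R, B)
    B = matMul(B, B)
    n >>= 1
  }
  return R
}

// a_n via the companion matrix: a_n is the first entry of A^n e_1,
// i.e. the top-left entry of A^n, since e_1 = (1, 0, ..., 0)^T.
function compositionsByMatrix(n, k) {
  if (n === 0) return 1
  const A = Array.from({ length: k }, (_, i) =>
    Array.from({ length: k }, (_, j) =>
      i === 0 ? 1 : (j === i - 1 ? 1 : 0)))
  return matPow(A, n)[0][0]
}
```

Note that the counts grow like $2^{n-1}$, so for large $n$ the plain numbers here would need to be swapped for `BigInt` (or the arithmetic done modulo some prime) to avoid losing precision.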