I am having difficulty modeling a combinatorial analysis problem. I want to isolate some general formula to count how many valid arrangements exist in a given instance. Can anyone help me?
The problem is simple to understand: it is a generalization of the question "How many permutations can be formed using $n$ items, for example 1 2 3 4 5?", whose answer is $5!$.
The difference is that each position may only use a certain set of items, and no item may be repeated, e.g.:
[1, 2, 3] # I can only use these for first place
[2, 3, 4] # I can only use these for second place
[3, 4, 5] # I can only use these for third place
Examples of valid arrangements: (1, 2, 3) or (3, 2, 5). Example of an invalid arrangement: (2, 2, 3).
I would like to find some closed-form calculation for this problem. I have solved it using dynamic programming, but I believe it is possible to isolate some mathematical method.
The input is defined by x and y: x is the number of items in each group, and y is the number of groups. The first group is [1, ..., x], and in general group j is [1 + j, ..., x + j], where j runs from 0 to (y - 1), e.g.:
x = 3, y = 4
# represents the count of:
[1, 2, 3]
[2, 3, 4]
[3, 4, 5]
[4, 5, 6]
x = 5, y = 3
# represents the count of:
[1, 2, 3, 4, 5]
[2, 3, 4, 5, 6]
[3, 4, 5, 6, 7]
For example, the result of the case x = 3, y = 2 would be 7:
[1, 2, 3] # Valid numbers for first place
[2, 3, 4] # Valid numbers for second place
# the valid arrangements are (1, 2), (1, 3), (1, 4),
# (2, 3), (2, 4), (3, 2), (3, 4), so the answer is 7.
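For reference, the small cases above can be checked by brute force. A minimal Python sketch (the function name `count_arrangements` is just illustrative):

```python
from itertools import product

def count_arrangements(groups):
    """Count tuples taking one element per group with no repeats (brute force)."""
    return sum(len(set(t)) == len(t) for t in product(*groups))

# The x = 3, y = 2 example:
print(count_arrangements([[1, 2, 3], [2, 3, 4]]))  # 7
```

This is exponential in the number of groups, which is why I am looking for something better.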
Can someone help me?
This problem is "permutations with restrictions on absolute positions." One way to solve it is with rook polynomials. Another is by computing a $0$-$1$ permanent. Although computing a $0$-$1$ permanent is #P-complete, my own experience is that it is quite reasonable so long as $n$ isn't too big and you take some care in programming it.
Don't do any multiplications: when you expand by minors, you either add the permanent of the minor or you don't. If the coefficient is $0$, you don't compute the permanent of the minor at all. Also, when computing the permanent, either at the top level or recursively, expand along the line (row or column) with the fewest ones.
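A minimal Python sketch of this expansion, under the rules above (only additions, zero entries skipped, always expanding along the line with the fewest ones; the names are mine):

```python
from math import factorial

def permanent01(rows):
    """Permanent of a square 0-1 matrix by expansion along the line
    (row or column) with the fewest ones. Zero entries are skipped,
    so only additions are performed."""
    n = len(rows)
    if n == 0:
        return 1
    # Shortcut: an all-ones k x k matrix has permanent k!.
    if all(all(row) for row in rows):
        return factorial(n)
    row_counts = [sum(r) for r in rows]
    col_counts = [sum(col) for col in zip(*rows)]
    if min(col_counts) < min(row_counts):
        # The permanent is invariant under transposition, so expand
        # along a column by transposing first.
        rows = [list(col) for col in zip(*rows)]
        row_counts = col_counts
    i = row_counts.index(min(row_counts))
    if row_counts[i] == 0:
        return 0  # a line of zeros forces the permanent to zero
    total = 0
    for j, entry in enumerate(rows[i]):
        if entry:  # coefficient 1: add the permanent of the minor
            minor = [r[:j] + r[j + 1:] for k, r in enumerate(rows) if k != i]
            total += permanent01(minor)
    return total
```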
EDIT
The Wikipedia article mentions Ryser's method, which is faster than the approach I outlined. I had forgotten about this method. If I recall correctly, though, my approach is fast enough for reasonable $n$. Ryser's method isn't very hard to implement, if I recall correctly, but I'd definitely recommend doing the simple thing first. If it's fast enough, fine. If not, you can use it to generate test results for your implementation of Ryser's method.
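For comparison, Ryser's inclusion-exclusion formula is short to write down. A sketch in Python (still exponential, roughly $O(2^n n^2)$ work in this plain form, versus $n!$ terms in the definition):

```python
def ryser_permanent(rows):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = sum over nonempty column subsets S of
    (-1)^(n - |S|) * product over rows of (sum of entries in S)."""
    n = len(rows)
    total = 0
    for subset in range(1, 1 << n):  # bitmask over column subsets
        prod = 1
        for row in rows:
            s = sum(v for j, v in enumerate(row) if subset >> j & 1)
            prod *= s
            if prod == 0:
                break  # a zero row sum kills this subset's term
        bits = bin(subset).count("1")
        total += (-1) ** (n - bits) * prod
    return total
```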
EDIT
In response to your question, I really recommend trying permanents before rook polynomials. They are easier to understand, and to program. I had a lot of fun many years ago writing a C program to count permutations with restrictions on absolute positions using rook polynomials, but part of the fun was that I achieved a speedup of between $100$ and $1000$ over the course of the development. I no longer have the program, and my memory of it is very dim, so whatever I described would be at least $100$ times slower than it ought to be. Also, I think it's unlikely that rook polynomials are faster than Ryser's method. I don't see how they could be, since ultimately, rook polynomials just give a way of computing a $0-1$ permanent, and Ryser's method is supposed to be the fastest known method to do that.
As to how to use permanents for this problem, there are examples in the Wikipedia article, but let me show you how to do it for the problem you give:
We have the matrix $$\begin{bmatrix} 1&1&1&0&0\\0&1&1&1&0\\0&0&1&1&1\\1&1&1&1&1\\1&1&1&1&1 \end{bmatrix}$$
Each row $r$ represents a position and each column $c$ represents an element; there is a one at $a_{r,c}$ if and only if element $c$ can be used in position $r$. The last two rows are all ones because there are five elements but only three restricted positions: these dummy positions accept any element, so the permanent counts each valid arrangement once for every ordering of the two leftover elements, and the answer is the permanent divided by $2!$. Computing a permanent is just like computing a determinant, except that there are no minus signs; because permanents lack the nice algebraic properties of determinants, the computation is much harder.
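Putting this together for your example, a sketch in Python (the naive permanent here is for illustration only; the function names are mine):

```python
from itertools import permutations
from math import factorial

def permanent(rows):
    """Naive permanent over all n! permutations; fine for small matrices."""
    n = len(rows)
    return sum(
        all(rows[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

def count_by_permanent(groups):
    """Count valid arrangements by building the 0-1 restriction matrix,
    padding with unrestricted dummy rows, and dividing out the overcount."""
    groups = [set(g) for g in groups]
    elements = sorted(set().union(*groups))
    rows = [[1 if e in g else 0 for e in elements] for g in groups]
    dummy = len(elements) - len(groups)  # unrestricted padding positions
    rows += [[1] * len(elements)] * dummy
    # Each arrangement is counted once per ordering of the leftover
    # elements in the dummy positions, i.e. dummy! times.
    return permanent(rows) // factorial(dummy)

print(count_by_permanent([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 14
```

For the matrix above, the permanent is $28$ and there are $2! = 2$ dummy positions, giving $14$ valid arrangements.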
One thing I forgot to mention about computing the permanent: if, when you are looking for the line with the smallest number of ones in a $k\times k$ minor, the smallest count turns out to be $k$, then every element of the minor is $1$ and the permanent is simply $k!$.