I am running a simulation in R of a queueing system. One problem is that the running time is dominated by a read operation on a matrix that has to be performed at every event of my event-based simulation. In this matrix I store the job/server compatibilities; due to the nature of the model it has size $\binom{N}{d} \times N$, where $d$ is fixed and I am especially interested in the model behaviour as $N$ grows large. As you can see this is a problem, since the number of rows is $O(N^d)$. The matrix itself is very sparse: each row contains exactly $d$ entries `TRUE` and $N-d$ entries `FALSE` (it is a logical matrix).
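For concreteness, here is one way such a matrix can be constructed (assuming the rows enumerate all size-$d$ subsets of the $N$ servers, which matches the $\binom{N}{d}$ row count; small illustrative values of $N$ and $d$, and the variable names are mine):

```r
# Illustrative construction of the compatibility matrix described above.
# Assumes each row corresponds to one size-d subset of the N servers.
N <- 6
d <- 2
subsets <- t(combn(N, d))                  # choose(N, d) x d matrix of server indices
compat <- matrix(FALSE, nrow = nrow(subsets), ncol = N)
for (i in seq_len(nrow(subsets))) {
  compat[i, subsets[i, ]] <- TRUE          # exactly d TRUE entries per row
}
dim(compat)                                # choose(6, 2) = 15 rows, 6 columns
```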
As a result, the running time of my simulation blows up with $N$ (the row count is $O(N^d)$ for fixed $d$) because of the read operations on this enormous matrix. I have tried various methods to speed it up, but I'm not sure where to continue. The matrix I currently use is a sparse matrix, created at the start of the simulation via

`Matrix(compatibilityMatrix, sparse = TRUE)`.
I know that casting a dense matrix to sparse is not very efficient, but this is a one-time operation at the start of the simulation and does not affect the running time. During the simulation I need to read random rows of this matrix, of which there are $\binom{N}{d}$. I have tried transposing the matrix and reading columns instead, in case the sparse matrix is stored in column-major order, but to no avail. Does anyone know a faster method to read rows from such a matrix?
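To make the access pattern concrete, a minimal sketch of the transposed-read idea (illustrative small sizes and variable names; the `Matrix` package stores sparse matrices column-compressed, so a row slice has to scan the index structure of every column, while a column slice of the transpose touches one column only):

```r
library(Matrix)

# Sketch of the row-read vs. transposed column-read comparison.
# Small illustrative sizes; subsets/dense mimic the compatibility structure:
# each row of `dense` has exactly d TRUE entries.
N <- 12
d <- 3
subsets <- t(combn(N, d))
dense <- matrix(FALSE, nrow = nrow(subsets), ncol = N)
for (i in seq_len(nrow(subsets))) dense[i, subsets[i, ]] <- TRUE

M  <- Matrix(dense, sparse = TRUE)   # column-compressed sparse logical matrix
Mt <- t(M)                           # transposed copy: former rows are now columns

r <- sample.int(nrow(M), 1)          # a random row, as read at each event
rowRead <- as.logical(M[r, ])        # row slice of the original
colRead <- as.logical(Mt[, r])       # the same entries read as a column
```

Both reads return the same $N$-vector with $d$ `TRUE` entries; the question is which one is faster at scale.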