When computing the Atanassov-Halton sequence, a set of "admissible integers" must be chosen. These are defined as follows:
Let $p_1, \ldots, p_s$ be distinct primes. The integers $k_1,\ldots, k_s$ are called admissible for $p_1, \ldots, p_s$ if $p_i \nmid k_i$ and, for each set of integers $m_1,\ldots,m_s$ with $p_i \nmid m_i$, there exists a set of integers $\alpha_1, \ldots, \alpha_s$ satisfying the congruences $$ k_i^{\alpha_i} \prod_{\substack{j=1\\j\ne i}}^{s} p_{j}^{\alpha_{j}} \equiv m_{i} \pmod{p_{i}}, \qquad i = 1, \ldots, s. $$
In "Generating and Testing the Modified Halton Sequences", Atanassov states that "it is easy to see that [the admissible integers can be computed] by a diagonalization procedure similar to Gaussian elimination." Regrettably, it is not easy for me to see how this is achieved. Could anyone help shed some light on this and suggest integer Gaussian elimination routines to help calculate the admissible integers?
The solution is not unique, but a set of admissible integers for the first few primes is given in haltondat.h, inside the archive halton.tar.gz.
The way to turn the problem into linear algebra is to use primitive roots.
Let $g_i$ be a primitive root modulo $p_i$. Then for each $j \ne i$ we can write $p_j \equiv g_i^{a_{ij}} \pmod {p_i}$ for some exponent $a_{ij}$, and $k_i \equiv g_i^{r_i} \pmod{p_i}$ for some to-be-determined $r_i$. (Finding $r_1, \dots, r_s$ is equivalent to finding the admissible integers.)
To solve the system of congruences for $\alpha_1, \dots, \alpha_s$, write each $m_i$ as $g_i^{b_i} \bmod p_i$ and rephrase everything in terms of $g_i$: we have $$ k_i^{\alpha_i} \prod_{j \ne i} p_j^{\alpha_j} \equiv g_i^{r_i \alpha_i + \sum_{j \ne i} a_{ij} \alpha_j} \pmod {p_i} $$ and so we want to solve $$ g_i^{r_i \alpha_i + \sum_{j \ne i} a_{ij} \alpha_j} \equiv g_i^{b_i} \pmod{p_i} \iff r_i \alpha_i + \sum_{j \ne i} a_{ij} \alpha_j \equiv b_i \pmod{p_i - 1}. $$ Now we have a linear system of equations, which we can write in matrix form as $$ \begin{bmatrix} r_1 & a_{12} & \cdots & a_{1s} \\ a_{21} & r_2 & \cdots & a_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ a_{s1} & a_{s2} & \cdots & r_s \end{bmatrix} \begin{bmatrix}\alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_s\end{bmatrix} = \begin{bmatrix}b_1 \\ b_2 \\ \vdots \\ b_s\end{bmatrix} $$ except that the $i^{\text{th}}$ row is actually an equality modulo $p_i - 1$.
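For small primes the off-diagonal entries $a_{ij}$, and the primitive roots themselves, are easy to compute by exhaustive search. Here is a minimal Python sketch (the helper names `primitive_root`, `dlog`, and `offdiag_matrix` are mine):

```python
def primitive_root(p):
    """Smallest g whose powers run through every nonzero residue mod p (brute force)."""
    for g in range(2, p):
        if len({pow(g, e, p) for e in range(1, p)}) == p - 1:
            return g

def dlog(target, g, p):
    """Smallest e with g**e congruent to target (mod p), by exhaustive search."""
    t = target % p
    for e in range(p - 1):
        if pow(g, e, p) == t:
            return e
    raise ValueError("no solution: is g a primitive root mod p?")

def offdiag_matrix(primes):
    """a_ij = discrete log of p_j base g_i modulo p_i; the diagonal r_i stays free (None)."""
    g = [primitive_root(p) for p in primes]
    s = len(primes)
    A = [[None if i == j else dlog(primes[j], g[i], primes[i]) for j in range(s)]
         for i in range(s)]
    return g, A

g, A = offdiag_matrix([7, 11, 13])   # the primes used in the worked example below
print(g)   # [3, 2, 2]
print(A)   # [[None, 4, 3], [7, None, 1], [11, 7, None]]
```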
One way to guarantee that this system has a solution, no matter what the moduli are and no matter what values $b_1, \dots, b_s$ we are given, is to make sure that this matrix, viewed as an integer matrix, has determinant $\pm 1$, in which case it has an integer inverse. To achieve this, we are free to choose $r_1, \dots, r_s$ however we like; the other entries of the matrix are determined by $p_1, \dots, p_s$ (and our choice of primitive roots).
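To illustrate why determinant $\pm 1$ is enough, here is a toy $2\times 2$ example of my own (matrix, right-hand side, and moduli all made up): the inverse is again an integer matrix, so $\alpha = M^{-1}b$ satisfies $M\alpha = b$ exactly over the integers, and therefore row by row modulo whatever modulus each row carries.

```python
M = [[1, 2],
     [1, 3]]                                   # determinant 1
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] // det, -M[0][1] // det],    # adjugate / det is still integral
        [-M[1][0] // det,  M[0][0] // det]]
b = [5, 9]
alpha = [Minv[i][0] * b[0] + Minv[i][1] * b[1] for i in range(2)]
moduli = [6, 10]                               # e.g. p_1 - 1 = 6 and p_2 - 1 = 10
for i, n in enumerate(moduli):
    lhs = sum(M[i][j] * alpha[j] for j in range(2))
    print(lhs % n == b[i] % n)                 # True for every row, whatever the moduli
```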
So here is the "procedure similar to Gaussian elimination": eliminate column by column, and at step $i$ use the freedom in choosing $r_i$ to make the (already-modified) pivot equal to $1$. The final upper-triangular matrix then has ones on the diagonal, so its determinant is $1$; since row operations do not change the determinant, the original matrix has determinant $1$ as well.
Here is an example with $p_1 = 7, p_2 = 11, p_3 = 13$. We choose the primitive roots $g_1 = 3$, $g_2 = 2$, $g_3 = 2$. Since $11 \equiv 3^4 \pmod 7$, $13 \equiv 3^3 \pmod 7$, $7 \equiv 2^7 \pmod{11}$, $13 \equiv 2^1 \pmod{11}$, $7 \equiv 2^{11} \pmod{13}$, and $11 \equiv 2^7 \pmod{13}$, the matrix is equal to $$ \begin{bmatrix} r_1 & 4 & 3 \\ 7 & r_2 & 1 \\ 11 & 7 & r_3\end{bmatrix}. $$ Set $r_1 = 1$ and row-reduce by subtracting multiples of the first row from the others, getting $$ \begin{bmatrix} 1 & 4 & 3 \\ 0 & r_2-28 & -20 \\ 0 & -37 & r_3-33\end{bmatrix}. $$ Set $r_2 = 29$, so that the pivot $r_2 - 28$ equals $1$, and row-reduce by adding $37$ times the second row to the third, getting $$ \begin{bmatrix} 1 & 4 & 3 \\ 0 & 1 & -20 \\ 0 & 0 & r_3-773\end{bmatrix}. $$ Set $r_3 = 774$, so that the last pivot is also $1$, completing the algorithm.
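These steps are easy to mechanize. Here is a self-contained Python sketch of the elimination for this example (variable names are mine):

```python
primes = [7, 11, 13]
g = [3, 2, 2]                 # primitive roots chosen above
M = [[0, 4, 3],               # off-diagonal entries a_ij from the example;
     [7, 0, 1],               # each diagonal slot holds the running offset that
     [11, 7, 0]]              # r_i will have to cancel (it starts at 0)
s = len(primes)
r = [0] * s
for i in range(s):
    r[i] = 1 - M[i][i]        # choose r_i so the pivot r_i + offset equals 1
    M[i][i] = 1
    for k in range(i + 1, s): # clear the entries below the pivot
        factor = M[k][i]
        for j in range(s):
            M[k][j] -= factor * M[i][j]
print(r)                      # [1, 29, 774], as in the worked example
```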
We can reduce $r_i$ modulo $p_i-1$, since the $i^{\text{th}}$ row is only ever used modulo $p_i - 1$; so actually $r_1 = 1$, $r_2 = 9$, and $r_3 = 6$ are just as good as $r_1 = 1, r_2 = 29$, and $r_3 = 774$. In either case, we get $k_1 = 3^{1} \bmod 7 = 3$, $k_2 = 2^9 \bmod 11 = 6$, and $k_3 = 2^6 \bmod 13 = 12$.
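In code, the last step is just reduction and modular exponentiation (again a sketch, continuing the example):

```python
primes, g, r = [7, 11, 13], [3, 2, 2], [1, 29, 774]
k = [pow(g[i], r[i] % (primes[i] - 1), primes[i]) for i in range(len(primes))]
print([r[i] % (primes[i] - 1) for i in range(len(primes))])   # [1, 9, 6]
print(k)                                                       # [3, 6, 12]
```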