Fastest way to solve congruency equation


I've developed an equation that solves a problem I'm working on, but the only way I know how to solve it is by incrementally trying values of $n$ from $1 \to \infty$ until I arrive at the solution. I'm hoping someone can point me to some relevant research/theory that finds solutions in a different way.

The equation is of the following form, where all values are positive integers and $a$ and $b$ are constant:

$$a + 2n(b-n) \equiv 0 \pmod{(b-2n)}$$

For example,

$$74003 + 2n(47595 - n) \equiv 0 \pmod{(47595 - 2n)}$$

is solved when $n = 5456$. The values of $a$ and $b$ are chosen such that a solution always exists with $2n \lt b$. I'm only interested in the smallest $n$ that satisfies the equation, not all possible solutions. Any suggestions or reformulations of the equation are welcome.
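For concreteness, here is a minimal Python sketch of the incremental search I'm currently using (the function name is just illustrative):

```python
def smallest_n_bruteforce(a, b):
    """Try n = 1, 2, 3, ... until a + 2n(b-n) is divisible by b - 2n."""
    n = 1
    while b - 2 * n > 0:          # the modulus must stay positive
        if (a + 2 * n * (b - n)) % (b - 2 * n) == 0:
            return n
        n += 1
    return None                   # no solution with 2n < b
```

For the example above, `smallest_n_bruteforce(74003, 47595)` returns `5456`.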

There are 4 answers below.

Accepted answer

I'm adding a new answer instead of updating or appending to my original one since this is quite different based on the new information provided. The answers by Ethan Bolker and Fabio Somenzi involve using

$$a + 2n(b - n) = k(b - 2n) \tag{1}$$

to create a quadratic equation in $n$, i.e.,

$$n^2 - (k + b)n + \frac{bk - a}{2} = 0 \tag{2}$$

which has a discriminant of

$$D = (k + b)^2 - 2(bk - a) = k^2 + b^2 + 2a \tag{3}\label{eq3}$$

The value of $n$ is

$$n = \frac{k + b \pm \sqrt{D}}{2} \tag{4}\label{eq4}$$

The smallest positive integer $n$ comes from taking the minus sign in \eqref{eq4}, where $D$ must be a perfect square less than, but as close as possible to, $(k + b)^2$. (Its square root automatically has the same parity as $k + b$, since $D$ and $(k + b)^2$ differ by the even number $2(bk - a)$.) Thus, from \eqref{eq3}, you want to start with the smallest $k$ such that $bk - a \gt 0$, say

$$k_0 = \left\lfloor \frac{a}{b} \right\rfloor + 1 \tag{5}\label{eq5}$$

This gives

$$b \ge bk_0 - a > 0 \; \Rightarrow \; -2b \le -2(bk_0 - a) \lt 0 \tag{6}\label{eq6}$$

Using this in \eqref{eq3},

$$(k_0 + b)^2 - 2(bk_0 - a) \ge k_0^2 + 2bk_0 + b^2 - 2b \tag{7}\label{eq7}$$

However, note that

$$(k_0 + (b - 1))^2 = k_0^2 + 2bk_0 + b^2 - 2b + (1 - 2k_0) \tag{8}\label{eq8}$$

Thus, the largest perfect square below $(k_0 + b)^2$ is $(k_0 + (b - 1))^2$, which is the first candidate value for $D$. Note that going from $m^2$ to $(m + 1)^2$ means adding $2m + 1$; going on to $(m + 2)^2$ means adding a further $2m + 3$; then adding $2m + 5$ gives $(m + 3)^2$; and so on. In other words, even for very large integers, you can quickly move from one perfect square to the next by adding a value that you increment by $2$ each time. This is generally faster than incrementing a value and then squaring it, as https://agner.org/optimize/optimizing_cpp.pdf indicates in section 14.4, Integer multiplication:

Integer multiplication takes longer time than addition and subtraction (3 -10 clock cycles, depending on the processor).

For very large integers handled with an arbitrary-precision package, multiplication can be slower still. In fact, a very fast method for extremely large operands, running in $O(n \log n)$ time for $n$-digit numbers, was announced only recently; see the April 4, 2019 article Mathematicians Develop New Algorithm for Multiplying Large Numbers.
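As a small illustration of this square-stepping idea (a hypothetical helper, not part of any standard library):

```python
def squares_from(m, count):
    """Generate m^2, (m+1)^2, ..., using only additions: the gap between
    consecutive squares starts at 2m + 1 and itself grows by 2 each step."""
    sq, gap = m * m, 2 * m + 1
    out = []
    for _ in range(count):
        out.append(sq)
        sq += gap                 # next perfect square
        gap += 2                  # next odd gap
    return out

squares_from(10, 4)               # [100, 121, 144, 169]
```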

Starting at $k = k_0$ (from \eqref{eq5}) in \eqref{eq3}, and using \eqref{eq8}, you can quickly determine each next value of $k^2 + b^2 + 2a$ and compare it to the next candidate perfect square, incrementing each as appropriate, until the two values match and \eqref{eq4} gives an integer. This is relatively efficient, and generally considerably faster than doing integer divisions (e.g., for factoring, as in my original answer, or for checking modulo values). For example, https://agner.org/optimize/optimizing_cpp.pdf says in section 14.5, Integer division:

Integer division takes much longer time than addition, subtraction and multiplication (27 - 80 clock cycles for 32-bit integers, depending on the processor).

The exact timings will vary with the size of the integers, the compiler being used, and so on, but I believe integer division will be quite slow in essentially all cases.
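Putting the pieces together, here is a rough Python sketch of the search just described (one initial `isqrt`, then only additions and comparisons in the main loop; the names are my own):

```python
from math import isqrt

def smallest_n(a, b):
    """Smallest n with a + 2n(b-n) ≡ 0 (mod b-2n), via the smallest k
    making the discriminant D = k^2 + b^2 + 2a a perfect square."""
    N = b * b + 2 * a
    k = a // b + 1                # k0: smallest k with b*k - a > 0
    D = k * k + N                 # discriminant for the current k
    s = isqrt(D)                  # running square-root candidate
    s2 = s * s                    # s^2, maintained by additions only
    while True:
        if s2 == D:               # D is a perfect square: n is an integer
            return (k + b - s) // 2
        k += 1
        D += 2 * k - 1            # k^2 increases by 2k - 1
        while s2 < D:             # step to the next perfect square >= D
            s2 += 2 * s + 1
            s += 1
```

For the example in the question, `smallest_n(74003, 47595)` returns `5456` (via $k = 12537$, $\sqrt{D} = 49220$).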

Answer

Possible strategy.

You want to find the smallest $n$ for which there is a $k$ such that $$ a+2n(b-n)=k(b-2n). $$ That is equivalent to $$ n^2 - (k+b)n + \frac{bk-a}{2} = 0 $$ (moving everything to one side and dividing by $2$).

When you solve that equation for $n$ as a function of $a$, $b$ and $k$, the answer will be an integer only if the discriminant $$ (k+b)^2 - 2(bk-a) = k^2 + b^2 + 2a $$ is a perfect square.

It may be easier to test values of $k$ until you get a square than to test values of $n$ in the original form of the problem.
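A quick Python sketch of this $k$-scan, using the quadratic $n^2 - (k+b)n + \frac{bk-a}{2} = 0$ and its discriminant $k^2 + b^2 + 2a$ (names are illustrative):

```python
from math import isqrt

def scan_k(a, b, k_max):
    """Return (k, n) pairs, for k up to k_max, where the discriminant
    k^2 + b^2 + 2a is a perfect square."""
    hits = []
    for k in range(a // b + 1, k_max + 1):   # need b*k - a > 0 for n > 0
        D = k * k + b * b + 2 * a
        s = isqrt(D)
        if s * s == D:
            hits.append((k, (k + b - s) // 2))
    return hits
```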

Answer

The congruence equation to solve for is

$$a + 2n(b-n) \equiv 0 \pmod{(b-2n)} \tag{1}\label{eq1}$$

You can do several manipulations of it to eliminate the $n$ on the left hand side as follows:

\begin{align} a + 2n((b - 2n) + n) & \equiv a + 2n^2 \\ & \equiv 2a + 4n^2 \\ & \equiv 2a + (-2n)^2 \\ & \equiv 2a + ((b - 2n) - b)^2 \\ & \equiv 2a + b^2 \\ & \equiv 0 \pmod{(b-2n)} \tag{2}\label{eq2} \end{align}

Assuming $a$ and $b$ are not too large, there are relatively efficient algorithms to factorize $2a + b^2$, which you can find online (e.g., Stack Overflow's Efficient Prime Factorization for large numbers); you could also use an appropriate library, or your programming language may even have a good one built in. You can then check the factors to find the largest one which is less than $b$ and has the same parity as $b$. (Note that if $b$ is even, the doubling step in the chain above is not reversible, so candidate factors should be verified against the original congruence.) There may also be methods which can quickly find factors close to a given value without factorizing the entire number, but a quick search didn't turn anything up. This approach could be generally faster than checking each value of $n$ in turn, but that depends on things like how efficient your factoring algorithm is, the speed of checking congruences, etc. As such, timing tests may be the only way to determine which method is actually faster in your particular situation.

One advantage of getting all of the factors of $2a + b^2$ is that, although you're only interested in the smallest $n$ now, if you ever want to know all such $n$, then each factor $f$ less than $b$ with the same parity as $b$ gives a solution $n = \frac{b - f}{2}$.
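A rough Python sketch of this factor-based search (plain trial division stands in for a real factoring routine, so it only suits moderate sizes; the final re-check of the original congruence guards the even-$b$ case, where the doubling step above is not reversible):

```python
def smallest_n_by_factoring(a, b):
    """Smallest n via the divisors f = b - 2n of 2a + b^2."""
    N = 2 * a + b * b
    divisors = set()
    d = 1
    while d * d <= N:             # trial division up to sqrt(N)
        if N % d == 0:
            divisors.update((d, N // d))
        d += 1
    # The largest valid f gives the smallest n = (b - f) / 2.
    for f in sorted(divisors, reverse=True):
        if f < b and (b - f) % 2 == 0:
            n = (b - f) // 2
            if (a + 2 * n * (b - n)) % (b - 2 * n) == 0:
                return n
    return None
```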

Update: Based on the OP's comment that "$a$ and $b$ can grow to be arbitrarily large ...", an alternative which may work better is to loop over the moduli $b - 2n$ and check either $a + 2n^2$ or $2a + b^2$ against each, instead of the original $a + 2n(b - n)$. I'm not sure how much, if at all, this improves the speed; my suspicion is not by much. As I stated earlier, it may be difficult to tell by analysis which approach is better; testing the various methods may be the only way to determine, with relatively strong confidence, which works best for this problem.

Answer

This answer should be a comment to @EthanBolker's answer, but is a bit too long for that.

From $a+2n(b-n) = k(b-2n)$ we get

$$ n^2 -(k+b)n + \frac{bk-a}{2} = 0 \enspace. $$

The discriminant of this quadratic is

$$ k^2 + b^2 + 2a \enspace. $$

If $b$ is odd, then $k = \frac{b^2 + 2a - 1}{2}$ makes the discriminant a perfect square, since then $k^2 + b^2 + 2a = k^2 + 2k + 1 = (k+1)^2$; but this is the largest such value of $k$, not the smallest.

Suppose then $k^2 + b^2 + 2a = (k+\delta)^2$. Then

$$ \delta(2k+\delta) = b^2 + 2a \enspace. $$

The smallest $k$ for which $k^2 + b^2 + 2a$ is a perfect square is obtained when $\delta$ is the largest factor of $b^2 + 2a$ not exceeding $\sqrt{b^2 + 2a}$ (with $\delta$ and $\frac{b^2 + 2a}{\delta}$ of the same parity, so that $k$ is an integer). In a way, we have connected to the observation by @JohnOmielan about the usefulness of factoring $b^2 + 2a$.
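A small Python sketch of this search for $\delta$, scanning down from $\lfloor\sqrt{b^2 + 2a}\rfloor$ (the parity condition makes $k$ an integer, and $\delta \lt b$ is needed for a positive $n = \frac{b - \delta}{2}$; names are my own):

```python
from math import isqrt

def smallest_k_and_n(a, b):
    """Largest factor delta of N = b^2 + 2a with delta <= sqrt(N) and
    N/delta - delta even gives the smallest k, and n = (b - delta) / 2."""
    N = b * b + 2 * a
    for delta in range(isqrt(N), 0, -1):
        if N % delta == 0 and (N // delta - delta) % 2 == 0:
            k = (N // delta - delta) // 2
            return k, (b - delta) // 2
    return None
```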