The Linear Congruential Generator used as a basis for Universal Hashing is defined by the equation using parameters $a$, $c$ and $m$:
$$X_{n+1} = (a\cdot X_{n} + c) \mod m$$
with the following conditions (among others, but these are the ones I want to ask about):
$$1 < a\ \mathbf{< m}$$
$$0 < c\ \mathbf{< m}$$
My question is: why are these bounded by $\mathbf{m}$? Does it truly matter that they are smaller than $m$?
It seems to me that there is no reason why this wouldn't work the same even if $a > m$ and/or $c > m$. What am I missing?
The purpose of the bounds is to make sure two different pairs $(a, c)$ don't define the same Linear Congruential Generator. Suppose we use $a, c > m$. Then $$ X_{n+1} = (a \cdot X_n + c) \bmod m = \big((a \bmod m) \cdot X_n + (c \bmod m)\big) \bmod m, $$
and $a \bmod m,\ c \bmod m < m$, so we have found an equivalent LCG with $a, c < m$. You could just as easily use $a > m$ or $c > m$ in an algorithm you program, but there is an equivalent LCG with $a < m$ and $c < m$ that gives all the same values. I imagine using a very large $c$ might cause problems when you actually implement the algorithm on a computer, depending on how the processor/programming language evaluates modular arithmetic.
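To see this equivalence concretely, here is a small sketch (not from the original answer; the parameter values are arbitrary illustrations) comparing an LCG with $a, c > m$ against the reduced parameters $a \bmod m$, $c \bmod m$:

```python
def lcg(a, c, m, seed, n):
    """Generate n values of the sequence X_{k+1} = (a*X_k + c) mod m."""
    values = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        values.append(x)
    return values

m = 97
a, c = 1234, 567  # both deliberately larger than m

seq_big = lcg(a, c, m, seed=1, n=10)
seq_reduced = lcg(a % m, c % m, m, seed=1, n=10)

print(seq_big == seq_reduced)  # prints True
```

Both parameterisations produce identical sequences, which is why restricting $a, c < m$ loses no generality.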
If you like I can give you a proof that $(ab + c) \bmod m = \big((a \bmod m)(b \bmod m) + (c \bmod m)\big) \bmod m$.
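A sketch of that proof (my own filling-in of the offered argument, using only the division algorithm): write $a = q_a m + (a \bmod m)$ and $c = q_c m + (c \bmod m)$ for integers $q_a, q_c$. Then
$$ab + c = q_a m b + (a \bmod m)\,b + q_c m + (c \bmod m) \equiv (a \bmod m)\,b + (c \bmod m) \pmod m,$$
and applying the same decomposition to $b$ gives
$$ab + c \equiv (a \bmod m)(b \bmod m) + (c \bmod m) \pmod m,$$
which is the claimed identity once both sides are reduced modulo $m$.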