I found the following lecture from CS271 at UC Berkeley interesting and was taking a look at some of the examples, namely 3.1 on the first page: Link
Here is a summary of the problem: Alice has an $n$-bit integer $a$ and Bob has an $n$-bit integer $b$. Can Alice send Bob $O(\log n)$ bits such that Bob can verify with high probability whether or not $a = b$?
Here is the algorithm: Alice picks a prime $p$ uniformly at random from $\{2, \ldots, cn \ln n\}$ and computes $F(a) = a \bmod p$. Alice sends $F(a)$ and $p$ to Bob. Bob computes $F(b) = b \bmod p$ and checks whether $F(a) = F(b)$. The algorithm returns true if they are equal and false otherwise (with a small failure probability of $\frac{1}{c} + o(1)$ in the case where $a \neq b$ but $F(a) = F(b)$ due to a hash collision).
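For concreteness, here is a minimal sketch of the protocol in Python (the function names `alice_message` and `bob_check` are my own; the sieve is just one way to sample a prime uniformly from the range):

```python
import math
import random

def random_prime_up_to(m):
    # Sieve of Eratosthenes over {2, ..., m}, then pick a prime uniformly.
    sieve = [True] * (m + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(m) + 1):
        if sieve[i]:
            for j in range(i * i, m + 1, i):
                sieve[j] = False
    primes = [i for i, is_p in enumerate(sieve) if is_p]
    return random.choice(primes)

def alice_message(a, n, c):
    # Alice: pick p uniformly at random from the primes in
    # {2, ..., c*n*ln(n)} and send the fingerprint F(a) = a mod p with p.
    m = int(c * n * math.log(n))
    p = random_prime_up_to(m)
    return a % p, p

def bob_check(b, fa, p):
    # Bob: accept iff F(b) = F(a).
    return b % p == fa

# Equal inputs are always accepted; unequal inputs collide with
# probability at most 1/c + o(1) over the choice of p.
fa, p = alice_message(0xDEADBEEF, n=32, c=10)
assert bob_check(0xDEADBEEF, fa, p)
```

Note that only $F(a)$ and $p$ cross the channel, both of which fit in $O(\log n)$ bits since $p \le cn \ln n$.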
My question is whether you can extend the algorithm to work for negative integers as well while still maintaining $\Pr[\text{error}] = \frac{1}{c} + o(1)$. Intuitively, if $a$ and $b$ are both negative it should work just fine, but I am unsure of the case where one of $a$ or $b$ is negative and the other is positive.
Would the algorithm still work if you extend the problem so that $a$ and $b$ can be negative as well?
The sign is essentially just an extra bit, i.e., performing the experiment for the number range $-2^n, \ldots, 2^n - 1$ is the same as doing it for $0, \ldots, 2^{n+1} - 1$.
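One way to make this concrete (my own sketch, not from the lecture): both parties apply the same shift by $2^n$ before fingerprinting, which maps the signed range bijectively onto the unsigned one and preserves equality, so the original analysis for $(n+1)$-bit nonnegative inputs carries over unchanged.

```python
def to_unsigned(x, n):
    # Map x in {-2^n, ..., 2^n - 1} to {0, ..., 2^(n+1) - 1} by adding 2^n.
    # The shift is a bijection, so x == y iff to_unsigned(x, n) == to_unsigned(y, n).
    assert -(1 << n) <= x < (1 << n)
    return x + (1 << n)

# Alice and Bob each shift before computing their fingerprints:
# Alice sends to_unsigned(a, n) mod p, Bob compares against
# to_unsigned(b, n) mod p, and the 1/c + o(1) error bound still holds.
assert to_unsigned(-5, 8) != to_unsigned(5, 8)
```

Since both parties know $n$, no extra communication is needed for the shift.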