Justifying a randomization technique for efficiently checking equalities in a prime-order group


In cryptography papers, I have seen a technique used to confirm that two equalities hold in a group while performing only a single computation in the group. This is apparently more efficient than checking the equalities separately. However, I am having trouble justifying the method, or finding a paper that contains such a justification.

The method is as follows.

Suppose $G$ is a group of order $p$, where $p$ is prime, and suppose we are given $g_0,g_1,h_0,h_1\in G$. To check whether $g_0=h_0$ and $g_1=h_1$, we pick $x\in\mathbb{F}_p^*$ uniformly at random and instead check whether $g_0g_1^x = h_0h_1^x$ or, equivalently, whether $g_0g_1^x(h_0h_1^x)^{-1} = 1_G$. If this holds, then supposedly $g_0=h_0$ and $g_1=h_1$ with high probability (in terms of $p$). A further assumption of the setup is that, although the group elements are given in principle, we never learn their values individually. Think of it as being given formulas for the group elements, which we do not evaluate until we actually perform the single check involving $x$; at that point we evaluate $g_0g_1^x(h_0h_1^x)^{-1}$ in a manner that does not reveal the individual group elements.
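For concreteness, here is a minimal sketch of the check in a toy prime-order group; the tiny parameters ($p=11$, $q=23$, generator $4$) and the name `batch_check` are illustrative choices of mine, not taken from any particular paper:

```python
import random

# Toy prime-order group: the order-11 subgroup of Z_23^*.
# (Hypothetical tiny parameters purely for illustration.)
p, q = 11, 23      # p prime, q = 2p + 1 prime
gen = 4            # 4 = 2^2 generates the order-p subgroup of squares mod 23

def batch_check(g0, g1, h0, h1, rng=random):
    """Check g0 == h0 and g1 == h1 with one combined comparison."""
    x = rng.randrange(1, p)                       # uniform x in F_p^*
    return (g0 * pow(g1, x, q)) % q == (h0 * pow(h1, x, q)) % q

g0, g1 = pow(gen, 3, q), pow(gen, 7, q)

# Equal pairs pass for every choice of x:
assert all(batch_check(g0, g1, g0, g1) for _ in range(100))

# When both equalities fail, the combined check passes for at most one
# value of x, so a false pass happens with probability at most 1/(p-1):
h0, h1 = pow(gen, 5, q), pow(gen, 2, q)
rate = sum(batch_check(g0, g1, h0, h1) for _ in range(20000)) / 20000
print(rate)        # empirically close to 1/(p-1) = 0.1
```

In the intended setting, only the single combined value $g_0g_1^x(h_0h_1^x)^{-1}$ would ever be evaluated; the toy code compares the two sides directly only because the elements are in the clear here.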

I am likely missing some elementary argument, but I am having trouble proving that this method is sound. My attempt so far only shows that if the check is satisfied, then the probability that $g_0 \neq h_0$ or $g_1\neq h_1$ is at most $1/2$, which is nowhere near good enough, and even that assumes that all of the group elements are uniformly random and mutually independent. Does anyone know a proof of the soundness of this method, or a paper that contains such a proof? And is the uniformity and mutual independence assumption necessary?

Best Answer

Suppose $G$ is a group of order $p$, where $p$ is prime, and suppose we are given $g_0,g_1,h_0,h_1\in G$. If we want to check whether $g_0=h_0$ and $g_1=h_1$, we pick $x\in\mathbb{F}_p^*$ uniformly at random and instead check whether $$g_0g_1^x = h_0h_1^x\quad (1) $$ or, equivalently, we check $g_0g_1^x(h_0h_1^x)^{-1} = 1_G$. If this holds, then supposedly $g_0=h_0$ and $g_1=h_1$ with high probability (in terms of $p$).

Given the assumptions of uniformity and independence, each of the quantities $$ g_i,\;h_i,\;g_i^x,\;h_i^x,\quad i=0,1 $$ is uniformly distributed in $G$. Let $\gamma = g_0 h_0^{-1}$ and $\delta = (g_1 h_1^{-1})^x$. Then $\gamma$ and $\delta$ are each uniform in $G$, and they are independent of each other, since $\gamma$ is a function of $(g_0,h_0)$ alone while $\delta$ is a function of $(g_1,h_1,x)$ alone.

Since $x$ is nonzero modulo the prime $p$, raising to the $x$-th power is a bijection on $G$, so $\delta=1_G$ if and only if $g_1=h_1$. Hence $\gamma$ and $\delta$ are both equal to $1_G$ if and only if we have the desired equalities.
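This bijectivity step can be verified directly in a tiny example; the order-$11$ subgroup of $\mathbb{Z}_{23}^*$ below is a hypothetical toy group chosen purely for illustration:

```python
# In a group of prime order p, raising to any nonzero power x mod p is a
# bijection, so y^x = 1 holds iff y = 1. Toy group: the order-11 subgroup
# of Z_23^* (hypothetical tiny parameters for illustration).
p, q, gen = 11, 23, 4
G = {pow(gen, k, q) for k in range(p)}           # the whole subgroup

for x in range(1, p):
    image = {pow(y, x, q) for y in G}
    assert image == G                            # x-th power permutes G
    assert all((pow(y, x, q) == 1) == (y == 1) for y in G)
```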

If $\gamma \neq 1$, i.e., $g_0\neq h_0$, the test in (1) fails to detect this only if $\delta=\gamma^{-1}$, which happens with probability $1/|G|$ since $\delta$ is uniform and independent of $\gamma$.

Similarly, if $\delta \neq 1$, i.e., $g_1^x\neq h_1^x$, or equivalently $g_1\neq h_1$, the test in (1) fails to detect this only if $\gamma=\delta^{-1}$, which again happens with probability $1/|G|$ since $\gamma$ is uniform and independent of $\delta$.

Thus, by the union bound, the probability of declaring a false equality is at most the sum of these two probabilities, that is, at most $$ \frac{2}{|G|} = \frac{2}{p}, $$ which is negligible when the group order $p$ is large.
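Under these assumptions (all four elements uniform and mutually independent), the $2/|G|$ bound can be sanity-checked empirically; the toy group, seed, and sample size below are hypothetical choices for illustration:

```python
import random

# Monte Carlo sanity check of the 2/|G| bound when g0, g1, h0, h1 are
# uniform and mutually independent. Toy group: the order-11 subgroup of
# Z_23^* (hypothetical tiny parameters for illustration).
p, q, gen = 11, 23, 4
elements = [pow(gen, k, q) for k in range(p)]
rng = random.Random(0)                     # fixed seed for reproducibility

candidates = passes = 0
for _ in range(200000):
    g0, g1, h0, h1 = (rng.choice(elements) for _ in range(4))
    if g0 == h0 and g1 == h1:
        continue                           # true equality: cannot be a false pass
    x = rng.randrange(1, p)                # uniform x in F_p^*
    candidates += 1
    if (g0 * pow(g1, x, q)) % q == (h0 * pow(h1, x, q)) % q:
        passes += 1

false_pass_rate = passes / candidates
print(false_pass_rate, 2 / p)              # observed rate vs the 2/|G| bound
```

The observed rate should come in below the $2/|G|$ bound; the bound is a union-bound overestimate rather than the exact false-pass probability.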

The underlying assumption of the setup is also that, while in theory the group elements are given, we don't actually know their values individually.

This can be an advantage in some settings with respect to information leakage, since the checker never needs to learn the individual values.

I am likely missing some elementary argument, but I am having trouble proving that this method is sound. My attempt so far has given that if the check is satisfied, then the probability that $g_0 \neq h_0$ or $g_1\neq h_1$ is at most $1/2$, which is nowhere near good enough.

And is the uniformity and mutual independence assumption necessary?

Yes, for this argument: without uniformity and independence, the same proof goes through only with a larger probability of declaring a false equality, which you can compute if you can quantify the dependence and non-uniformity of the relevant distributions.