In several applications and examples in Computer Science (Algorithms & Data Structures), one needs to find two numbers $a_S \in A$, $b_S \in B$ from two sorted sequences $A$ and $B$ whose sum equals a given number $S$.
The most efficient way of doing this is as follows:
Inputs: $S$, $A = [a_1, a_2, ..., a_n]$, $B = [b_1, b_2, ..., b_m]$
1. set $i = 1$, $j = m$
2. if $i > n$ or $j < 1$, stop: no such pair exists
3. if $a_i + b_j = S$ we're finished
4. if $a_i + b_j < S$ we increase $i$ by 1
5. if $a_i + b_j > S$ we decrease $j$ by 1
6. go to 2.
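The steps above can be sketched as follows (a minimal illustration, using 0-based indexing and returning `None` when no pair exists; the function name is my own choice, not from any library):

```python
def find_pair(S, A, B):
    """Two-pointer search over sorted sequences A and B (ascending).

    Returns a pair (a, b) with a in A, b in B and a + b == S,
    or None if no such pair exists.
    """
    i, j = 0, len(B) - 1  # start at smallest of A, largest of B
    while i < len(A) and j >= 0:
        total = A[i] + B[j]
        if total == S:
            return A[i], B[j]
        elif total < S:
            i += 1  # sum too small: only moving i right can increase it
        else:
            j -= 1  # sum too large: only moving j left can decrease it
    return None  # pointers exhausted without finding a pair
```

For example, `find_pair(9, [1, 3, 5, 7], [2, 4, 6, 8])` returns `(1, 8)`. Each step advances exactly one pointer, so the loop runs at most $n + m$ times.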
Although it makes sense intuitively, especially when working through examples, I can't get my head around why it's correct despite being so simple, given that we only check a minute subset of all possible combinations.
My questions:
- What is the formal proof?
- if instead of $a_i + b_j = S$ we generalise to $f(a_i, b_j) = S$ (e.g. multiplication, or any other arbitrary function), what are the conditions on $f$ for this algorithm to remain correct?