We know that any positive integer can be represented as
$$\displaystyle\sum_{i=0}^\infty a_i \cdot 2^i$$
where $a_i \in \{ 0, \pm 1 \}$. Given an arbitrarily large integer, how can we find its sparsest signed binary representation, i.e., the one with the fewest nonzero $a_i$’s?
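As a small illustration of what "sparsest" buys you (my example, not part of the original question): the number $7$ needs three nonzero digits in ordinary binary, but only two once the digit $-1$ is allowed.

```latex
7 = 2^2 + 2^1 + 2^0   % three nonzero digits: 111
7 = 2^3 - 2^0         % two nonzero digits:   100(-1)
```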
One algorithm is to write the number in standard binary and scan upward from the $a_0$ term. Whenever two adjacent terms $a_i, a_{i+1}$ are both $1$, replace them using the identity $2^i + 2^{i+1} = 2^{i+2} - 2^i$: set $a_i = -1$, $a_{i+1} = 0$, and add $1$ to $a_{i+2}$. If $a_{i+2}$ was already $1$, it is now $2$ and you can carry, setting $a_{i+2} = 0$ and adding $1$ to $a_{i+3}$, propagating further if necessary. Keep going. This will terminate at most one bit higher than you started with, as you know that bit started out zero. As you make two passes through the binary expansion of the number, this is $O(\log n)$.
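The scan-and-carry procedure above can be sketched in Python as follows (a minimal implementation of my own, with the function name and the least-significant-first digit layout chosen for illustration):

```python
def sparse_signed_binary(n):
    """Return signed binary digits of a positive integer n, least
    significant first, with each digit in {-1, 0, 1}.

    Implements the rewrite from the text: whenever a_i = a_{i+1} = 1,
    apply 2^i + 2^(i+1) = 2^(i+2) - 2^i, carrying upward as needed."""
    # Standard binary digits, least significant first.
    digits = [int(b) for b in bin(n)[2:][::-1]]
    digits += [0, 0]  # slack for the result growing at most one bit, plus carry room

    for i in range(len(digits) - 1):
        if digits[i] == 1 and digits[i + 1] == 1:
            digits[i] = -1
            digits[i + 1] = 0
            j = i + 2
            digits[j] += 1
            while digits[j] == 2:  # propagate the carry upward
                digits[j] = 0
                digits[j + 1] += 1
                j += 1

    # Strip unused high zeros.
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()
    return digits

# Example: 7 = 0b111 becomes [-1, 0, 0, 1], i.e. 2^3 - 2^0.
print(sparse_signed_binary(7))
```

The output never has two adjacent nonzero digits (this is the non-adjacent form), which is what makes the representation sparsest.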