The scalar product equation $$ {\bf v\cdot w}=s\\v_1w_1+v_2w_2+\cdots+v_nw_n = s$$ is linear in $\bf v$ alone and in $\bf w$ alone (it is bilinear). But can we somehow make it linear in both simultaneously?
In the 2D special case $s=0$ we can "cheat" by setting $$\cases{v_1=-w_2\\v_2=w_1} \Leftrightarrow \cases{v_1+w_2=0\\v_2-w_1=0},$$ which is simultaneously linear in both vectors (it works because it forces $\bf v$ to be $\bf w$ rotated by $90^\circ$). Using the vectorization $${\bf q} = \left[\begin{array}{c}\bf v\\\bf w\end{array}\right] = \left[\begin{array}{c}v_1\\v_2\\w_1\\w_2\end{array}\right]$$ we can write the equations as $$\left[\begin{array}{cccc|c}1&0&0&1&0\\0&1&-1&0&0\end{array}\right] \text{ or with matrices: } \\\phantom{a}\\{\bf M}= \left[\begin{array}{cccc}1&0&0&1\\0&1&-1&0\end{array}\right], \quad {\bf Mq=0}$$
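The 2D trick is easy to check numerically. A minimal NumPy sketch (the sample vector $\bf w = (3,4)$ is just an illustration):

```python
import numpy as np

# The matrix M from the 2D "cheat": its rows encode v1 + w2 = 0 and v2 - w1 = 0.
M = np.array([[1.0, 0.0,  0.0, 1.0],
              [0.0, 1.0, -1.0, 0.0]])

# Any q = [v1, v2, w1, w2] in the null space of M satisfies both equations.
# Illustrative example: w = (3, 4), so v = (-w2, w1) = (-4, 3).
q = np.array([-4.0, 3.0, 3.0, 4.0])
assert np.allclose(M @ q, 0)   # q solves the linear system M q = 0

v, w = q[:2], q[2:]
print(v @ w)  # 0.0 -- the bilinear constraint v.w = 0 holds
```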
Here is one approach. Assume the columns of $\bf D$ are a first estimate of the vectors to be made orthogonal, and $\bf V$ is an additive update:
$${\bf D^T(V+D)} = \begin{bmatrix}d_1^T(v_1+d_1)&d_1^T(v_2+d_2)\\d_2^T(v_1+d_1)&d_2^T(v_2+d_2)\end{bmatrix}$$
When we are done and ${\bf V=0}$, this product should be $\begin{bmatrix}|d_1|^2&0\\0&|d_2|^2\end{bmatrix}$.
That is, we want to preserve the diagonal entries (penalizing any $\bf V$ that changes the diagonal values), and we want to penalize the off-diagonal entries of the result ${\bf (V+D)}$. Letting ${\bf M}_{R(D^T)}$ denote the matrix performing the multiplication by ${\bf D^T}$ on the vectorized argument, ${\bf P}_1$ the matrix picking out the off-diagonal elements, and ${\bf P}_2$ the matrix picking out the diagonal elements, we can then aim to linearly minimize:
$$\|{\bf P}_1 {\bf M}_{R(D^T)}\text{vec}({\bf V+D})\|_2+\|{\bf P}_2 {\bf M}_{R(D^T)}\text{vec}({\bf V})\|_2$$
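As a sanity check, here is a NumPy sketch of this minimization. It makes several choices of my own that the text above leaves open: column-major $\text{vec}$, ${\bf M}_{R(D^T)}$ realized as ${\bf I}\otimes{\bf D}^T$ so that it maps $\text{vec}({\bf X})$ to $\text{vec}({\bf D^TX})$ (matching the product ${\bf D^T(V+D)}$ displayed earlier), squared norms stacked into a single least-squares problem, and a damping factor $\alpha$ on the update; the matrix $\bf D$ is illustrative.

```python
import numpy as np

# Two nearly parallel column vectors as the first estimate D (illustrative).
D = np.array([[1.0, 0.9],
              [0.1, 1.0]])
n = D.shape[1]

# M represents X -> D^T X on column-major vec(X): vec(D^T X) = (I kron D^T) vec(X).
M = np.kron(np.eye(n), D.T)
P1 = np.eye(n * n)[[1, 2]]   # picks the off-diagonal entries of the 2x2 result
P2 = np.eye(n * n)[[0, 3]]   # picks the diagonal entries

vecD = D.flatten(order='F')

# Minimize ||P1 M vec(V+D)||^2 + ||P2 M vec(V)||^2 by stacking both
# residual blocks into one least-squares system in the unknown vec(V).
A = np.vstack([P1 @ M, P2 @ M])
b = np.concatenate([-P1 @ M @ vecD, np.zeros(n)])
vecV, *_ = np.linalg.lstsq(A, b, rcond=None)
V = vecV.reshape((n, n), order='F')

alpha = 0.5                  # damped update: a full linear step can overshoot past 90 degrees
Dnew = D + alpha * V

def cos_angle(X):
    """Cosine of the angle between the two columns of X."""
    return X[:, 0] @ X[:, 1] / (np.linalg.norm(X[:, 0]) * np.linalg.norm(X[:, 1]))

cos_before, cos_after = cos_angle(D), cos_angle(Dnew)
print(abs(cos_after) < abs(cos_before))  # True: the updated vectors are closer to orthogonal
```

A note on the damping: in this example the stacked system is square and invertible, so $\alpha=1$ makes ${\bf D^T(V+D)}$ exactly diagonal; but that does not make the *new* columns mutually orthogonal to each other, and the full step here swings past $90^\circ$, which is why a partial step is taken.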
Many more useful terms could be added, but the answer would become too crowded. Here is an image showing that it "kind of works": the new red vectors are closer to a $90^\circ$ angle with each other than the blue ones were: