I'm trying to perform an analysis of a recursion. I'll provide some background first and then the actual question.
Let $\left\{ \omega_j \right\}_{j\in\mathbb{N}} = \left\{ \arctan \left(2^{-j}\right) \right\}_{j\in\mathbb{N}}$ and let $\theta \in \left[-\sum_{j=0}^{+\infty} \omega_j, \sum_{j=0}^{+\infty} \omega_j\right]$ be given. One can prove that the following recursion
$$ t_j = \left\{ \begin{array}{ll} 0 & j = 0 \\ t_{j-1} + d_{j-1}\omega_{j-1} & j > 0 \end{array} \right. $$ with $$ d_j = \left\{ \begin{array}{ll} 1 & t_j \leq \theta \\ -1 & t_j > \theta \end{array} \right. $$ converges to $\theta$. For a given angle $\psi$, the (counterclockwise) rotation matrix is given by
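As a quick numerical sanity check of this convergence claim (a minimal sketch; the function name, the choice of $\theta$, and the iteration count are mine, not part of the question):

```python
import math

def angle_recursion(theta, n=50):
    """Run t_{j+1} = t_j + d_j * omega_j with omega_j = arctan(2^-j)."""
    t = 0.0
    for j in range(n):
        d = 1.0 if t <= theta else -1.0
        t += d * math.atan(2.0 ** (-j))
    return t

theta = 0.7  # any theta in [-sum omega_j, sum omega_j] ~ [-1.743, 1.743]
print(abs(angle_recursion(theta) - theta))  # residual near machine precision
```

The residual after $n$ steps is bounded by $\omega_{n-1}$, so with $n = 50$ it sits at the floating-point noise floor.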
$$ R_{\psi} = \begin{pmatrix} \cos(\psi) & -\sin(\psi) \\ \sin(\psi) & \cos(\psi) \end{pmatrix} = \frac{1}{\sqrt{1 + \tan^2(\psi)}} \begin{pmatrix} 1 & -\tan(\psi) \\ \tan(\psi) & 1 \end{pmatrix}. $$ Using the definition of $t_j$ above, we can write the transformation $$ \vec{x}_{j+1} = R_{t_{j+1}}\vec{x}_0 = R_{t_j + d_j\omega_j} \vec{x}_0 = R_{d_j \omega_j} R_{t_j} \vec{x}_0 = R_{d_j \omega_j} \vec{x}_{j} \Rightarrow \vec{x}_{j+1} = R_{d_j\omega_j} \vec{x}_j. $$ Given the definition of $\omega_j$, we can write $$ R_{d_j\omega_j} = \frac{1}{\sqrt{1+2^{-2j}}}\left(I + d_j2^{-j}J \right) = \frac{1}{\sqrt{1+2^{-2j}}}A_j, $$ where $$ \begin{array}{l} I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \\ J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \end{array} $$ Now the question. Suppose that $\vec{x}_0 = \left(\frac{1}{K},0\right)^T$ with $$ K = \prod_{j=0}^{+\infty}\sqrt{1+2^{-2j}} $$ (each step contributes a factor $\sqrt{1+2^{-2j}}$ to the norm, hence the square root). The recursion I want to study is $$ \vec{x}_{j+1} = A_j \vec{x}_j $$ (without the scale factor). Specifically, I want to study the properties of the residual $$ \lVert \vec{x} - \vec{x}_j \rVert = \lVert \vec{\epsilon}_j \rVert, $$ and I managed to derive the relation $$ \vec{\epsilon}_{j+1} = A_j \vec{\epsilon}_j - d_j 2^{-j} J \vec{x}. $$ Here $\vec{x}$ is the limit value, which happens to be $(\cos(\theta),\sin(\theta))^T$.
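The unscaled recursion can also be checked numerically (a sketch under my own choices of $\theta$ and truncation depth `N`; truncating the infinite product for the gain assumes the tail is negligible, and the square root in the gain reflects that each step scales the norm by $\sqrt{1+2^{-2j}}$):

```python
import math

theta, N = 0.7, 45

# Truncated gain K = prod_{j=0}^{N-1} sqrt(1 + 2^{-2j}); tail terms are ~1.
K = 1.0
for j in range(N):
    K *= math.sqrt(1.0 + 2.0 ** (-2 * j))

x, y = 1.0 / K, 0.0   # x_0 = (1/K, 0)^T
t = 0.0
for j in range(N):
    d = 1.0 if t <= theta else -1.0
    s = d * 2.0 ** (-j)
    x, y = x - s * y, s * x + y   # x_{j+1} = A_j x_j with A_j = I + d_j 2^{-j} J
    t += d * math.atan(2.0 ** (-j))

print(x - math.cos(theta), y - math.sin(theta))  # both tiny
```

The iterate indeed lands on $(\cos\theta, \sin\theta)^T$, since the accumulated factors $\sqrt{1+2^{-2j}}$ exactly cancel the initial $1/K$.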
I was specifically wondering whether the sequence of error norms is monotonically decreasing, i.e. whether $\lVert \vec{\epsilon}_{j+1} \rVert < \lVert \vec{\epsilon}_{j} \rVert$ holds for all $j$. My attempt was based on observing that the induced $2$-norms are
$$ \begin{array}{l} \lVert A_j \rVert_2 = \sqrt{1+2^{-2j}}, \\ \lVert d_j2^{-j} J \rVert_2 = 2^{-j} \lVert J \rVert_2 = 2^{-j}, \end{array} $$ therefore, using the triangle inequality and $\lVert \vec{x} \rVert_2 = 1$, I get $$ \lVert \vec{\epsilon}_{j+1} \rVert_2 = \lVert A_j \vec{\epsilon}_j - d_j 2^{-j} J \vec{x} \rVert_2 \leq \lVert A_j \vec{\epsilon}_j \rVert_2 + \lVert d_j 2^{-j} J \vec{x} \rVert_2 \leq \lVert A_j \rVert_2 \lVert \vec{\epsilon}_j \rVert_2 + \lVert d_j 2^{-j} J \rVert_2 \lVert \vec{x} \rVert_2 = \sqrt{1+2^{-2j}} \lVert \vec{\epsilon}_j \rVert_2 + 2^{-j}, $$ namely I end up with the inequality
$$ \lVert \vec{\epsilon}_{j+1} \rVert_2 \leq \sqrt{1+2^{-2j}} \lVert \vec{\epsilon}_j \rVert_2 + 2^{-j}, $$ but this doesn't seem enough to prove monotonicity. Any suggestions?
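Since the bound alone is inconclusive, one can at least probe monotonicity empirically before attempting a proof (a sketch; `error_norms` and the parameter choices are mine, and a numerical run is evidence, not a proof):

```python
import math

def error_norms(theta, N=45):
    """Return [||eps_0||, ..., ||eps_N||] for the unscaled recursion x_{j+1} = A_j x_j."""
    # Truncated gain K = prod sqrt(1 + 2^{-2j}), so the limit is (cos(theta), sin(theta)).
    K = 1.0
    for j in range(N):
        K *= math.sqrt(1.0 + 2.0 ** (-2 * j))
    cx, cy = math.cos(theta), math.sin(theta)  # the limit vector x
    x, y, t = 1.0 / K, 0.0, 0.0
    norms = [math.hypot(cx - x, cy - y)]
    for j in range(N):
        d = 1.0 if t <= theta else -1.0
        s = d * 2.0 ** (-j)
        x, y = x - s * y, s * x + y   # A_j = I + d_j 2^{-j} J
        t += d * math.atan(2.0 ** (-j))
        norms.append(math.hypot(cx - x, cy - y))
    return norms

eps = error_norms(0.7)
violations = [j for j in range(len(eps) - 1) if eps[j + 1] >= eps[j]]
print(violations[:5], eps[-1])  # indices where the norm fails to decrease, final residual
```

Scanning `violations` over several values of $\theta$ would quickly show whether strict monotonicity can hold at all, or whether only eventual (or up-to-a-constant) decrease is plausible.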