$\newcommand{\vb}[1]{\boldsymbol{#1}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\matspace}{\mathcal{M}_{n\times n}(\R)}$ $\newcommand{\diag}{\mathrm{diag}}$ $\newcommand{\defeq}{\stackrel{\scriptsize\text{def}}{=}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\norm}[1]{\left\| #1 \right\|_{\infty}}$
I was reading in Frank Poole's Introduction to Linear Algebra, section 7.2, a passage on the convergence of the Jacobi method for diagonally dominant matrices. To use the same notation as the book, the system we are trying to solve is $$A\vb{x} = \vb{b}\,,$$ where $A\in\matspace$, $\vb{x}\in\R^n$ and $\vb{b}\in\R^n$. We also define $D\in\matspace$ as the diagonal matrix containing the diagonal entries of $A$, $U\in\matspace$ as the strictly upper triangular matrix containing the entries above the diagonal, and $L\in\matspace$ as the strictly lower triangular matrix containing the entries below the diagonal, so that $A = D + L + U\,$. To simplify notation, we define $G \defeq -D^{-1}(L + U)$ and $\vb{c} \defeq D^{-1}\vb{b}$.
The equation $A\vb{x} = \vb{b}$ is equivalent to $$ \vb{x} = G\vb{x} + \vb{c}\,; $$ the iterates of the Jacobi method are thus obtained as $$ \vb{x}^{k+1} = G\vb{x}^k + \vb{c}\qquad \forall k \in\N \,. $$
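As a concrete sketch, here is the splitting and the iteration on a small made-up system (my own $2\times 2$ example, not from the book):

```python
import numpy as np

# Hypothetical diagonally dominant system, chosen so the exact solution is x = (1, 1).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 1.0])

# Split A = D + L + U and form G = -D^{-1}(L + U), c = D^{-1} b.
D = np.diag(np.diag(A))
G = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b

# Jacobi iteration: x^{k+1} = G x^k + c, starting from x^0 = 0.
x = np.zeros(2)
for _ in range(50):
    x = G @ x + c

print(x)  # -> approximately [1. 1.]
```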
Since the exact solution satisfies $\vb{x} = G\vb{x} + \vb{c}$, subtracting the two relations gives $\vb{x}-\vb{x}^{k+1} = G(\vb{x}-\vb{x}^{k})$, and hence $$\norm{\vb{x}-\vb{x}^{k+1}} \le \norm{G}\norm{\vb{x} - \vb{x}^k} < \norm{\vb{x} - \vb{x}^k}$$ since $\norm{G}<1$ (which can be proven from the fact that $A$ is diagonally dominant), implying that the sequence of iterates converges to $\vb{x}$. Applied recursively, the inequality above also yields $$ \norm{\vb{x}-\vb{x}^{k}} \le \norm{G}^k\norm{\vb{x} - \vb{x}^0}\,, $$ which can be used to bound the error of an approximation to the desired precision (by finding a lower bound for the number of iterations).
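The geometric error bound can be checked numerically on a toy system (again my own example, taking $\norm{\cdot}$ to be the $\infty$-norm, as in the book):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # strictly diagonally dominant
b = np.array([1.0, 1.0])
x_exact = np.linalg.solve(A, b)

D = np.diag(np.diag(A))
G = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b
norm_G = np.linalg.norm(G, np.inf)  # = 0.5 < 1 for this A

x = np.zeros(2)                      # x^0 = 0
err0 = np.linalg.norm(x_exact - x, np.inf)
for k in range(1, 11):
    x = G @ x + c
    err_k = np.linalg.norm(x_exact - x, np.inf)
    # ||x - x^k|| <= ||G||^k * ||x - x^0||  (small slack for rounding)
    assert err_k <= norm_G**k * err0 + 1e-12
```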
Let $\varepsilon \in \R^+$, and suppose we want to ensure $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$. It would be sufficient to ensure $$ \norm{G}^k\norm{\vb{x} - \vb{x}^0} < \varepsilon \quad\Leftrightarrow\quad k > \frac{\log\varepsilon - \log\norm{\vb{x} - \vb{x}^0}}{\log\norm{G}} = \frac{\log\norm{\vb{x} - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}} $$ (the last equality is just for convenience, since $\norm{G} < 1$ implies $\log\norm{G}^{-1} > 0$).
Now, the problem is the expression $\norm{\vb{x} - \vb{x}^0}\,$: to use it to estimate the number of iterations needed, we would have to know the "exact" solution, $\vb{x}$, which is impractical. So, in a practical situation, we would need to estimate that expression. Concretely, we would need to find an upper bound $M\in\R^+\,$, such that $\norm{\vb{x} - \vb{x}^0} \le M\,.$ Indeed, the inequality $$ k > \frac{\log M - \log\varepsilon}{\log\norm{G}^{-1}} $$ would necessarily imply $$ k > \frac{\log\norm{\vb{x} - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}}\,, $$ and thus, through all our previous reasoning, the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$ would be satisfied, so we could set the number of iterations to $$ k = \left\lceil \frac{\log M - \log\varepsilon}{\log\norm{G}^{-1}} \right\rceil\,. $$
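In code, the resulting iteration count could be computed along these lines (a sketch; the bound `M` and the norm of `G` are assumed given):

```python
import math

def iterations_needed(M, eps, norm_G):
    """Smallest integer k with k > (log M - log eps) / log(1 / ||G||),
    which guarantees ||G||^k * M < eps.  Assumes 0 < norm_G < 1 and
    M >= eps > 0; in the edge case where the quotient is an exact
    integer, one extra iteration would be needed for strictness."""
    return math.ceil((math.log(M) - math.log(eps)) / math.log(1.0 / norm_G))
```

For instance, with $M = 1$, $\varepsilon = 10^{-6}$ and $\norm{G} = \tfrac12$, this gives $k = 20$, and indeed $(1/2)^{20} \approx 9.5\times 10^{-7} < 10^{-6}$.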
But the author of the book simply takes $\norm{\vb{x} - \vb{x}^0} \approx \norm{\vb{x}^1 - \vb{x}^0}\,$, without making sure that $\norm{\vb{x} - \vb{x}^0} \le \norm{\vb{x}^1 - \vb{x}^0}\,$. Wouldn't that allow the possibility that $\norm{\vb{x} - \vb{x}^0} > \norm{\vb{x}^1 - \vb{x}^0}\,,$ and thus imply the possibility of the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$ not being satisfied (by taking $k = \left\lceil\frac{\log \norm{\vb{x}^1 - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}} \right\rceil$)?
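For what it's worth, a quick numerical check on a made-up system (my example, not the book's) shows that this underestimation can indeed happen:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # strictly diagonally dominant
b = np.array([1.0, 1.0])             # exact solution is x = (1, 1)
x_exact = np.linalg.solve(A, b)

D = np.diag(np.diag(A))
G = -np.linalg.inv(D) @ (A - D)
c = np.linalg.inv(D) @ b

x0 = np.zeros(2)
x1 = G @ x0 + c                      # first Jacobi iterate (= c here)

print(np.linalg.norm(x_exact - x0, np.inf))  # -> 1.0
print(np.linalg.norm(x1 - x0, np.inf))       # -> 0.5
# So ||x - x^0|| = 1.0 > 0.5 = ||x^1 - x^0||: the approximation
# underestimates the true initial error for this system.
```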
If so, what would be a way of finding an actual upper bound approximation of $\norm{\vb{x}-\vb{x}^{0}} = \norm{\vb{x}}$ (supposing $\vb{x}^0=\vb{0}$), therefore guaranteeing the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$?