I am studying reinforcement learning using the textbook "Reinforcement Learning: An Introduction" by Richard S. Sutton and Andrew G. Barto.
However, I ran into a point in the iterative policy evaluation algorithm that confuses me, and I cannot tell whether I am wrong or the textbook is. Hence, I am posting this question to ask about it.
In the picture above, the second algorithm, "Policy Evaluation," contains the following update: $$\Delta \gets \max \left( \Delta , \lvert v - V(s) \rvert \right)$$
In my opinion, in order for the algorithm to terminate someday, $\max$ must be replaced with $\min$. However, all the iterative policy evaluation code I've seen on GitHub sticks with $\max$, not $\min$. I want to know which is correct, and if $\max$ is correct, can someone explain why?

That is not a mistake; the convergence follows from the fixed-point theorem (for a fixed policy, the Bellman update is a $\gamma$-contraction, so repeated application converges to $v_\pi$).
In each sweep, we examine every state and record in $\Delta$ the maximal update across all states.
We terminate only when even that largest change between the previous and current iteration is small enough, i.e. $\Delta < \theta$: this guarantees that *every* state's value has essentially stopped moving. If you used $\min$ instead, the loop would stop as soon as any single state happened to change very little, even while other values were still far from convergence.
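To make this concrete, here is a minimal sketch of iterative policy evaluation on a hypothetical two-state MDP (the states, transitions, and rewards are illustrative, not from the book): under the fixed policy, each state deterministically transitions to the other state with reward 1.

```python
gamma = 0.9      # discount factor
theta = 1e-6     # convergence threshold
V = {0: 0.0, 1: 0.0}

while True:
    delta = 0.0
    for s in V:
        v = V[s]
        next_s = 1 - s                      # deterministic transition to the other state
        V[s] = 1.0 + gamma * V[next_s]      # Bellman backup for the fixed policy
        delta = max(delta, abs(v - V[s]))   # track the LARGEST change this sweep
    if delta < theta:                       # stop only when ALL states have settled
        break

# Fixed point: V(s) = 1 + gamma * V(s'), so V converges to 1 / (1 - gamma) = 10
print(V[0], V[1])
```

Each sweep shrinks the distance to the fixed point by a factor of at most $\gamma$, which is why the loop is guaranteed to terminate despite using `max`.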