I am trying to understand this algorithm for perceptron, but do not understand how it works (source, slide 25):
Let $w$ be the separating hyperplane and $y \in \{-1, 1\}$.
Iteratively
- Find a vector $x_i$ for which $(w^\top \cdot x_i)(y_i) \lt 0$
- Add $x_i$ to $w$:
$$ w_{t+1}\leftarrow w_t + y_i x_i $$
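The loop above can be sketched in code like this (a minimal NumPy sketch; the function name, `max_iters`, and the toy dataset are mine, and I use $\le 0$ rather than $\lt 0$ so that a zero-initialized $w$ also counts as a mistake):

```python
import numpy as np

def perceptron(X, y, max_iters=1000):
    """Perceptron sketch: X is an (n, d) array, y holds labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        # Find a misclassified point, i.e. one with (w . x_i) * y_i <= 0
        mistakes = [(x_i, y_i) for x_i, y_i in zip(X, y)
                    if (w @ x_i) * y_i <= 0]
        if not mistakes:
            return w  # every point satisfies (w . x_i) * y_i > 0: done
        x_i, y_i = mistakes[0]
        w = w + y_i * x_i  # the update w_{t+1} <- w_t + y_i * x_i
    return w
```

The key point is that the update is only applied to a point that is currently misclassified; once no such point exists, the loop stops.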
But imagine running the algorithm. Say we have a point $(2,2)$ with label $y = -1$, and we start with the hyperplane $w_0 = (2, -1)$. As I understand it, the algorithm will iteratively update $w_t$ by adding $-1 \cdot (2, 2) = (-2, -2)$ at each step. What happens?
$$
\begin{aligned}
w_0 &= (2, -1)\\
w_1 &= (0, -3)\\
w_2 &= (-2, -5)\\
w_3 &= (-4, -7)\\
w_4 &= (-6, -9)\\
w_5 &= (-8, -11)\\
&\;\;\vdots\\
w_n &= (k, k-3), \quad k = 2 - 2n
\end{aligned}
$$
I've drawn these hyperplanes and $x = (2,2)$:
To my mind, the perceptron's hyperplane will keep moving closer and closer to $(2,2)$ without ever crossing it. Thus, $(2,2)$ will forever be misclassified as $1$ instead of $-1$.
Am I misinterpreting this algorithm?

Note that after a single update, $w_1 = w_0 + y\,x = (2,-1) + (-1)(2,2) = (0,-3)$, and $$ (w_1^\top x)\, y = (-6) \cdot (-1) = 6 > 0. $$ So $x$ is no longer misclassified, the algorithm finds no vector violating the condition, and it terminates with $w = (0, -3)$. The resulting classifier labels $x$ as $\texttt{sign}(w^\top x) = -1$. The flaw in your trace is that the update is applied repeatedly without re-checking the misclassification condition; the algorithm only updates on points that currently satisfy $(w^\top x_i)\,y_i < 0$.
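You can verify this on your own numbers (a quick NumPy check; the variable names are mine):

```python
import numpy as np

# The example from the question: start at w0 = (2, -1),
# with the single point x = (2, 2) labeled y = -1.
w = np.array([2.0, -1.0])
x = np.array([2.0, 2.0])
y = -1

# w0 misclassifies x: (w . x) * y = 2 * (-1) = -2 < 0
assert (w @ x) * y < 0

# One perceptron update: w1 = w0 + y * x = (0, -3)
w = w + y * x
print(w)            # [ 0. -3.]
print((w @ x) * y)  # 6.0 > 0, so no further update is triggered
```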