I have been reading through Stephen Boyd's famous textbook *Convex Optimization* (PDF).
Regarding primal-dual interior point methods, §11.7.2 on page 612 states:
> In the primal-dual interior-point method the iterates $x^{(k)}$, $\lambda^{(k)}$, and $\nu^{(k)}$ are not necessarily feasible, except in the limit as the algorithm converges.
Note: $x$ is the primal variable, $\lambda$ and $\nu$ are the dual variables (Lagrange multipliers associated with the inequality and equality constraints, respectively).
There are two things I do not understand about this statement, which is central to the primal-dual interior-point method (Algorithm 11.2).
First, how can the iterate $\nu^{(k)}$ possibly be infeasible? The dual variable $\nu$ ranges over all of $\mathbb R^p$ (page 561), so any choice of $\nu$ should be feasible. What am I missing?
Second, the initialization of the primal-dual interior-point method (Algorithm 11.2) requires a strictly primal-feasible $x$ and $\lambda \succ 0$, and step 3 of the algorithm (the line search) guarantees that the next iterate also satisfies these conditions. Doesn't this imply that the iterates $x^{(k)}$, $\lambda^{(k)}$, and $\nu^{(k)}$ are necessarily feasible?
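To make my confusion concrete, here is a toy example I put together (the problem and the numbers are my own, not from the book). It computes the dual residual $r_{\text{dual}} = \nabla f_0(x) + \lambda \nabla f_1(x) + A^T\nu$ from §11.7.1 at a point where $x$ is strictly primal feasible and $\lambda > 0$, yet the residual is nonzero:

```python
# Toy problem (my own construction, not from the book):
#   minimize    f0(x) = x1^2 + x2^2
#   subject to  f1(x) = -x1 <= 0     (multiplier lambda)
#               x1 + x2 = 1          (multiplier nu)
#
# Dual residual from Boyd & Vandenberghe, section 11.7.1:
#   r_dual = grad f0(x) + lambda * grad f1(x) + A^T nu
#          = (2*x1 - lambda + nu, 2*x2 + nu)

def r_dual(x1, x2, lam, nu):
    """Dual residual of the toy problem at (x, lambda, nu)."""
    return (2 * x1 - lam + nu, 2 * x2 + nu)

# Strictly feasible x (x1 > 0, x1 + x2 = 1) and lambda > 0,
# with some arbitrary nu -- the residual is nonzero:
print(r_dual(0.3, 0.7, 1.0, 0.0))   # -> (-0.4, 1.4)

# At the optimum x* = (0.5, 0.5), lambda* = 0, nu* = -1,
# the residual vanishes:
print(r_dual(0.5, 0.5, 0.0, -1.0))  # -> (0.0, 0.0)
```

So it seems that a point with $\lambda > 0$ and $\nu \in \mathbb R^p$ can still have $r_{\text{dual}} \neq 0$, which I suspect is related to my question, but I don't see how this squares with the dual feasible set being $\{(\lambda, \nu) : \lambda \succeq 0\}$.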
Finally, if the iterates were always feasible, why would the surrogate duality gap $\hat\eta$ be needed at all?