I'm reading about nonlinear programming and the Goldstein test. Here is the definition from my book:
A line search accuracy test that is frequently used is the Goldstein test. A value of $\alpha \geq0$ is considered not too small in the Goldstein test if
$$\phi(\alpha) > \phi(0) + (1-\epsilon)\phi'(0)\alpha,\;\;\;\;\;(1)$$
where $\phi(\alpha) = f(\textbf{x}+\alpha\textbf{d}),$ for some point $\textbf{x}\in E^n$ and feasible direction $\textbf{d}\in E^n$. In terms of the original notation, the Goldstein criterion for an acceptable value of $\alpha$, with corresponding $\textbf{x}_{k+1} = \textbf{x}_k+\alpha\textbf{d}_k$, is
$$\epsilon \leq \frac{f(\textbf{x}_{k+1})-f(\textbf{x}_k)}{\alpha\nabla f(\textbf{x}_k)\textbf{d}_k}\leq 1-\epsilon.\;\;\;\;\;(2)$$
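To make sure I read $(2)$ correctly, here is a small Python sketch of the acceptance test (the quadratic $f$, the step sizes, and $\epsilon = 0.1$ are my own illustration, not from the book):

```python
import numpy as np

def goldstein_accept(f, grad_f, x, d, alpha, eps=0.1):
    """Goldstein criterion (2): accept alpha iff
    eps <= (f(x + alpha*d) - f(x)) / (alpha * grad_f(x) @ d) <= 1 - eps."""
    ratio = (f(x + alpha * d) - f(x)) / (alpha * grad_f(x) @ d)
    return eps <= ratio <= 1 - eps

# Illustrative example: f(x) = x^T x with steepest-descent direction d = -grad f(x)
f = lambda x: x @ x
grad_f = lambda x: 2 * x
x = np.array([1.0, 1.0])
d = -grad_f(x)

print(goldstein_accept(f, grad_f, x, d, alpha=0.25))  # moderate step: accepted
print(goldstein_accept(f, grad_f, x, d, alpha=0.01))  # tiny step: ratio near 1, rejected
```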

My question is: Are $(1)$ and $(2)$ supposed to be the same expression, i.e. equivalent?
Because from $(1)$ I get:
$$\phi(\alpha) > \phi(0) + (1-\epsilon)\phi'(0)\alpha$$
$$\iff f(\textbf{x}_{k+1}) > f(\textbf{x}_k) + (1-\epsilon)\nabla f(\textbf{x}_k)\textbf{d}_k \alpha$$
$$\iff \frac{f(\textbf{x}_{k+1})-f(\textbf{x}_k)}{\alpha\nabla f(\textbf{x}_k)\textbf{d}_k} >1-\epsilon.\;\;\;\;\;(3)$$
So, did I make a mistake somewhere, or is there a fault in the definition in my book? Please comment if my question is unclear =)
P.S. Here you can find more details: book, page 232
Optimization algorithms try to find a minimum, and so they mostly work with descent directions.
Along a descent direction you have $\phi'(0) = \nabla f(\textbf{x}_k)\textbf{d}_k < 0$, so the inequality sign flips when you divide both sides by $\alpha\phi'(0) < 0$: instead of $(3)$, condition $(1)$ is equivalent to
$$\frac{f(\textbf{x}_{k+1})-f(\textbf{x}_k)}{\alpha\nabla f(\textbf{x}_k)\textbf{d}_k} < 1-\epsilon,$$
which is exactly the right-hand inequality in $(2)$.