I'm rather new to Markov chains and I was reading about them in the book "Randomized Algorithms" by Motwani and Raghavan when I stumbled upon a somewhat confusing definition. I tried looking for similar questions, but they all seem to involve additional concepts, and I would like feedback on this book-specific case to assess whether my understanding is correct.
Specifically, I got confused by the following extract about the definition of $h$, the expected number of steps needed to get from a state $i$ to a state $j$. The setting is that of discrete-time Markov chains with a finite or countable number of states, associated with a transition matrix $P$ (which is row-stochastic). I report here the relevant extract of the text.
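For readers without the book at hand, the quantities involved are, up to notation, the following (this is my paraphrase of the extract, not the exact text):

```latex
% r_ij^(t): probability that, starting from i, the chain visits j
%           for the FIRST time at exactly step t
% f_ij:     probability of ever visiting j when starting from i
% h_ij:     expected number of steps to reach j from i
\[
  f_{ij} = \sum_{t>0} r_{ij}^{(t)},
  \qquad
  h_{ij} = \sum_{t>0} t \cdot r_{ij}^{(t)},
\]
```

and the last sentence of the extract states that $h_{ij} = \infty$ whenever $f_{ij} < 1$.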
As you can see, $h$ follows the standard definition of an expected value, but I am not able to fully grasp the last sentence. To me it reads like a logical deduction the author makes from the definition of $h$, rather than a special case imposed on top of it (though I may be wrong, since English is not my first language), and this deduction does not seem to hold.
Intuitively, it makes sense that whenever there is a positive probability of reaching a region from which $j$ can never be reached, the expected time to reach $j$ should grow indefinitely. However, looking at the formula for $h$, I could rather easily come up with an example that seems to falsify the authors' claim. One such example is the following graph, where we assume uniform probability over the outgoing edges (i.e. $1/2$ for all the edges except the self-loop).
In this case it appears to me that $f_{ij} = 1/2$, since starting from $i$ I reach $j$ only half of the time, and hence $f_{ij} < 1$; yet $h_{ij} = 1/2 \neq \infty$, because $r_{ij}^{(1)} = 1/2 > 0$ while $r_{ij}^{(t)} = 0$ for all $t > 1$ (it is impossible to end up in $j$ for the first time in exactly $t > 1$ steps when starting from $i$). Here the issue is that $r_{ij}^{(t)}$ becomes exactly $0$ for all $t > t_0$ for some $t_0$, which prevents infinitely many positive contributions from accumulating in $h_{ij}$. But even when there are infinitely many paths from $i$ to $j$ (though not all walks reach $j$, since $f_{ij} < 1$), I suspect we may still end up with a convergent sum for $h_{ij}$, which would again contradict the stated claim.
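To make the computation concrete, here is a minimal numeric sketch. It assumes the graph is the three-state chain suggested by the description (my reading of the picture, not stated in the text): $i \to j$ and $i \to k$ each with probability $1/2$, and $k$ absorbing via the self-loop. It accumulates both $f_{ij}$ and the plain sum $\sum_t t \cdot r_{ij}^{(t)}$:

```python
# States: 0 = i, 1 = j, 2 = k. Assumed transition matrix for the example:
# from i we move to j or to k with probability 1/2 each; k is absorbing.
P = [
    [0.0, 0.5, 0.5],  # from i
    [0.0, 1.0, 0.0],  # j made absorbing, just to track first visits
    [0.0, 0.0, 1.0],  # k -> k (the self-loop)
]

dist = [1.0, 0.0, 0.0]  # start in i
f_ij = 0.0     # sum_t r_ij^(t): probability of ever reaching j
naive_h = 0.0  # the plain sum  sum_t t * r_ij^(t)
for t in range(1, 100):
    new = [sum(dist[s] * P[s][u] for s in range(3)) for u in range(3)]
    r_t = new[1] - dist[1]  # first-passage probability at exactly step t
    f_ij += r_t
    naive_h += t * r_t
    dist = new

print(f_ij, naive_h)  # prints 0.5 0.5
```

Both sums converge to $1/2$, which is exactly the tension described above: the formula alone gives a finite value even though $f_{ij} < 1$.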
So my doubt is mainly the following: is the definition in the book somewhat imprecise, so that the claim $f_{ij} < 1 \implies h_{ij} = \infty$ is a convention rather than a consequence of the formula, or am I getting something wrong in my reasoning? Any hint would be much appreciated.

Thanks to the answer by @jjcluu, the comment by @Ian, and a look at other definitions of the hitting time, I now understand that the definition given in the book is rather imprecise, leading to a clash between the intuition and the mathematical formula. I will summarize the main points here.
As suggested, a better definition is the following. Let $Y$ be the random variable $\inf\{t\ |\ X_t=j\}$; then
$$h_{ij}=\text{E}(\ Y\ |\ X_0 = i)$$
At this point we can observe that the event $Y=+\infty$, i.e. the random walk on the associated graph never visits $j$, happens with probability $1-f_{ij}$ when $X_0 = i$. Hence, when $f_{ij} < 1$, it follows that $\Pr[Y=+\infty \mid X_0=i] > 0$ and so $h_{ij} = +\infty$, which is the expected result and matches the intuition.
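The last step can be spelled out by splitting the expectation over the event $\{Y = +\infty\}$ (a standard computation, not taken from the book):

```latex
\[
  \mathrm{E}[\,Y \mid X_0 = i\,]
    \;=\; \sum_{t>0} t \cdot r_{ij}^{(t)}
    \;+\; (+\infty)\cdot\Pr[\,Y = +\infty \mid X_0 = i\,]
    \;=\; +\infty
  \quad\text{whenever } f_{ij} < 1,
\]
```

since the second term equals $(+\infty)\cdot(1-f_{ij})$ with $1-f_{ij} > 0$, while the book's formula keeps only the first, possibly finite, term.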