In a random walk, a path may return to the origin for the $r$-th time at the $n$-th step ($r$ is given but $n$ is not). Under this condition, the $n$ steps can be split into $r$ sections, where the $i$-th section ends at the $i$-th return ($i=1,\ldots,r$). Denote the lengths of the sections by $l_1,l_2,\ldots,l_r$, so that $l_1+l_2+\cdots+l_r=n$.
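To make the setup concrete, here is a minimal simulation sketch of the decomposition (the function name `sections_until_rth_return` is mine, purely for illustration): it runs a simple symmetric walk until its $r$-th return to the origin and records the section lengths between successive returns.

```python
import random

rng = random.Random(1)

def sections_until_rth_return(r):
    """Run a simple symmetric random walk until its r-th return to 0,
    returning the section lengths l_1, ..., l_r between successive returns."""
    lengths, pos, steps = [], 0, 0
    while len(lengths) < r:
        pos += rng.choice((-1, 1))  # one +/-1 step
        steps += 1
        if pos == 0:                # i-th return: close the current section
            lengths.append(steps)
            steps = 0
    return lengths

ls = sections_until_rth_return(5)
print(ls, sum(ls))  # l_1, ..., l_5 and n = l_1 + ... + l_5
```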
What concerns me is the independence of $l_1,l_2,\ldots,l_r$.
In the obvious sense, the random walk starts from scratch each time the path returns to the origin. Thus one might conclude that the section lengths are independent of each other. But the following excerpt from Feller's book (page 91) tells an unexpectedly different story (because if they were independent, $n/r$ would not depend on $r$, rather than being proportional to $r$).

The result quoted above comes entirely from calculation.
My question is: what key point did we overlook that led us to the intuitive but wrong conclusion?
I don't have a copy of the book so I can't see the full context, but I suspect the issue is with your statement "if they are independent, $n$ will be of the order of magnitude $r$". I imagine you are thinking that $l_1+\cdots+l_r$ will typically be about $r$ times the expectation of $l_1$.
This doesn't work, because for a simple symmetric random walk the expectation of $l_1$ is infinite. As $r$ increases, the average of $r$ independent copies therefore tends to infinity almost surely (so an order of $r^2$ for the sum is plausible; note that this is not the expectation of the sum, which is infinite, but a typical value).
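As a sanity check, a small simulation (mine, not from Feller; the cap on walk length is an artifact needed because the true return time has infinite mean) shows that the empirical average $n/r$ of independent return times keeps growing with $r$ instead of settling near a constant:

```python
import random

rng = random.Random(42)

def first_return_time(cap=10**6):
    """Steps until a simple symmetric random walk first returns to 0,
    truncated at `cap` (the untruncated return time has infinite mean)."""
    pos = 0
    for t in range(1, cap + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return cap  # walk had not returned within the cap

for r in (10, 100, 1000):
    n = sum(first_return_time() for _ in range(r))  # ~ time of r-th return
    print(f"r={r}: n/r = {n / r:.1f}")
```

Because the return-time distribution is heavy-tailed (tail $\sim t^{-1/2}$), the sample average is dominated by rare, very long sections, which is exactly why the law-of-large-numbers intuition fails here.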