I am confused by the definition of a controllable pair $(A,B)$ from "Linear Systems" by Chen.

But can we get convergence in the continuous-time case in finite time? Consider a scalar system: $\frac{dx}{dt}=ax+bu$ (with $b\neq0$, so that it is controllable). With the feedback control $u=-kx$ ($k>0$), the state $x$ only converges to the origin exponentially fast, but never actually reaches zero. In the discrete-time case, since we only consider the state at discrete time instants, we can drive the state to zero in finite time.
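Writing out the closed-loop solution makes this explicit: with $u = -kx$ the closed loop is $\frac{dx}{dt} = (a-bk)x$, so

$$ x(t) = x(0)\, e^{(a-bk)t}, $$

which is nonzero for every finite $t$ whenever $x(0) \neq 0$. This is why pure proportional state feedback gives only asymptotic, never finite-time, convergence to the origin.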
Thanks a lot for your answers!
Assume, without loss of generality, that the initial condition $x(0) > 0$. Now assign the control
$$ u = r - k x $$
where $r$ is the reference value for the control system. Now set
$$ r = -1. $$
Then $x(t)$ will converge exponentially fast to a negative steady-state value (for the scalar example, the closed-loop equilibrium is $x_\infty = br/(bk-a)$, which is negative when $b > 0$ and $bk > a$; for $b < 0$ one would pick $r = +1$ instead). Because the initial condition was assumed to be positive, $x(t)$ will eventually cross the zero line. By continuity, there is a time $t^*$ at which $x(t^*) = 0$ holds exactly. Therefore, the origin can be reached exactly.
The same holds analogously for negative initial conditions.
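A quick numerical sketch of this argument (the parameter values $a$, $b$, $k$, the step size, and the initial condition below are my own illustrative choices, not from the answer): forward-Euler integration of the closed loop, stopping at the zero crossing.

```python
# Sketch: scalar plant dx/dt = a*x + b*u with the control u = r - k*x
# from the answer. With r = -1 the closed-loop equilibrium is negative,
# so a trajectory starting at x(0) > 0 must cross zero in finite time.

a, b = 1.0, 1.0       # plant parameters (b != 0, so the pair is controllable)
k, r = 3.0, -1.0      # feedback gain and constant reference (assumed values)
x, t, dt = 1.0, 0.0, 1e-4   # initial condition x(0) = 1, Euler step size

while x > 0.0:                   # integrate until x(t) crosses zero
    u = r - k * x                # control law from the answer
    x += dt * (a * x + b * u)    # forward-Euler step of dx/dt = a*x + b*u
    t += dt

t_star = t                       # first time with x(t) <= 0
print(f"x crossed zero at t* = {t_star:.3f}")
```

For these particular values the closed loop is $\dot{x} = -2x - 1$, so the exact crossing time is $t^* = \tfrac{1}{2}\ln 3 \approx 0.549$, and the simulation lands close to that.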
Notice that the definition only states that $x(t)$ is transferred to a final state, not that this state can actually be maintained, as mentioned by Cesareo.