Here is the definition of the method of steepest descent given in the book "The mathematics of nonlinear programming" by Peressini.
Suppose $f(x)$ is a function with continuous partial derivatives on $R^n$ and that $x_0 \in R^n$. Then the steepest descent sequence $\{ x_k\}$ with initial point $x_0$ for minimizing $f(x)$ is given by the formula
$x_{k+1} = x_k - t_k\nabla f(x_k)$
where $t_k$ is the value of $t\geq 0$ that minimizes the function $\phi_k(t) = f(x_k-t\nabla f(x_k))$.
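For concreteness, here is a minimal sketch of this iteration in Python, assuming a quadratic $f(x) = \frac{1}{2}x^TAx$ with $A$ positive definite (so that each $\phi_k(t)$ does attain a minimum on $t \geq 0$); the matrix and starting point are made up for illustration, and the inner one-dimensional minimization uses `scipy.optimize.minimize_scalar`:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical example: f(x) = (1/2) x^T A x with A positive definite,
# so each phi_k(t) attains a minimum for t >= 0.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def steepest_descent(x0, n_steps=50, tol=1e-12):
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        # t_k minimizes phi_k(t) = f(x_k - t grad f(x_k)) over t >= 0
        phi = lambda t: f(x - t * g)
        t_k = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded").x
        x = x - t_k * g
    return x

x_star = steepest_descent([1.0, 1.0])
```

Here the minimizer of $f$ is $x^* = 0$, and the iterates converge to it; the point of the question is precisely that without such assumptions on $f$, the inner minimization may fail to have a solution.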
My question is: how do I know that such a $t_k$ minimizing $\phi_k(t)$ exists for each $k$?
Does the fact that $f(x)$ is in $C^1(R^n)$ somehow guarantee the existence of $t_k$?
I don't have Peressini in front of me, but something is missing, either in the book or in your transcription of it. Without some kind of lower bound on $f$ you indeed cannot guarantee that the step size exists, and it is easy to construct counterexamples. (Take $n=1$ and $f(x)=x$: then $\nabla f(x_k)=1$ and $\phi_k(t) = x_k - t$ is strictly decreasing on $t \geq 0$, so no minimizing $t_k$ exists.)