Consider the convex optimization problem $$\min_{x\in X} f(x),$$ where $X\subset\Bbb R^n$ is convex and compact and $f:\Bbb R^n\to \Bbb R$ is convex. When studying the complexity of a certain algorithm $A$ for this problem, one usually derives an upper bound on a certain error $\epsilon_k$ (ideally $=0$), where $k$ is the iteration number. Hence we have a statement of the form $\epsilon_k\leq g(k)$, for some function $g$ such that $g(k)\to 0$ as $k\to\infty.$ Then, in order to analyze whether the method is efficient, one usually proceeds to find a lower complexity bound, which is usually a statement of the following type:

$\textbf{Statement:}$ For any $1\leq k \leq n$ (hence $k$ is fixed), there exist a function $f$ and a compact convex set $X$ such that, when we apply method $A,$ we have $$\epsilon_k \geq h(k).$$

My doubt lies in this Statement: how does it solve the problem of the lower complexity? It seems to me that the right statement should be:

$\textbf{Statement Modified:}$ There exist a function $f$ and a compact convex set $X$ such that, when we apply method $A,$ we have, $\textbf{for all }k\in \Bbb N,$ $$\epsilon_k \geq h(k).$$
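To make the difference explicit, here is how I read the two versions in quantifier notation (writing $\epsilon_k(f,X;A)$ for the error of method $A$ on the instance $(f,X)$ at iteration $k$; this notation is mine, just to expose the quantifier order):

$$\textbf{Statement:}\quad \forall k\in\{1,\dots,n\}\ \ \exists\,(f,X):\quad \epsilon_k(f,X;A)\ \geq\ h(k),$$

$$\textbf{Statement Modified:}\quad \exists\,(f,X)\ \ \forall k\in\Bbb N:\quad \epsilon_k(f,X;A)\ \geq\ h(k).$$

In the first version the hard instance $(f,X)$ is allowed to depend on $k$; in the second, a single instance is hard at every iteration simultaneously.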
EDIT: Following a suggestion of Brian Borchers, I added a reference in which such statements can be found.
In Nesterov's 'Introductory Lectures on Convex Optimization' you can find:

