Gradient descent reduces the value of the objective function at each iteration, and this is repeated until convergence. Does the norm of the gradient also have to decrease at every iteration of gradient descent?
Edit: What if the objective is a convex function?
No. Take for example $f(x)=\sqrt{|x|}$. While $f(x)$ decreases monotonically at each step, assuming the steps are small enough (or if you apply backtracking to enforce monotonicity), the gradient $f'(x) = \frac{\mathrm{sign}(x)}{2\sqrt{|x|}} = \frac{\mathrm{sign}(x)}{2 f(x)}$ grows in modulus as the current iterate $x$ approaches the solution $x=0$.
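Here is a minimal numerical sketch (not from the original answer; the starting point and fixed step size are arbitrary choices) that runs gradient descent on $f(x)=\sqrt{|x|}$ and prints both $f(x)$ and $|f'(x)|$: the objective keeps decreasing while the gradient norm keeps growing.

```python
import numpy as np

# Gradient descent on f(x) = sqrt(|x|) with a small fixed step.
# The objective value decreases at every iteration, but the gradient
# magnitude |f'(x)| = 1 / (2 sqrt(|x|)) grows as x approaches 0.

def f(x):
    return np.sqrt(np.abs(x))

def grad(x):
    return np.sign(x) / (2.0 * np.sqrt(np.abs(x)))

x = 1.0       # starting point (arbitrary)
step = 0.05   # fixed step size, small enough to keep f monotone here
for k in range(26):
    if k % 5 == 0:
        print(f"iter {k:2d}: x = {x:.4f}, f(x) = {f(x):.4f}, |f'(x)| = {abs(grad(x)):.4f}")
    x -= step * grad(x)
```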
If you don't like the kink at $0$ in this example, another simple one where the gradient norm does not decrease monotonically is $f(x) = 1-\cos(x)$ for $x\in (-\pi, \pi)$ (which is $C^\infty$): starting close enough to $\pm\pi$, $|f'(x)| = |\sin(x)|$ grows until the iterate passes $\pm\pi/2$, even though $f(x)$ decreases at every step.
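The same kind of sketch for $1-\cos(x)$ (again with an arbitrary starting point near $\pi$ and a fixed step of $0.5$, small enough to keep the objective decreasing) shows the gradient norm rising and then falling:

```python
import numpy as np

# Gradient descent on f(x) = 1 - cos(x), started near the right end of (-pi, pi).
# f(x) decreases at every iteration, but |f'(x)| = |sin(x)| first increases
# (until the iterate crosses pi/2) and only then decreases.

def grad(x):
    return np.sin(x)

x = 3.0      # starting point near pi (arbitrary)
step = 0.5   # fixed step size (arbitrary, small enough for monotone decrease)
prev = abs(grad(x))
for k in range(12):
    x -= step * grad(x)
    g = abs(grad(x))
    trend = "up" if g > prev else "down"
    print(f"iter {k:2d}: x = {x:.4f}, f(x) = {1 - np.cos(x):.4f}, |f'(x)| = {g:.4f} ({trend})")
    prev = g
```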