Consider the non-smooth optimization problem
\begin{equation}
\min_{x \in \mathbb{R}^n} f(x).
\end{equation}
To solve this problem, I am using subgradient descent with a fixed step size:
\begin{equation}
x_{t+1} = x_t - \eta g_t, \qquad g_t \in \partial f(x_t).
\end{equation}
My question is: does the subgradient $\partial f(x)$ vanish in a neighborhood of the optimal solution if the learning rate $\eta$ is fixed?
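In code, the iteration I have in mind looks roughly like this (a minimal sketch; the name `subgrad`, an oracle returning one element of $\partial f(x)$, is my own choice and not part of the problem statement):

```python
def subgradient_descent(subgrad, x0, eta, steps):
    """Run the fixed-step iteration x_{t+1} = x_t - eta * g_t,
    where g_t = subgrad(x_t) is some element of the subdifferential."""
    x = x0
    for _ in range(steps):
        x = x - eta * subgrad(x)  # note: eta stays constant throughout
    return x
```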
Short answer: no. In general, you would not expect the subgradient to vanish in a neighborhood of a minimizer $x^*$. All you can say is that $0 \in \partial f(x^*)$.
Indeed, consider the absolute-value function $f : x \mapsto |x|$. A simple calculation reveals that $\partial f(x) = \{\operatorname{sign}(x)\}$ for $x \neq 0$ and $\partial f(0) = [-1, 1]$. We also know (from purely algebraic considerations) that $f$ attains its global minimum of $0$ at the origin. However, no amount of fixed-step subgradient descent will take you there (provably!), as @littleO already said. In particular, $\partial f$ never vanishes, no matter how close you get to $0$ without hitting it.
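You can check this numerically. A minimal sketch (the starting point and step size below are arbitrary choices of mine, picked so the iterates never land exactly on $0$):

```python
def sign(x):
    # One element of the subdifferential of |x|; sign(0) = 0 is the
    # choice 0 ∈ ∂f(0) = [-1, 1].
    return (x > 0) - (x < 0)

x, eta = 1.0, 0.375  # fixed step size
trajectory = [x]
for _ in range(20):
    x = x - eta * sign(x)  # x_{t+1} = x_t - eta * sign(x_t)
    trajectory.append(x)

# The late iterates bounce between two points on opposite sides of 0
# (here 0.25 and -0.125), and |sign(x_t)| = 1 at every iterate:
# the subgradient never vanishes along the trajectory.
```

With a fixed $\eta$, the iterates eventually oscillate within a band of width $\eta$ around $0$ but never converge to it; this is exactly why subgradient methods use diminishing step sizes when exact convergence is wanted.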