There are several equivalent definitions of strong convexity; one of them is:
$f$ is strongly convex over $\mathcal{C}$ with coefficient $\sigma$ if $\forall x,y \in \mathcal{C}$ and all $\alpha \in [0,1]$, we have
$$f(\alpha x+(1-\alpha)y)+ \frac{\sigma}{2}\alpha(1-\alpha)\|x-y\|^2\leq\alpha f(x)+(1-\alpha)f(y)$$
In the textbook *Convex Optimization Algorithms* by Bertsekas, p. 471, it says:
There exists a unique $x^* \in \mathcal{C}$ minimizing $f$ over $\mathcal{C}$, and by applying the definition above with $y=x^*$ and letting $\alpha \rightarrow 0$, we have
$$f(x) \geq f(x^*)+\frac{\sigma}{2} \|x-x^*\|^2$$
I am questioning whether the conclusion is correct, since if $\alpha \rightarrow 0$, the term $\frac{\sigma}{2}\alpha(1-\alpha)\|x-y\|^2$ in the definition should vanish.

How does one get this result?
Setting $y=x^*$ gives, after a small rearrangement,
$$ f(x^*+\alpha(x-x^*))-f(x^*)\le\alpha\left[-\frac{\sigma}{2}(1-\alpha)\|x-x^*\|^2+f(x)-f(x^*)\right]. $$
The left-hand side is non-negative because $x^*$ minimizes $f$ over $\mathcal{C}$, so the bracketed term must be non-negative as well; dividing by $\alpha>0$ yields
$$ 0\le-\frac{\sigma}{2}(1-\alpha)\|x-x^*\|^2+f(x)-f(x^*). $$
Now let $\alpha\to 0$.
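As a sanity check, the rearranged inequality and the limiting conclusion can be verified numerically. This is a minimal sketch using the assumed example $f(x)=x^2$ on $\mathbb{R}$, which is strongly convex with $\sigma = 2$ and minimized at $x^* = 0$ (both the function and these constants are illustrative choices, not from the text):

```python
# Numeric check of the argument above for f(x) = x**2,
# strongly convex with sigma = 2, minimizer x_star = 0.
sigma = 2.0
f = lambda x: x ** 2
x_star = 0.0

x = 3.0  # an arbitrary test point
for alpha in [0.5, 0.1, 0.01, 0.001]:
    # Rearranged inequality, already divided by alpha > 0:
    # (f(x* + a(x - x*)) - f(x*)) / a  <=  -s/2 (1-a)|x-x*|^2 + f(x) - f(x*)
    lhs = (f(x_star + alpha * (x - x_star)) - f(x_star)) / alpha
    rhs = -sigma / 2 * (1 - alpha) * (x - x_star) ** 2 + f(x) - f(x_star)
    assert lhs <= rhs + 1e-12
    # As alpha -> 0, rhs approaches f(x) - f(x*) - sigma/2 * |x - x*|^2
    print(alpha, rhs)

# The limiting conclusion: f(x) >= f(x*) + sigma/2 * |x - x*|^2
assert f(x) >= f(x_star) + sigma / 2 * (x - x_star) ** 2
```

For this particular $f$ the inequality is tight (it holds with equality), which makes it a good boundary case for the argument.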