I am studying convex optimization and came across the following theorem about minimizers of a function on a set $\mathcal{X}$. The theorem states that if $f:\mathcal{X} \rightarrow \mathbb{R}$ is a convex and differentiable function, then $x^*$ minimizes $f$ on $\mathcal{X}$ if and only if:
\begin{equation} \langle\nabla f(x^*),x-x^*\rangle \geq 0 \quad \text{for all } x\in \mathcal{X}. \end{equation}
However, I remember that when studying $\mathbb{R}^n$, $\nabla f(x^*) = 0$ was a sufficient condition for $x^*$ to be a minimizer. So my question is: in which cases does $x^*$ being a minimizer imply $\nabla f(x^*) = 0$? Thank you.
Assume that $\mathcal{X}$ is a vector space. Your condition easily implies $\nabla f(x^*) = 0$ (in $\mathcal{X}^*$). Indeed, for any $x \in \mathcal{X}$, you have $x + x^* \in \mathcal{X}$. Hence, $$\langle \nabla f(x^*), x\rangle = \langle \nabla f(x^*), (x + x^*) - x^* \rangle \ge 0.$$ Similarly, since $-x + x^* \in \mathcal{X}$ as well, $$\langle \nabla f(x^*), -x\rangle \ge 0.$$ This gives $\langle \nabla f(x^*), x \rangle = 0$ for all $x \in \mathcal{X}$, and thus $\nabla f(x^*) = 0$ in $\mathcal{X}^*$.
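You can sanity-check this numerically in the unconstrained case. A minimal sketch in Python (the matrix $A$, vector $b$, and least-squares objective are my own illustration, not from the question): minimize the convex, differentiable quadratic $f(x) = \|Ax - b\|^2$ over all of $\mathbb{R}^n$ and observe that the gradient vanishes at the minimizer, so the variational inequality holds with equality for every $x$.

```python
import numpy as np

# Illustration (my own choice of data): f(x) = ||Ax - b||^2 is convex and
# differentiable on all of R^n, a vector space.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

# Unconstrained minimizer via the normal equations: A^T A x* = A^T b.
x_star = np.linalg.solve(A.T @ A, A.T @ b)

# Gradient of f at x*: 2 A^T (A x* - b). It is (numerically) zero.
grad = 2 * A.T @ (A @ x_star - b)
print(np.linalg.norm(grad))

# Consequently <grad f(x*), x - x*> = 0 (up to rounding) for arbitrary x,
# consistent with the variational inequality holding with equality.
x = rng.standard_normal(3)
print(grad @ (x - x_star))
```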
If $\mathcal{X}$ is only a convex subset of a vector space, this is no longer the case: the minimizer can sit on the boundary with a nonzero gradient. For example, take $f(x) = (x-2)^2$ on $\mathcal{X} = [0,1]$. The minimizer is $x^* = 1$, yet $f'(1) = -2 \neq 0$, while the variational inequality still holds, since $f'(1)(x - 1) = -2(x-1) \geq 0$ for all $x \in [0,1]$.
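A quick numerical check of a counterexample of this kind (a sketch with data of my own choosing, not from the answer): minimize $f(x) = (x-2)^2$ over $[0,1]$. The constrained minimizer is the right endpoint, where the derivative is nonzero, yet the variational inequality holds at every feasible point.

```python
import numpy as np

# Illustration (my own choice): f(x) = (x - 2)^2 on X = [0, 1].
# f is convex and differentiable; its unconstrained minimum is at x = 2, outside X.
f = lambda x: (x - 2.0) ** 2
f_prime = lambda x: 2.0 * (x - 2.0)

x_star = 1.0            # minimizer of f over [0, 1]: the right endpoint
print(f_prime(x_star))  # the derivative does NOT vanish at the constrained minimizer

# Variational inequality f'(x*) (x - x*) >= 0 still holds for every x in [0, 1],
# because x - x* <= 0 there and f'(x*) < 0.
xs = np.linspace(0.0, 1.0, 101)
print(bool(np.all(f_prime(x_star) * (xs - x_star) >= 0)))
```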