Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex function which is not differentiable everywhere and let $K\subseteq \mathbb{R}^n$ be a convex set. Consider the problem of minimizing $f(x)$ subject to $x \in K$. Is it true that $x^\star$ is an optimal solution in $K$ if and only if $$g^T(y-x^\star)\geq 0, \quad \forall y \in K, \forall g \in \partial f(x^\star),$$ where $\partial f(x^\star)=\{g \in \mathbb{R}^n \mid f(y)\geq f(x^\star)+g^T(y-x^\star) ~~\forall y \in \mathbb{R}^n\}$ is the set of subgradients of $f$ at $x^\star$?
I can only prove one direction. Assume $g^T(y-x^\star)\geq 0$ for all $y \in K$ and all $g \in \partial f(x^\star)$. Then, by the definition of $\partial f(x^\star)$, for every $y \in K$ we have $f(y) \geq f(x^\star) + g^T(y-x^\star) \geq f(x^\star)$, so $x^\star$ is optimal. What about the other direction, that is, proving that optimality of $x^\star$ implies $g^T(y-x^\star)\geq 0$ for all $y \in K$ and all $g \in \partial f(x^\star)$? Is it even true?
No, and here is a counterexample. Let $n=1$, $f(x) = |x|$, and $K=[0,1]$. Then $x^\star=0$ minimizes $f$ over $K$, but $g = -1 \in \partial f(0) = [-1,1]$ and $g\cdot(1-0) = -1 <0$. The correct optimality condition replaces the universal quantifier over $g$ with an existential one: $x^\star$ is optimal if and only if there *exists* $g \in \partial f(x^\star)$ such that $g^T(y-x^\star)\geq 0$ for all $y \in K$. In the example above, $g = 1 \in \partial f(0)$ satisfies $g\cdot(y-0) = y \geq 0$ for every $y \in [0,1]$.
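The counterexample is easy to verify numerically. The sketch below (grid sizes and sample ranges are arbitrary choices, not part of the original argument) checks that $g=-1$ satisfies the subgradient inequality for $f(x)=|x|$ at $x^\star=0$, that $x^\star=0$ is indeed optimal over $K=[0,1]$, and that the proposed condition nevertheless fails at $y=1$:

```python
# Counterexample data: f(x) = |x|, K = [0, 1], x_star = 0.
f = abs
x_star = 0.0
g = -1.0  # candidate subgradient of |x| at 0

# g is a subgradient iff f(y) >= f(x_star) + g*(y - x_star) for all y;
# check it on a sample grid over [-5, 5].
ys = [i / 10 - 5.0 for i in range(101)]
is_subgradient = all(f(y) >= f(x_star) + g * (y - x_star) for y in ys)

# x_star = 0 minimizes f over K = [0, 1] (checked on a grid over K).
K_grid = [i / 100 for i in range(101)]
is_optimal = all(f(x_star) <= f(y) for y in K_grid)

# Yet the claimed necessary condition g*(y - x_star) >= 0 fails at y = 1.
violated = g * (1.0 - x_star) < 0

print(is_subgradient, is_optimal, violated)  # True True True
```

Of course, checking the subgradient inequality on a finite grid is not a proof; for $f(x)=|x|$ the inequality $|y| \geq -y$ holds for all $y$ by inspection, so the grid check merely illustrates it.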