Let $f:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex function and $K\subseteq \mathbb{R}^n$ be a convex set. Consider the problem of minimizing $f(x)$ over $x \in K$. It is well known that, if $f$ is differentiable everywhere, then $x^\star$ solves the problem if and only if $\nabla f(x^\star)^T(y-x^\star)\geq 0$ for all $y \in K$.
Now suppose that $f$ is not differentiable and let $$\partial f(x)=\{g \in \mathbb{R}^n :~ f(y)\geq f(x)+g^T(y-x)~~\forall y \in \mathbb{R}^n\}$$ denote the subdifferential of $f$ at $x$. Are there necessary and sufficient conditions for optimality that mimic $\nabla f(x^\star)^T(y-x^\star)\geq 0,~\forall y \in K$, but use subdifferentials for nondifferentiable functions?
One way to think about this is through the indicator function. The point $x^*$ minimizes $h(x) = f(x) + I_K(x)$ iff $0 \in \partial h(x^*) = \partial f(x^*) + \partial I_K(x^*)$, which holds iff there exists $g\in \partial f(x^*)$ such that $-g \in \partial I_K(x^*)$, i.e., $-g$ lies in the normal cone to $K$ at $x^*$. Unwinding the definition of the normal cone, this is equivalent to: there exists $g \in \partial f(x^*)$ such that $g^T(y-x^*) \geq 0$ for all $y \in K$. Note the quantifier: only *some* subgradient needs to satisfy the inequality, not all of them.
(Note that this argument uses the subdifferential sum rule, which is valid here because $f$ is finite, hence continuous, on all of $\mathbb{R}^n$; in general the rule requires a regularity condition such as $\operatorname{ri}(\operatorname{dom} f)\cap\operatorname{ri}(K)\neq\emptyset$. It also uses the fact that the subdifferential of the convex indicator function $I_K$ at $x^*$ is the normal cone to $K$ at $x^*$.)
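As a concrete sanity check (my own illustrative example, not part of the argument above), take $f(x)=|x|$ and $K=[0,2]$, so $x^*=0$ and $\partial f(0)=[-1,1]$. The subgradient $g=0.5$ certifies optimality since $g(y-x^*)\geq 0$ for every $y\in K$, while another valid subgradient, $g=-1$, fails that inequality, which shows why the condition is existential. A quick numerical sketch:

```python
# Sanity check of the subgradient optimality condition for
# f(x) = |x| minimized over K = [0, 2]. Here x* = 0 and the
# subdifferential of |x| at 0 is the interval [-1, 1].

import numpy as np

x_star = 0.0
K = np.linspace(0.0, 2.0, 201)        # grid over the feasible set K
reals = np.linspace(-5.0, 5.0, 1001)  # grid over R for the subgradient inequality

def f(x):
    return np.abs(x)

def is_subgradient(g, x):
    # g is a subgradient of f at x iff f(y) >= f(x) + g*(y - x) for all y.
    return bool(np.all(f(reals) >= f(x) + g * (reals - x) - 1e-12))

def certifies_optimality(g):
    # The optimality condition: g^T (y - x*) >= 0 for all y in K.
    return bool(np.all(g * (K - x_star) >= -1e-12))

# g = 0.5 is a subgradient at x* = 0 and certifies optimality...
assert is_subgradient(0.5, x_star) and certifies_optimality(0.5)
# ...while g = -1 is also a subgradient at x* = 0 but does NOT
# certify optimality, illustrating the "there exists g" quantifier.
assert is_subgradient(-1.0, x_star) and not certifies_optimality(-1.0)
print("ok")
```

The small tolerances (`1e-12`) only guard against floating-point round-off on the grid; the grid check is of course not a proof, just an illustration of the condition.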