I am going through the notes of Stephen Boyd and I came across this. I cannot understand why this is a subgradient of the function in question. Please see the image attached below.
$ g = \frac{x - \Pi_{C_{j}}(x)}{\|x - \Pi_{C_{j}}(x)\|_2}$
Here $\Pi_{C_{j}}(x)$ is the Euclidean projection of $x$ onto the convex set $C_{j}$, i.e., the point of $C_{j}$ closest to the point $x$, which lies outside the set.
The denominator, $\|x - \Pi_{C_{j}}(x)\|_2$, is a Euclidean distance, and the numerator, $x - \Pi_{C_{j}}(x)$, is a vector. I cannot seem to wrap my head around why this is a subgradient.
It is not entirely clear what your question is asking.
We have $f(x) = \max_k d_{C_k}(x)$, where $d_{C_k}(x) = \min_{c \in C_k} \|x-c\|$. Each of the functions $d_{C_k}$ is regular, hence $\partial f(x) = \operatorname{co} \{ \partial d_{C_k}(x)\}_{k \in I(x)}$, where $I(x) = \{ k \mid f(x) = d_{C_k}(x) \}$ is the set of active indices (see Clarke, "Optimization and Nonsmooth Analysis", Proposition 2.3.12, for example).
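To make this concrete, here is a small numerical sketch (entirely my own construction, not from the notes): $f$ is the maximum of the distances to two Euclidean balls, and the vector $g$ built from the projection onto an active ball is checked against the subgradient inequality at random points.

```python
import numpy as np

# Two convex sets C_k: Euclidean balls B(center, radius) -- my own choice
# of example sets; projection onto a ball has a closed form.
def proj_ball(x, center, radius):
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

def dist_ball(x, center, radius):
    return max(0.0, np.linalg.norm(x - center) - radius)

balls = [(np.array([0.0, 0.0]), 1.0), (np.array([3.0, 0.0]), 1.0)]

def f(x):  # f(x) = max_k d_{C_k}(x)
    return max(dist_ball(x, c, r) for c, r in balls)

x = np.array([1.0, 2.0])
# Pick an active index: a set achieving the max distance.
k = int(np.argmax([dist_ball(x, c, r) for c, r in balls]))
c, r = balls[k]
p = proj_ball(x, c, r)                 # Pi_{C_k}(x)
g = (x - p) / np.linalg.norm(x - p)    # the claimed subgradient of f at x

# Check the subgradient inequality f(y) >= f(x) + <g, y - x> at random y.
rng = np.random.default_rng(0)
ok = all(f(y) >= f(x) + g @ (y - x) - 1e-12
         for y in rng.normal(size=(1000, 2)) * 3.0)
print(ok)
```

Swapping in any other convex sets with computable projections leaves the check unchanged.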
Hence to find a subgradient $g$ of $f$ it suffices to find a subgradient of any of the 'active' distances.
The notation in the linked picture incorrectly suggests that the distance function has a gradient. This is true a.e. by Rademacher's theorem, but not necessarily true everywhere. However, the given vector is a subgradient.
As an aside, to illustrate the previous point, let $C=\operatorname{epi} g \subset \mathbb{R}^2$ where $g(x) = 1+|x|$. Along the vertical axis we can compute $d_C((0,t)) = \max(0,\,1-t)$, which is not differentiable at $t=1$, i.e., at the boundary point $(0,1)$ of $C$. (For a closed convex $C$, $d_C$ is in fact differentiable at every point outside $C$, so non-differentiability can only occur on the boundary of $C$.)
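As a numerical sanity check of this kind of example (the grid search over the boundary is my own device), one can approximate $d_C$ along the vertical axis, where the distance works out to $\max(0,\,1-t)$ with a kink at the boundary point $(0,1)$:

```python
import numpy as np

# C = epi g with g(x) = 1 + |x|.  For a point (0, t) outside C the nearest
# point of C lies on the graph y = 1 + |x|, so a fine grid search over that
# boundary approximates d_C((0, t)).
def d_C(t, n=20001, span=10.0):
    if t >= 1.0:               # (0, t) already lies in C
        return 0.0
    xs = np.linspace(-span, span, n)           # grid contains x = 0 exactly
    boundary = np.stack([xs, 1.0 + np.abs(xs)], axis=1)
    return float(np.min(np.linalg.norm(boundary - np.array([0.0, t]), axis=1)))

# The distance along the vertical axis matches max(0, 1 - t).
for t in [-2.0, 0.0, 0.5, 1.0, 3.0]:
    assert abs(d_C(t) - max(0.0, 1.0 - t)) < 1e-3
print("formula confirmed on sample points")
```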
To see that the given vector is a subgradient, note that we can compute a subgradient of $d_C$ at $x_0$ in the following manner, noting that $C$ is contained in a suitable supporting halfspace $H$:
Suppose $d_C(x_0)=\|x_0-c_0\| >0$, where $c_0 \in C$ (that is, $c_0 = \Pi_C(x_0)$ using the notation in the attached picture). By the characterization of the projection, $C \subset H = \{ x \mid \langle x-c_0, c_0-x_0 \rangle \ge 0 \}$, hence $d_C(x) \ge d_H(x)$ for all $x$, and $d_H(x_0) = d_C(x_0)$.
A quick computation shows that $d_H(x) = \max\left(0, \langle {x_0-c_0 \over \|x_0-c_0\|} , x-c_0 \rangle\right)$, so \begin{eqnarray} d_C(x) \ge d_H(x) &\ge& \langle {x_0-c_0 \over \|x_0-c_0 \|} , x-c_0 \rangle \\ &=& \langle {x_0-c_0 \over \|x_0-c_0 \|} , x-x_0 \rangle + \langle {x_0-c_0 \over \|x_0-c_0 \|} , x_0-c_0 \rangle \\ &=& d_C(x_0) + \langle {x_0-c_0 \over \|x_0-c_0 \|} , x-x_0 \rangle \end{eqnarray} for all $x$. Hence ${x_0-c_0 \over \|x_0-c_0 \|} \in \partial d_C(x_0)$.
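This inequality is also easy to verify numerically. The sketch below (my own; it uses the closed unit Euclidean ball, for which $\Pi_C(x) = x/\max(1,\|x\|)$ and $d_C(x) = \max(0, \|x\|-1)$ have closed forms) checks $d_C(x) \ge d_C(x_0) + \langle g, x-x_0\rangle$ at random points:

```python
import numpy as np

# C = closed unit Euclidean ball in R^3 (my choice of example set);
# projection and distance have closed forms for this C.
def proj(x):
    return x / max(1.0, np.linalg.norm(x))

def d_C(x):
    return max(0.0, np.linalg.norm(x) - 1.0)

x0 = np.array([2.0, 1.0, -2.0])            # a point outside C
c0 = proj(x0)                              # c0 = Pi_C(x0)
g = (x0 - c0) / np.linalg.norm(x0 - c0)    # the claimed subgradient

# The subgradient inequality d_C(x) >= d_C(x0) + <g, x - x0>
# must hold for every x; test it on random points.
rng = np.random.default_rng(1)
ok = all(d_C(x) >= d_C(x0) + g @ (x - x0) - 1e-12
         for x in rng.normal(size=(2000, 3)) * 4.0)
print(ok)
```

Replacing the ball with any closed convex set whose projection is computable gives the same check.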