We have a matrix inequality defined as follows: $$F(\Gamma)=I-\sum_{i=1}^K \lambda_i H_i\succeq 0,$$ where $\Gamma=\{\lambda_1,\lambda_2,\cdots,\lambda_K\}$ and the $H_i$ are symmetric matrices. In one of the papers, the authors provide a subgradient of $F(\Gamma)$ as follows.
The positive semidefinite constraint can be equivalently expressed as a scalar inequality constraint: $$\pi(\Gamma)=\min_{\|\xi\|=1}\xi\, F(\Gamma)\,\xi^{H}\geq 0,$$ where, by the Rayleigh-quotient characterization, $\pi(\Gamma)$ is the smallest eigenvalue of $F(\Gamma)$.
Given a query point $\Gamma_1=\{\lambda_{1,1},\lambda_{1,2},\cdots,\lambda_{1,K}\}$, one can find the normalized eigenvector $\nu_1$ of $F(\Gamma_1)$ corresponding to the smallest eigenvalue of $F(\Gamma_1)$. Consequently, we can determine the value of the scalar constraint at the query point as $\pi(\Gamma_1)=\nu_1 F(\Gamma_1)\,\nu_1^{H}$. To obtain a subgradient, we have $$\pi(\Gamma)-\pi(\Gamma_1)=\min_{\|\xi\|=1}\xi\, F(\Gamma)\,\xi^{H}-\nu_1 F(\Gamma_1)\,\nu_1^{H}$$ $$\leq \nu_1\,[F(\Gamma)-F(\Gamma_1)]\,\nu_1^{H}~~ \text{(how to show that this inequality is true? and how is it a subgradient? are we not supposed to divide by the interval }\Gamma_1-\Gamma\text{?)}$$ $$=\sum_{i=1}^K(\lambda_{1,i}-\lambda_i)\,\nu_1 H_i\,\nu_1^{H},$$ where the last equality follows from the affine structure of $F(\cdot)$. By the weak subgradient calculus, the subgradient of $F(\Gamma)$ at the given $\Gamma$ is then $$[\nu H_1\nu^H,~\nu H_2\nu^H,\cdots,\nu H_K\nu^H],$$ where $\nu$ is the eigenvector corresponding to the smallest eigenvalue of $F(\Gamma)$.
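For concreteness, the claim that the scalar constraint $\pi(\Gamma)$ is attained by the eigenvector of the smallest eigenvalue can be checked numerically. A minimal NumPy sketch, with a random symmetric matrix standing in for $F(\Gamma)$ (all names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
F = (A + A.T) / 2          # a random symmetric matrix standing in for F(Gamma)

# eigh returns eigenvalues in ascending order; w[0] is the smallest
w, V = np.linalg.eigh(F)
v = V[:, 0]                # normalized eigenvector for the smallest eigenvalue

# the eigenvector attains the minimum of the Rayleigh quotient
assert np.isclose(v @ F @ v, w[0])

# no random unit vector does better than the smallest eigenvalue
for _ in range(2000):
    xi = rng.standard_normal(n)
    xi /= np.linalg.norm(xi)
    assert xi @ F @ xi >= w[0] - 1e-12
print("pi equals the smallest eigenvalue")
```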
I'm going to try to prove the statement that \[g(\Gamma_1) = [\nu_1 H_1\nu_1^H,\ \nu_1 H_2\nu_1^H,\ldots,\ \nu_1 H_K\nu_1^H]^T\] is a subgradient of the constraint function at $\Gamma_1$, where $\nu_1$ is the normalized eigenvector corresponding to the smallest eigenvalue of $F(\Gamma_1)$. I write $g(\Gamma_1)$ to make explicit the dependency of $g$ on $\Gamma_1$.
A subgradient of $f: \mathbb{R}^n \rightarrow \mathbb{R}$ at a point $x \in \mathbb{R}^n$ is defined as a vector $g \in \mathbb{R}^n$ such that \[f(y) \geq f(x) + g^T (y-x), \quad \forall y \in \mathbb{R}^n, \] or equivalently, \[g^T (y-x) \leq f(y) - f(x), \quad \forall y \in \mathbb{R}^n, \] which means that the affine function $y \mapsto f(x) + g^T(y-x)$ is a global underestimator of $f$, no matter how far $y$ is from $x$. In particular, no division by the increment $\Gamma - \Gamma_1$ is involved: the subgradient inequality is stated directly in terms of differences, which answers your last question.
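To make the definition concrete, here is a small numeric illustration (my own, not from the paper) using $f(x)=|x|$, which is convex but not differentiable at $0$: every $g\in[-1,1]$ satisfies the subgradient inequality there, while $g=1.5$ does not.

```python
import numpy as np

# f(x) = |x| is convex but not differentiable at x = 0
f = abs
x = 0.0

# any g in [-1, 1] satisfies f(y) >= f(x) + g*(y - x) for all y,
# so each such g is a subgradient of f at 0
for g in (-1.0, -0.3, 0.0, 0.5, 1.0):
    for y in np.linspace(-5, 5, 101):
        assert f(y) >= f(x) + g * (y - x) - 1e-12

# g = 1.5 is not a subgradient at 0: the affine minorant overshoots at y = 1
assert f(1.0) < f(x) + 1.5 * (1.0 - x)
print("subgradient inequality illustrated for f(x) = |x|")
```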
As such, following what you wrote, we have \begin{equation} \begin{split} \pi(\Gamma) - \pi(\Gamma_1) &= \min_{\|\xi\|=1}\xi\, F(\Gamma)\,\xi^{H}-\nu_1 F(\Gamma_1)\,\nu_1^{H} \\ &\leq \nu_1 F(\Gamma)\,\nu_1^{H}-\nu_1 F(\Gamma_1)\,\nu_1^{H} \\ &= \nu_1\,[F(\Gamma)-F(\Gamma_1)]\,\nu_1^{H} \\ &= - \left( \sum_{i=1}^K\lambda_{i}\,\nu_1 H_i\,\nu_1^{H} -\sum_{i=1}^K\lambda_{1,i}\,\nu_1 H_i\,\nu_1^{H}\right), \end{split} \end{equation} where the inequality holds because $\nu_1$ is one admissible unit-norm choice of $\xi$, so the minimum over all such $\xi$ can only be smaller or equal; this answers your first question. Letting \[g(\Gamma_1) = [\nu_1 H_1\nu_1^H,\ \nu_1 H_2\nu_1^H,\ldots,\ \nu_1 H_K\nu_1^H]^T\] and replacing the summations with inner products, we have \begin{equation} \begin{split} \pi(\Gamma) - \pi(\Gamma_1) &\leq - \left(g(\Gamma_1)^T\Gamma - g(\Gamma_1)^T\Gamma_1\right) \\ &= -g(\Gamma_1)^T \left(\Gamma - \Gamma_1\right), \end{split} \end{equation} hence \[\pi(\Gamma) \leq \pi(\Gamma_1) - g(\Gamma_1)^T \left(\Gamma - \Gamma_1\right), \quad \forall \Gamma.\] Note the direction of the inequality: $\pi$ is the pointwise minimum of functions affine in $\Gamma$ and is therefore concave, so what we have shown is that $-g(\Gamma_1)$ is a supergradient of $\pi$ at $\Gamma_1$, or equivalently that $g(\Gamma_1)$ is a subgradient of the convex function $-\pi$ at $\Gamma_1$.
Since the constraint $\pi(\Gamma) \geq 0$ is equivalent to the convex constraint $-\pi(\Gamma) \leq 0$, which in turn encodes $F(\Gamma) \succeq 0$, $g(\Gamma_1)$ is indeed the subgradient (of the convex constraint function $-\pi$) that the paper attributes to $F(\cdot)$ at $\Gamma_1$.
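As a sanity check, the bound $\pi(\Gamma) \leq \pi(\Gamma_1) - g(\Gamma_1)^T(\Gamma - \Gamma_1)$ derived in the chain above can be verified numerically. A minimal NumPy sketch with random symmetric $H_i$ (assumed data; any symmetric matrices work) and real vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 5, 3

# random symmetric H_i (assumed data for the sketch)
H = [(lambda A: (A + A.T) / 2)(rng.standard_normal((n, n))) for _ in range(K)]

def F(gam):
    """F(Gamma) = I - sum_i lambda_i H_i."""
    return np.eye(n) - sum(l * Hi for l, Hi in zip(gam, H))

def pi(gam):
    """Smallest eigenvalue of F(Gamma) = min over unit xi of xi F(Gamma) xi^H."""
    return np.linalg.eigvalsh(F(gam)).min()

gamma1 = rng.standard_normal(K)
w, V = np.linalg.eigh(F(gamma1))
v1 = V[:, 0]                               # unit eigenvector, smallest eigenvalue
g = np.array([v1 @ Hi @ v1 for Hi in H])   # g(Gamma_1)

# supergradient inequality for the concave pi:
# pi(Gamma) <= pi(Gamma_1) - g^T (Gamma - Gamma_1) for all Gamma
for _ in range(1000):
    gamma = rng.standard_normal(K)
    assert pi(gamma) <= pi(gamma1) - g @ (gamma - gamma1) + 1e-9
print("supergradient inequality verified")
```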