In semidefinite programming (SDP), you might have an optimization problem where $A \succeq B^{-1} \succeq 0$ is a constraint, which implies that $A_{ii} \geq (B^{-1})_{ii}$ for all $i$ (since $A - B^{-1} \succeq 0$, and the diagonal entries of a positive semidefinite matrix are nonnegative). In some cases, $B$ may not be invertible, and to handle such situations, I'm considering the use of the Moore-Penrose pseudoinverse of $B$, denoted $B^{\dagger}$.
Here's a concrete example to illustrate my concern:
Let's say we have matrix $B$ as follows:
$B = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$
Computing the pseudo-inverse of $B$, we get:
$B^{\dagger} = \begin{bmatrix} 1 & -0.5 & -0.5 \\ -0.5 & 0.5 & 0.5 \\ -0.5 & 0.5 & 0.5 \end{bmatrix}$
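As a sanity check, this pseudoinverse can be reproduced numerically; a quick sketch using NumPy (`numpy.linalg.pinv` computes $B^{\dagger}$ via the SVD), which also verifies the four Moore-Penrose conditions:

```python
import numpy as np

B = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])

B_pinv = np.linalg.pinv(B)

# The matrix quoted above:
expected = np.array([[ 1.0, -0.5, -0.5],
                     [-0.5,  0.5,  0.5],
                     [-0.5,  0.5,  0.5]])
assert np.allclose(B_pinv, expected)

# The four Moore-Penrose conditions characterizing B^dagger:
assert np.allclose(B @ B_pinv @ B, B)            # B B+ B = B
assert np.allclose(B_pinv @ B @ B_pinv, B_pinv)  # B+ B B+ = B+
assert np.allclose(B @ B_pinv, (B @ B_pinv).T)   # B B+ symmetric
assert np.allclose(B_pinv @ B, (B_pinv @ B).T)   # B+ B symmetric
```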
Observation: The diagonal entries of $B^{\dagger}$ are smaller than one might expect. Since $B$ is singular, intuition suggests that "inverting" it should produce large entries, but the pseudoinverse zeroes out the null-space direction rather than blowing it up.
My query is whether the inequality $A_{ii} \geq (B^{\dagger})_{ii}$ still holds when $B^{-1}$ is replaced by $B^{\dagger}$ for a non-invertible $B$ such as this one. (For the sake of argument, assume $B \neq 0$.) If this inequality remains valid, it would provide a valuable tool for handling such cases in mathematical modeling.
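For what it's worth, if the constraint is read as $A - B^{\dagger} \succeq 0$, the diagonal inequality follows immediately by the same PSD-diagonal argument as in the invertible case. The sketch below checks this numerically; the matrices $A = B^{\dagger} + MM^{\top}$ are just hypothetical feasible points constructed for illustration, not part of the original problem:

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
B_pinv = np.linalg.pinv(B)

# Sample matrices A with A - B_pinv PSD by construction (M M^T is PSD),
# and check the entrywise diagonal inequality A_ii >= (B+)_ii.
for _ in range(1000):
    M = rng.standard_normal((3, 3))
    A = B_pinv + M @ M.T
    assert np.all(np.diag(A) >= np.diag(B_pinv) - 1e-12)
```

Whether the original SDP constraint actually reduces to $A \succeq B^{\dagger}$ when $B$ is singular is, of course, the substance of the question.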