Let $\lambda>0$, and let $\mathbb{R}^{n\times n}$ be the space of real $n\times n$ matrices equipped with the Frobenius norm. Consider the set $$ D=\{A\in \mathbb{R}^{n\times n}\mid \textrm{$A$ is lower triangular and all eigenvalues of $A+A^\top$ are at least $\lambda$}\}. $$ Then $D$ is nonempty, closed and convex. Hence, by the Hilbert projection theorem, the orthogonal projection (with respect to the Frobenius inner product) $P: \mathbb{R}^{n\times n} \to D$ is well-defined: for every $A\in \mathbb{R}^{n\times n}$, $P(A)$ is the unique element of $D$ satisfying $$ P(A) =\arg\min_{B\in D} \|B-A\|_{\text{F}}. $$
Given $A\in \mathbb{R}^{n\times n}$, can $P(A)$ be computed analytically or numerically?
The question is mainly motivated by optimising a nonlinear function over the set $D$, for example the quadratic function $A\mapsto \|A-C\|^2_{\text{F}}$ for a given matrix $C$. The first method that came to my mind is a projected gradient method, which requires computing the projection. Projecting onto the space of lower triangular matrices is easy, but it is unclear how to combine this with the constraint on the eigenvalues of the symmetric part of $A$.
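For concreteness, each of the two constraints admits a closed-form Frobenius projection on its own; the difficulty is only in projecting onto their intersection. Here is a numpy sketch of the two individual projections (the function names are my own). For the spectral set, write $X = S + K$ with $S$ symmetric and $K$ antisymmetric; since $X + X^\top = 2S$, the constraint is $S \succeq (\lambda/2)I$, so one keeps $K$ and clips the eigenvalues of $S$ at $\lambda/2$:

```python
import numpy as np

def proj_lower_triangular(X):
    # Frobenius projection onto the subspace of lower triangular matrices:
    # zero out the strictly upper triangular entries.
    return np.tril(X)

def proj_spectral(X, lam):
    # Frobenius projection onto {X : X + X^T >= lam * I}.
    # The constraint only involves the symmetric part S, so keep the
    # antisymmetric part K and clip the eigenvalues of S at lam/2.
    S = (X + X.T) / 2
    K = (X - X.T) / 2
    w, V = np.linalg.eigh(S)
    w = np.maximum(w, lam / 2)
    return V @ np.diag(w) @ V.T + K
```

Note that applying these two maps in sequence does not, in general, give the projection onto the intersection; a scheme such as Dykstra's alternating projections or a semidefinite programming solver is needed for that.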
Rephrasing slightly, let
$$ \mathcal{D}_{\mu}^n := \left\{ {\bf X} \in \mathbb{R}^{n \times n} \mid {\bf X} \textrm{ is lower triangular and } {\bf X} + {\bf X}^\top \succeq \mu {\bf I}_n \right\}$$
so that $D = \mathcal{D}_{\lambda}^n$. Projecting a given matrix ${\bf A} \in \mathbb{R}^{n \times n}$ onto $\mathcal{D}_{\mu}^n$ amounts to solving the convex program
$$ \begin{array}{ll} \underset {{\bf X} \in \mathbb{R}^{n \times n}} {\text{minimize}} & \| {\bf X} - {\bf A} \|_{\text{F}}^2 \\ \text{subject to} & x_{ij} = 0, \quad \forall j > i\\ & {\bf X} + {\bf X}^\top \succeq \mu {\bf I}_n \end{array} $$
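This is a semidefinite program, so any SDP solver handles it directly. Alternatively, since $\mathcal{D}_{\mu}^n$ is the intersection of a subspace (lower triangular matrices) and a convex spectral set, Dykstra's alternating projection algorithm converges to the Frobenius projection onto the intersection using only the two easy projections. A numpy-only sketch (the fixed iteration count is an arbitrary choice of mine, with no convergence test):

```python
import numpy as np

def project_onto_D(A, mu, iters=2000):
    """Dykstra's algorithm for the Frobenius projection onto
    {X : X lower triangular, X + X^T >= mu * I}. Sketch only:
    fixed iteration budget, no stopping criterion."""
    def P_L(X):  # projection onto the lower triangular subspace
        return np.tril(X)

    def P_C(X):  # projection onto {X : X + X^T >= mu * I}
        S, K = (X + X.T) / 2, (X - X.T) / 2
        w, V = np.linalg.eigh(S)
        return V @ np.diag(np.maximum(w, mu / 2)) @ V.T + K

    X = A.copy()
    # Dykstra correction terms; needed because P_C is not a linear projection.
    P = np.zeros_like(A)
    Q = np.zeros_like(A)
    for _ in range(iters):
        Y = P_C(X + P)
        P = X + P - Y
        X = P_L(Y + Q)
        Q = Y + Q - X
    # X is exactly lower triangular; the spectral constraint holds up to
    # the convergence tolerance of the iteration.
    return X
```

The last step applies the lower-triangular projection, so the output satisfies that constraint exactly while the spectral constraint is met up to the convergence error.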