Let $\text{SPD}^n$ and $\text{PD}^n$ be the symmetric positive semidefinite and symmetric positive definite matrices in $\mathbb{R}^{n\times n}$, respectively. I want to find an $X\in \textrm{SPD}^n$ that minimizes $||X||$ (the Frobenius norm) and simultaneously fulfills $$ \Sigma_1+X=\Sigma_2+Y~~,$$ where $Y\in \textrm{SPD}^n$ and $\Sigma_1$ and $\Sigma_2$ are given, symmetric, and in $\text{PD}^n$. I suspect this problem is symmetric between $X$ and $Y$, so I could equivalently ask for the pair of matrices in $\textrm{SPD}^n$ that minimizes $||X||^2+||Y||^2$. Note that if $\Sigma_1-\Sigma_2\in \textrm{SPD}^n$, then $X=0_{n\times n}$ and $Y=\Sigma_1-\Sigma_2$ is the solution. Also, we always have $||X||\leq||\Sigma_2||$, since $X=\Sigma_2$, $Y=\Sigma_1$ is a feasible pair.
The motivation is to find a "minimum" multidimensional Gaussian that could relate two independent Gaussian distributions after convolution. If $n=1$ and $\sigma_1^2\le\sigma^2_2$, you can always find a Gaussian with variance $\sigma_x^2\ge0$ that fulfills $\sigma_1^2+\sigma_x^2=\sigma_2^2$. The idea is to generalize this to $n>1$.
Following the suggestion of @AlgebraicPavel, let $S=\Sigma_1-\Sigma_2$, which is symmetric and can therefore be diagonalized as $$S=U^\textrm{T} DU~~,$$ with $D={\rm diag}(d_1,\ldots,d_n)$; without loss of generality we can assume that $d_1\ge\ldots\ge d_n$. The interesting case is when $d_n<0$, so let $d_k$ be the first negative eigenvalue. We propose $$ X=-U^\textrm{T}{\scriptsize\begin{pmatrix} 0&&\\ &\ddots&\\ && d_k \\ &&&\ddots\\ &&&&d_n\\ \end{pmatrix}}U~~,\tag{0}$$ that is, $X$ flips the sign of the negative eigenvalues of $S$ and zeroes out the rest.
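Eq. (0) is straightforward to compute numerically; here is a minimal sketch in Python/numpy (the helper name `negative_part` is mine; note that `numpy.linalg.eigh` returns the eigenvalues in ascending rather than descending order, which does not affect the result):

```python
import numpy as np

def negative_part(S):
    """Eq. (0): flip the sign of the negative eigenvalues of S
    and zero out the nonnegative ones."""
    d, U = np.linalg.eigh(S)               # S = U @ diag(d) @ U.T
    return (U * np.maximum(-d, 0.0)) @ U.T

# Small example with eigenvalues of both signs
S = np.array([[1.0, 2.0],
              [2.0, -3.0]])
X = negative_part(S)

# Both X and S + X are positive semidefinite (up to round-off)
assert np.all(np.linalg.eigvalsh(X) >= -1e-9)
assert np.all(np.linalg.eigvalsh(S + X) >= -1e-9)
```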
Clearly $X$ and $S+X$ are in ${\rm SPD}^n$, and this solution reduces to the solutions given in the simple cases mentioned in the question. We need to prove that $||X||^2$ (which equals $\sum_{j=k}^n d_j^2$) is the minimum norm among the solutions. To that end, let $X^\prime$ be another ${\rm SPD}^n$ matrix such that $S+X^\prime\in{\rm SPD}^n$, and let $u_j$ denote the eigenvector of $S$ associated with $d_j$ (the transposed $j$-th row of $U$). Since the Frobenius norm is orthogonally invariant and bounds the sum of squared diagonal entries, \begin{align} ||X^\prime||^2 &= ||UX^\prime U^{\rm T}||^2 \\ &\ge \sum_{j=k}^n (u_j^{\rm T}X^\prime u_j)^2~~. \end{align} Since $S+X^\prime\in {\rm SPD}^n$, we have $u_j^{\rm T}(S+X^\prime)u_j=d_j+u_j^{\rm T}X^\prime u_j\ge0$. Because $d_j\le0\le u_j^{\rm T}X^\prime u_j$ for $j\ge k$, it follows that $(u_j^{\rm T}X^\prime u_j)^2\ge d_j^2$. Therefore \begin{align} ||X^\prime||^2&\ge \sum_{j=k}^n d_j^2\\ &=||X||^2~~. \end{align} So $X$ minimizes the norm and is the solution to the problem.
The solution can also be obtained from the results of Higham (1988, "Computing a Nearest Symmetric Positive Semidefinite Matrix"; see 5.2P14 from Horn & Johnson, "Matrix Analysis"). This result shows that, for $Q$ real and symmetric, $x=Q_+$ is the solution to $$ \min_{x\ge 0}||x-Q||^2~~,\tag{1}$$ where $Q_+$ and $Q_-$ are the positive and negative semidefinite parts of $Q$, respectively, and $x\ge y$ denotes $x-y\in{\rm SPD}^n$. That is, if $Q=U^T\Lambda U$, then $Q_+=U^T\Lambda_+ U$ with $\Lambda_{+,i}=\max{\lbrace\lambda_i,0\rbrace}$, and $Q_-=U^T\Lambda_- U$ with $\Lambda_{-,i}=-\min{\lbrace\lambda_i,0\rbrace}$ (definition 4.1.12 from Horn & Johnson, "Matrix Analysis"). We also define $|Q|=Q_++Q_-$ and note that $Q=Q_+-Q_-$.
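Both semidefinite parts come from a single symmetric eigendecomposition; a minimal numpy sketch (the function names `pos_part`/`neg_part` are my own):

```python
import numpy as np

def pos_part(Q):
    """Q_+ = U^T Lambda_+ U: by Higham (1988), the nearest symmetric
    positive semidefinite matrix to Q in Frobenius norm."""
    lam, U = np.linalg.eigh(Q)
    return (U * np.maximum(lam, 0.0)) @ U.T

def neg_part(Q):
    """Q_- = U^T Lambda_- U, so that Q = Q_+ - Q_-."""
    lam, U = np.linalg.eigh(Q)
    return (U * np.maximum(-lam, 0.0)) @ U.T

Q = np.array([[0.0, 1.0],
              [1.0, -2.0]])
Qp, Qm = pos_part(Q), neg_part(Q)

assert np.allclose(Q, Qp - Qm)   # Q = Q_+ - Q_-
absQ = Qp + Qm                   # |Q| = Q_+ + Q_-
```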
Since $Q_+ \ge Q$, adding the constraint $x\ge Q$ does not exclude the minimizer of (1), so $x=Q_+$ also solves $$ \min_{x\ge 0,\,x\ge Q}||x-Q||^2~~.\tag{1a} $$ Note that the expression given in Eq. (0) is the solution to $ \min_{x\ge0,\,x\ge \Sigma_2-\Sigma_1}||x||^2$ (the constraints are exactly $x\ge0$ and $S+x\ge0$), which, as shown in the next paragraph, has the same solution as $ \min_{x\ge0,\,x\ge \Sigma_2-\Sigma_1}||x-(\Sigma_2-\Sigma_1)||^2$. That is, the problem solved by Eq. (0) has the same form as Eq. (1a).
Let us define $U(A,B)=\lbrace X:X\ge A, X\ge B\rbrace$. By defining $x'=x-Q$, problem (1a) is equivalent to $\min_{x'\in U(0,-Q)}||x'||^2$, whose solution is $x'=Q_+-Q=Q_-=(-Q)_+$. Therefore, the solution to $\min_{x\in U(0,P)}||x||^2$ is $P_+$. We can deduce from this that the solution to $$ \min_{x\in U(-S,S)}||x||^2~~\tag{3}$$ is $x=|S|$: taking $P=2S$ and replacing $x$ by $x+S$ shows that $\min_{x\in U(-S,S)}||x+S||^2$ is attained at $x=(2S)_+-S=|S|$, and taking $P=-2S$ and replacing $x$ by $x-S$ shows that $\min_{x\in U(-S,S)}||x-S||^2$ is also attained at $x=(-2S)_++S=|S|$. Since $||x+S||^2+||x-S||^2=2||x||^2+2||S||^2$, the same point $x=|S|$ minimizes $||x||^2$ over $U(-S,S)$.
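A quick numerical spot-check of (3), under an assumption of mine: feasible points are sampled only from the special family $x=|S|+E$ with $E$ positive semidefinite, every member of which lies in $U(-S,S)$ and, consistently with the claim, has norm at least that of $|S|$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = A + A.T                                   # random symmetric S

lam, U = np.linalg.eigh(S)
absS = (U * np.abs(lam)) @ U.T                # |S| = S_+ + S_-

# |S| is feasible for (3): |S| - S = 2 S_- and |S| + S = 2 S_+
assert np.all(np.linalg.eigvalsh(absS - S) >= -1e-9)
assert np.all(np.linalg.eigvalsh(absS + S) >= -1e-9)

# x = |S| + E with E PSD stays in U(-S, S) and has a larger norm
for _ in range(100):
    B = rng.standard_normal((3, 3))
    E = B @ B.T                               # random PSD perturbation
    x = absS + E
    assert np.linalg.norm(x) >= np.linalg.norm(absS) - 1e-12
```

This only samples a special family of feasible points, so it illustrates rather than proves the minimality of $|S|$.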
Finally, we can re-state the original problem as $$ \min_{\substack{x\ge0,y\ge0\\x+\Sigma_1=y+\Sigma_2}}||x||^2+||y||^2~~.\tag{OP}$$ By the parallelogram law, $2||x||^2+2||y||^2=||x+y||^2+||x-y||^2$, and the constraint fixes $x-y=\Sigma_2-\Sigma_1$, so only $||x+y||^2$ remains to be minimized. Introducing the variable $z=x+y$, the constraints $x=(z+\Sigma_2-\Sigma_1)/2\ge0$ and $y=(z-\Sigma_2+\Sigma_1)/2\ge0$ become $z\ge\Sigma_1-\Sigma_2$ and $z\ge\Sigma_2-\Sigma_1$, so the OP can be re-written as $$ \min_{z\in U(\Sigma_2-\Sigma_1,\Sigma_1-\Sigma_2)}||z||^2~~.\tag{OP'}$$ Therefore, the solution is $z=|\Sigma_2-\Sigma_1|$, which gives $x=(\Sigma_2-\Sigma_1)_+$ and $y=(\Sigma_1-\Sigma_2)_+$. This is the same solution as the one posted before.
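Putting the closed form together, a minimal sketch (the function name `solve_min_pair` is my own):

```python
import numpy as np

def solve_min_pair(Sigma1, Sigma2):
    """Minimum-norm PSD pair with Sigma1 + x = Sigma2 + y:
    x = (Sigma2 - Sigma1)_+ and y = (Sigma1 - Sigma2)_+."""
    D = Sigma2 - Sigma1
    lam, U = np.linalg.eigh(D)
    x = (U * np.maximum(lam, 0.0)) @ U.T   # (Sigma2 - Sigma1)_+
    y = x - D                              # (Sigma1 - Sigma2)_+ = x - D
    return x, y

# Example with symmetric positive definite inputs
Sigma1 = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma2 = np.array([[1.0, -0.3], [-0.3, 3.0]])
x, y = solve_min_pair(Sigma1, Sigma2)

assert np.allclose(Sigma1 + x, Sigma2 + y)        # constraint holds
assert np.all(np.linalg.eigvalsh(x) >= -1e-9)     # x is PSD
assert np.all(np.linalg.eigvalsh(y) >= -1e-9)     # y is PSD
```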