Updated: the goal is to solve $(A^\top A+B^\top B + D)x=c$. It may not be necessary to compute $(A^\top A+B^\top B + D)^{-1}$ explicitly.
Denote $e=(1,1,\ldots,1)^\top\in\mathbb{R}^n$ and $$A=\begin{bmatrix} e^\top & & & \\ & e^\top & & \\ & & \ddots & \\ & & & e^\top \end{bmatrix}$$ and $$B=\begin{bmatrix}\mathrm{diag}(e) & \mathrm{diag}(e) & \cdots & \mathrm{diag}(e)\end{bmatrix}$$ where the block $e^\top$ appears $n$ times in $A$ and the block $\mathrm{diag}(e)=I_n$ appears $n$ times in $B$ (i.e. $A,B\in\mathbb{R}^{n\times n^2}$).
For example, if $n=3$ we have:
$$A=\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$
$$B=\begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$$
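Both matrices are Kronecker products ($A=I_n\otimes e^\top$, $B=e^\top\otimes I_n$), which makes them easy to build numerically. A minimal numpy sketch (the construction below is just one way to realise the definition):

```python
import numpy as np

n = 3
e = np.ones(n)

# A places e^T on the block diagonal: A = I_n ⊗ e^T
A = np.kron(np.eye(n), e.reshape(1, -1))
# B repeats the n x n identity n times side by side: B = e^T ⊗ I_n
B = np.kron(e.reshape(1, -1), np.eye(n))
```

For $n=3$ this reproduces exactly the two matrices displayed above.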
Let $D$ be an $n^2\times n^2$ diagonal matrix with positive diagonal entries.
I'm looking for an efficient way to solve the linear equation $$(A^\top A+B^\top B + D)x=c$$ where $x\in\mathbb{R}^{n^2}$ is the variable.
Thank you in advance for any suggestions.
Let $C=A^TA+B^TB+D$. You can write it as $$ C=D+EE^T, \quad E=[A^T,B^T]. $$ By the Woodbury formula, the inverse can be written as $$ \begin{split} C^{-1}&=D^{-1}-D^{-1}E(I+E^TD^{-1}E)^{-1}E^TD^{-1}\\ &=D^{-1}-D^{-1}[A^T,B^T]\left\{I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]\right\}^{-1}\begin{bmatrix}A\\B\end{bmatrix}D^{-1}. \end{split}\tag{$*$} $$

Assume that $D=\mathrm{diag}(D_1,\ldots,D_n)$, where each diagonal block $D_i$ is $n\times n$ (and itself diagonal). Since $A=\mathrm{diag}(e^T,\ldots,e^T)$ and $B=[I_n,\ldots,I_n]$, the "small" $2n\times 2n$ matrix inside the curly brackets can be written as $$ I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]= \left[\begin{array}{ccc|c} 1+e^TD_1^{-1}e & & & e^TD_1^{-1} \\ & \ddots & & \vdots \\ & & 1+e^TD_n^{-1}e & e^TD_n^{-1} \\ \hline D_1^{-1}e & \cdots & D_n^{-1}e & I+D_1^{-1}+\cdots+D_n^{-1} \end{array}\right]. $$ It can hence be written in the form $$\tag{✿} I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]=\begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix}, $$ where $\Phi=\mathrm{diag}(1+e^TD_i^{-1}e)_{i=1}^n$ and $\Sigma=I+D_1^{-1}+\cdots+D_n^{-1}$ are $n\times n$ diagonal matrices, and the $i$th column of the $n\times n$ matrix $\Theta$ contains the diagonal entries of $D_i^{-1}$.

Now you can use block inversion formulas: both $\Phi$ and the Schur complement $\Sigma-\Theta\Phi^{-1}\Theta^T$ are invertible (the Schur complement is normally a dense matrix). Plugging the computed inverse back into ($*$) gives $C^{-1}$.
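A small numpy sketch that assembles (✿) blockwise from $\Phi$, $\Theta$, $\Sigma$ and checks ($*$) against a dense inverse (variable names such as `Et` are mine; `Et` holds $[A;B]$, i.e. $E^T$ in the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
e = np.ones(n)
A = np.kron(np.eye(n), e.reshape(1, -1))          # n x n^2
B = np.kron(e.reshape(1, -1), np.eye(n))          # n x n^2
d = rng.uniform(1.0, 2.0, n * n)                  # positive diagonal of D
Dinv = np.diag(1.0 / d)
Dinv_blocks = [np.diag(1.0 / d[i*n:(i+1)*n]) for i in range(n)]

# Assemble the "small" 2n x 2n matrix (✿) blockwise.
Phi   = np.diag([1.0 + e @ Di @ e for Di in Dinv_blocks])   # diagonal
Theta = np.column_stack([Di @ e for Di in Dinv_blocks])     # i-th column = diag(D_i^{-1})
Sigma = np.eye(n) + sum(Dinv_blocks)                        # diagonal
S = np.block([[Phi, Theta.T], [Theta, Sigma]])

# Sanity check against the direct definition I + [A;B] D^{-1} [A^T,B^T].
Et = np.vstack([A, B])                            # [A; B] = E^T in the text
assert np.allclose(S, np.eye(2 * n) + Et @ Dinv @ Et.T)

# Woodbury (*): C^{-1} = D^{-1} - D^{-1} E S^{-1} E^T D^{-1}
C = A.T @ A + B.T @ B + np.diag(d)
Cinv = Dinv - Dinv @ Et.T @ np.linalg.solve(S, Et @ Dinv)
assert np.allclose(Cinv, np.linalg.inv(C))
```

The point is that only a $2n\times 2n$ system is solved densely, while everything involving $D$ stays diagonal.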
Note that you actually don't need the inverse explicitly. The matrix (✿) is symmetric positive definite, so it has a Cholesky factorisation $LL^T$; all you need is to solve the system $$ LX=\begin{bmatrix}A\\B\end{bmatrix} $$ with multiple right-hand sides. Then $$ C^{-1}= D^{-1}-D^{-1}[A^T,B^T](LL^T)^{-1}\begin{bmatrix}A\\B\end{bmatrix}D^{-1} =D^{-1}-D^{-1}X^TXD^{-1}. $$
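Numerically, this step looks as follows (a numpy sketch; `np.linalg.solve` is used for the triangular solve for brevity, a dedicated triangular solver such as `scipy.linalg.solve_triangular` would be cheaper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
e = np.ones(n)
A = np.kron(np.eye(n), e.reshape(1, -1))
B = np.kron(e.reshape(1, -1), np.eye(n))
d = rng.uniform(0.5, 1.5, n * n)                  # positive diagonal of D
Dinv = np.diag(1.0 / d)

Et = np.vstack([A, B])                            # [A; B]
S = np.eye(2 * n) + Et @ Dinv @ Et.T              # the matrix (✿), SPD
L = np.linalg.cholesky(S)                         # S = L L^T, L lower triangular

# Solve L X = [A; B] with multiple right-hand sides.
X = np.linalg.solve(L, Et)

# C^{-1} = D^{-1} - D^{-1} X^T X D^{-1}
Cinv = Dinv - Dinv @ X.T @ X @ Dinv
C = A.T @ A + B.T @ B + np.diag(d)
assert np.allclose(Cinv, np.linalg.inv(C))
```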
To solve a system with $C$, you don't need to compute $X$, but you still need to factorise the matrix (✿). You can use ($*$) to solve a system $Cx=b$ as follows:

1. Compute $y=D^{-1}b$ (a diagonal scaling).
2. Compute $z=E^Ty=\begin{bmatrix}Ay\\By\end{bmatrix}$.
3. Solve $(LL^T)w=z$ by two triangular solves.
4. Compute $x=y-D^{-1}Ew$.
Multiplying with $E$ and $E^T$ is easy as it involves only some summations and copies.
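Indeed, with $x$ reshaped as an $n\times n$ array, applying $A$ and $B$ is just row and column sums, and applying $E=[A^T,B^T]$ spreads the two halves of a vector over rows and columns. A self-contained numpy sketch of the whole solve (the function name `solve_C` is mine):

```python
import numpy as np

def solve_C(d, b, n):
    """Solve (A^T A + B^T B + D) x = b via (*) without forming C or any inverse.

    d: positive diagonal of D, length n^2, block i stored in d[i*n:(i+1)*n].
    """
    dinv = (1.0 / d).reshape(n, n)        # row i holds the diagonal of D_i^{-1}
    y = (b / d).reshape(n, n)             # y = D^{-1} b as an n x n array

    # z = [A; B] y: A sums each length-n block (rows), B sums with stride n (columns).
    z = np.concatenate([y.sum(axis=1), y.sum(axis=0)])

    # Assemble and factorise the 2n x 2n matrix (✿).
    Phi   = np.diag(1.0 + dinv.sum(axis=1))
    Sigma = np.diag(1.0 + dinv.sum(axis=0))
    Theta = dinv.T                        # column i = diag(D_i^{-1})
    S = np.block([[Phi, Theta.T], [Theta, Sigma]])
    w = np.linalg.solve(S, z)             # in practice: reuse a Cholesky factor of S

    # x = y - D^{-1} E w; (E w) at position (i, j) equals w[i] + w[n + j].
    Ew = w[:n, None] + w[None, n:]
    return (y - dinv * Ew).ravel()

# Check against a dense solve.
rng = np.random.default_rng(2)
n = 4
A = np.kron(np.eye(n), np.ones((1, n)))
B = np.kron(np.ones((1, n)), np.eye(n))
d = rng.uniform(0.5, 2.0, n * n)
b = rng.standard_normal(n * n)
C = A.T @ A + B.T @ B + np.diag(d)
assert np.allclose(solve_C(d, b, n), np.linalg.solve(C, b))
```

Apart from the one dense $2n\times 2n$ factorisation, every step is $O(n^2)$, i.e. linear in the number of unknowns.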
NOTE: I'm afraid there might be no nice closed-form solution except when $D$ is a multiple of the identity (or at least each $D_i$ is a multiple of the identity). Otherwise, $\Theta$ is a general $n\times n$ matrix (with positive entries), so the inverse of the Schur complement $\Sigma-\Theta\Phi^{-1}\Theta^T$ cannot be written in a simple form. However, in the mentioned special cases the matrix $\Theta\Phi^{-1}\Theta^T$ has rank one, so one can use the Sherman–Morrison formula to invert the Schur complement.
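For the special case $D=\delta I$ this is easy to verify: then $\Theta=\tfrac1\delta ee^T$ (rank one) and $\Phi=\Sigma=(1+n/\delta)I$, so the Schur complement is $aI-cee^T$ with scalars $a,c$ as below (shorthands I introduce), and Sherman–Morrison gives its inverse in closed form. A numerical check:

```python
import numpy as np

n, delta = 5, 0.7
e = np.ones((n, 1))

Theta = (1.0 / delta) * (e @ e.T)        # rank one when D = delta * I
Phi   = (1.0 + n / delta) * np.eye(n)
Sigma = (1.0 + n / delta) * np.eye(n)

Pi = Sigma - Theta @ np.linalg.inv(Phi) @ Theta.T     # Schur complement = a I - c e e^T
a = 1.0 + n / delta
c = (n / delta**2) / (1.0 + n / delta)
assert np.allclose(Pi, a * np.eye(n) - c * (e @ e.T))

# Sherman-Morrison: (a I - c e e^T)^{-1} = I/a + c e e^T / (a (a - c n))
Pi_inv = np.eye(n) / a + c * (e @ e.T) / (a * (a - c * n))
assert np.allclose(Pi_inv, np.linalg.inv(Pi))
```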
NOTE 2: The matrix (✿) has a special form that can be exploited in the Cholesky factorisation. The matrix can be factorised as $$ \begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix} = \begin{bmatrix}I&0\\\Theta\Phi^{-1}&I\end{bmatrix} \begin{bmatrix}\Phi&0\\0&\Pi\end{bmatrix} \begin{bmatrix}I&\Phi^{-1}\Theta^T\\0&I\end{bmatrix}, $$ where $\Pi=\Sigma-\Theta\Phi^{-1}\Theta^T$ is the already mentioned Schur complement. So if $\tilde{L}\tilde{L}^T=\Pi$ is the Cholesky factorisation of $\Pi$, then $$ \begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix} = \underbrace{\begin{bmatrix}\Phi^{1/2}&0\\\Theta\Phi^{-1/2}&\tilde{L}\end{bmatrix}}_{L} \underbrace{\begin{bmatrix}\Phi^{1/2}&\Phi^{-1/2}\Theta^T\\0&\tilde{L}^T\end{bmatrix}}_{L^T} $$ is the Cholesky factorisation of (✿). Of course, some rearrangements can be made to avoid computing the square roots.
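A quick numerical check of this structured factorisation (numpy again; `dinv` is an $n\times n$ array whose $i$th row holds the diagonal of $D_i^{-1}$, a representation I chose for convenience):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
dinv = rng.uniform(0.5, 2.0, (n, n))      # row i = diag(D_i^{-1}), D positive diagonal

Phi   = np.diag(1.0 + dinv.sum(axis=1))
Sigma = np.diag(1.0 + dinv.sum(axis=0))
Theta = dinv.T
S = np.block([[Phi, Theta.T], [Theta, Sigma]])   # the matrix (✿)

# Structured Cholesky: only the n x n Schur complement needs a dense factorisation.
phi          = np.diag(Phi)
Phi_half     = np.diag(np.sqrt(phi))              # Phi^{1/2} (Phi is diagonal)
Phi_half_inv = np.diag(1.0 / np.sqrt(phi))
Pi   = Sigma - Theta @ np.diag(1.0 / phi) @ Theta.T
Ltil = np.linalg.cholesky(Pi)                     # Pi = Ltil Ltil^T

L = np.block([[Phi_half,             np.zeros((n, n))],
              [Theta @ Phi_half_inv, Ltil]])
assert np.allclose(L @ L.T, S)
```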