Problem
Consider $N$ random variables $X_1,\, \dots,\, X_N \in \mathbb{R}^d$. They are assumed zero-mean, i.e., $\text{E}\left[X_i\right] = 0$, and their covariances are denoted $\text{E}\left[X_i X_i^\intercal\right] = P_i$. However, the cross-covariances $P_{i,j} = \text{E}\left[X_i X_j^\intercal\right]$ are unknown.
Can we upper-bound the covariance of the sum $S = \sum_{i=1}^N X_i$? That is, can we find a matrix $B$ such that $$\text{E}\left[S S^\intercal\right] \preceq B,$$ where the inequality is in the Loewner order on positive semidefinite (PSD) matrices?
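As a sanity check on the setup, here is a minimal numpy sketch (variable names are mine, not from the problem statement): the covariance of $S$ is the sum of all blocks $P_{i,j}$ of the joint covariance of the stacked vector, which equals $M \Sigma M^\intercal$ for $M = [I_d, \dots, I_d]$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 3, 2

# Any PSD (N*d) x (N*d) matrix is a valid joint covariance for the
# stacked vector (X_1, ..., X_N); its d x d blocks are the P_{i,j}.
A = rng.standard_normal((N * d, N * d))
Sigma = A @ A.T  # PSD by construction

# Cov(S) for S = sum_i X_i is the sum of all blocks P_{i,j}.
C = sum(
    Sigma[i * d:(i + 1) * d, j * d:(j + 1) * d]
    for i in range(N) for j in range(N)
)

# Equivalently, C = M Sigma M^T with M = [I_d, ..., I_d].
M = np.hstack([np.eye(d)] * N)
assert np.allclose(C, M @ Sigma @ M.T)
```

The difficulty in the problem is precisely that only the diagonal blocks of $\Sigma$ are known.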
Ideas
Let $C$ denote the covariance of $S$, and introduce the correlation matrices $R_{i,j} = P_i^{-1/2} P_{i,j} P_j^{-1/2}$: $$C = \sum_{1 \le i,j \le N} P_{i,j} = \sum_{1 \le i,j \le N} P^{1/2}_{i} R_{i,j} P^{1/2}_{j}.$$
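This decomposition can be verified numerically; the sketch below (helper `psd_sqrt` is my own, assuming the $P_i$ are invertible) recovers $C$ from the whitened correlation blocks $R_{i,j}$.

```python
import numpy as np

def psd_sqrt(P):
    # Symmetric PSD square root via eigendecomposition.
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(1)
N, d = 3, 2
A = rng.standard_normal((N * d, N * d))
Sigma = A @ A.T  # joint covariance; full rank almost surely

blk = lambda i, j: Sigma[i * d:(i + 1) * d, j * d:(j + 1) * d]
Proot = [psd_sqrt(blk(i, i)) for i in range(N)]        # P_i^{1/2}
Prootinv = [np.linalg.inv(Pr) for Pr in Proot]         # P_i^{-1/2}

# C = sum_{i,j} P_i^{1/2} R_{i,j} P_j^{1/2} with R_{i,j} = P_i^{-1/2} P_{i,j} P_j^{-1/2}
C = np.zeros((d, d))
for i in range(N):
    for j in range(N):
        R = Prootinv[i] @ blk(i, j) @ Prootinv[j]
        C += Proot[i] @ R @ Proot[j]

C_direct = sum(blk(i, j) for i in range(N) for j in range(N))
assert np.allclose(C, C_direct)
```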
One dimensional
If $d = 1$, $C$ reduces to: $$C = \sum_{1 \le i,j \le N} \sigma_i \sigma_j \rho_{i,j}.$$ It is clearly maximal when all the correlations $\rho_{i,j} = 1$. In this case, the optimal bound is $B = \left(\sum_i \sigma_i\right)^2$.
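A quick check of the scalar case with $N = 2$ (values of $\sigma_1, \sigma_2$ are arbitrary): $\mathrm{Var}(S) = \sigma_1^2 + \sigma_2^2 + 2\rho\,\sigma_1\sigma_2$ is bounded by $(\sigma_1 + \sigma_2)^2$ for every admissible $\rho$, with equality at $\rho = 1$.

```python
import numpy as np

s1, s2 = 1.5, 0.7                 # arbitrary standard deviations
B = (s1 + s2) ** 2                # claimed optimal bound

# Var(S) = s1^2 + s2^2 + 2*rho*s1*s2, swept over the admissible range of rho.
for rho in np.linspace(-1.0, 1.0, 201):
    var_S = s1 ** 2 + s2 ** 2 + 2 * rho * s1 * s2
    assert var_S <= B + 1e-12

# Equality is attained at rho = 1.
assert np.isclose(s1 ** 2 + s2 ** 2 + 2 * s1 * s2, B)
```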
Can we extend this kind of reasoning to higher dimensions?
Using Covariance Intersection
I know about Covariance Intersection. It provides a bound on the covariance of the augmented vector $X_{aug} = (X_1^\intercal,\, \dots,\, X_N^\intercal)^\intercal$. For any $\theta \in \mathbb{R}^N$ satisfying $\theta_i > 0$ and $\sum_i \theta_i = 1$:
$$\begin{pmatrix}P_1 & P_{1,2} & \cdots & P_{1,N} \\ P_{2,1} & P_{2} & \cdots & P_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ P_{N,1} & P_{N,2} & \cdots & P_N\end{pmatrix} \preceq \begin{pmatrix}\frac{1}{\theta_1}P_1 & 0 & \cdots & 0 \\ 0 & \frac{1}{\theta_2} P_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & \frac{1}{\theta_N}P_N\end{pmatrix}$$
Summing all the blocks on each side gives the bound $B = \sum_i \frac{1}{\theta_i} P_i$; with equal weights $\theta_i = 1/N$, this is $B = N \sum_i P_i$. However, this bound is loose: in the one-dimensional case it yields $N \sum_i \sigma_i^2$, which by Cauchy–Schwarz dominates the optimal $\left(\sum_i \sigma_i\right)^2$, with equality only when all the $\sigma_i$ are equal. Can we obtain a tighter bound?
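The block inequality above can be checked numerically; the sketch below (a demonstration under my own random construction, not a proof) verifies that the scaled block-diagonal matrix dominates an arbitrary joint covariance in the Loewner order, and hence that the induced bound dominates $\text{Cov}(S)$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 3, 2
A = rng.standard_normal((N * d, N * d))
Sigma = A @ A.T  # joint covariance with arbitrary cross-covariances

theta = np.full(N, 1.0 / N)  # equal weights; any theta_i > 0 summing to 1 works
D = np.zeros_like(Sigma)
for i in range(N):
    D[i * d:(i + 1) * d, i * d:(i + 1) * d] = (
        Sigma[i * d:(i + 1) * d, i * d:(i + 1) * d] / theta[i]
    )

# Covariance Intersection: diag(P_i / theta_i) - Sigma should be PSD.
assert np.linalg.eigvalsh(D - Sigma).min() >= -1e-8

# Consequently Cov(S) <= sum_i P_i / theta_i (= N * sum_i P_i here).
M = np.hstack([np.eye(d)] * N)
C = M @ Sigma @ M.T          # Cov(S)
B = M @ D @ M.T              # sum_i P_i / theta_i
assert np.linalg.eigvalsh(B - C).min() >= -1e-8
```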