I am asking this in the context of MultiTask Learning applied to Gaussian Processes.
Suppose I have two isotropic RBF kernel functions $K^a(x,x')$ and $K^b(x,x')$ and $N$ distinct points $x_1, x_2, \dots, x_N$. Consider the following $2N \times 2N$ block matrix, where $K^a_{i,j} = K^a(x_i, x_j)$ and $K^b_{i,j} = K^b(x_i, x_j)$ for $i,j = 1,2,\dots,N$:
$$M = \begin{bmatrix} K^a & C^T \\ C & K^b \end{bmatrix}$$
This matrix has ones on its diagonal. What are the least restrictive constraints on $C$ such that $M$ is positive semidefinite?
For reference, the RBF kernel is given by: $$K(x,x') = \exp\Big( -\frac{\|x - x'\|^2}{2\sigma^2}\Big)$$ (Thus the only difference between $K^a$ and $K^b$ is their $\sigma$ parameter.)
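To make the question concrete, here is a small numerical sketch of the setup. It builds the two Gram matrices and checks positive semidefiniteness of $M$ for two candidate choices of $C$: the trivial $C = 0$, and $C = \rho\, (K^b)^{1/2} (K^a)^{1/2}$ with $|\rho| \le 1$, which is one *sufficient* family (by the Schur complement, $K^b - C (K^a)^{-1} C^T = (1-\rho^2) K^b \succeq 0$), not a claim about the most generous constraint I am asking for. The helper names and the choice $\rho = 0.9$ are mine, just for illustration:

```python
import numpy as np

def rbf(X, sigma):
    """Isotropic RBF Gram matrix: K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def psd_sqrt(K):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(K)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def is_psd(M, tol=1e-8):
    """Numerical PSD check: smallest eigenvalue of a symmetric matrix."""
    return np.linalg.eigvalsh(M).min() >= -tol

rng = np.random.default_rng(0)
N = 50
X = rng.standard_normal((N, 2))
Ka = rbf(X, sigma=1.0)   # same points, different length-scales
Kb = rbf(X, sigma=2.0)

# Trivial case: C = 0 gives a block-diagonal M, which is PSD since Ka, Kb are.
C0 = np.zeros((N, N))
M0 = np.block([[Ka, C0.T], [C0, Kb]])
print(is_psd(M0))

# One sufficient (not necessary) family: C = rho * Kb^{1/2} Ka^{1/2}, |rho| <= 1.
rho = 0.9
C = rho * psd_sqrt(Kb) @ psd_sqrt(Ka)
M = np.block([[Ka, C.T], [C, Kb]])
print(is_psd(M))
```

Both checks come out PSD, but I am looking for the full characterization of admissible $C$, not just examples like these.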