Given vectors $a_1, b_1, a_2, b_2 \in \mathbb{R}^{n\times 1}$, I am interested in finding a positive semidefinite matrix $M \in \mathbb{R}^{n\times n}$, $M \succeq 0$, such that $M a_1 = b_1$ and $M a_2 = b_2$. Here $n \gg 2$, say $n = 1000$. The vectors $a_1, a_2$ are nonzero and not parallel.
In equations, I want to solve the following semidefinite program:
\begin{equation*} \begin{aligned} & \underset{M}{\text{minimize}} & & 0 \\ & \text{subject to} & & M\cdot a_1 = b_1 \\ && & M\cdot a_2 = b_2 \\ &&& M \succeq 0. \end{aligned} \end{equation*}
Depending on the values of $a_1, b_1, a_2, b_2$, a numerical solver will sometimes report that this program is infeasible (no such $M$ exists). I have experimented with multiple solvers with identical results. I can further impose that $a_1^T b_1>0$ and $a_2^T b_2>0$, but the result is the same.
An observation: If $a_1, a_2$ are orthogonal, it appears the problem is always feasible.
My intuition was that the number of free variables in $M$ is $n(n+1)/2 - n$: a symmetric matrix has $n(n+1)/2$ free entries, and I counted positive semidefiniteness (nonnegative principal minors) as $n$ additional constraints. It appears this intuition is not correct.
What is the requirement on $a_1, b_1, a_2, b_2$ for such an $M$ to exist?
One way to look at this is to derive the dual and determine the conditions under which the dual is unbounded, because dual unboundedness implies that the original problem is infeasible.
The Lagrangian is $$L(M,Z,\lambda_1,\lambda_2) = \lambda_1^T(M a_1-b_1)+\lambda_2^T(M a_2-b_2) - \langle M, Z \rangle,$$ where the Lagrange multiplier vectors $\lambda_1,\lambda_2\in\mathbb{R}^{n}$ are unconstrained and $Z$ is positive semidefinite. Setting the gradient with respect to $M$ to zero gives the dual feasibility condition $$\mathop{\textrm{sym}}(a_1 \lambda_1^T + a_2 \lambda_2^T) - Z = 0,$$ where $\mathop{\textrm{sym}}(Y)=(Y+Y^T)/2$, and the dual problem is $$\begin{array}{ll} \text{maximize} & - b_1^T \lambda_1 - b_2^T \lambda_2 \\ \text{subject to} & \mathop{\textrm{sym}}(a_1 \lambda_1^T + a_2 \lambda_2^T) - Z = 0 \\ & Z \succeq 0. \end{array}$$ Eliminating $Z$ yields $$\begin{array}{ll} \text{maximize} & - b_1^T \lambda_1 - b_2^T \lambda_2 \\ \text{subject to} & \mathop{\textrm{sym}}(a_1 \lambda_1^T + a_2 \lambda_2^T) \succeq 0. \end{array}$$ We see that this is unbounded if there exist $\lambda_1, \lambda_2$ such that $$\mathop{\textrm{sym}}(a_1 \lambda_1^T + a_2 \lambda_2^T) \succeq 0, \quad b_1^T \lambda_1 + b_2^T \lambda_2 < 0.$$ If you can find even one pair satisfying these conditions, then scaling $(\lambda_1,\lambda_2)$ by any positive value drives the dual objective to infinity. And if you can do that, the primal must be infeasible. Since the scaling is irrelevant, we can fix it with $b_1^T \lambda_1 + b_2^T \lambda_2 = -1$.
The formal name for this exercise is the derivation of a theorem of alternatives. That is, exactly one of the following alternatives is true [EDIT: see the comments, neither may be true]: either $$\exists M \succeq 0 \quad \text{s.t.} \quad Ma_1 = b_1, ~ Ma_2 = b_2$$ or $$\exists \lambda_1, \lambda_2 \quad\text{s.t.} \quad \mathop{\textrm{sym}}(a_1 \lambda_1^T + a_2 \lambda_2^T) \succeq 0, \quad b_1^T \lambda_1 + b_2^T \lambda_2 = -1$$