Solutions to an equation in $6$ variables


Preliminaries: Let $M$ be any $4\times 4$ unitary matrix. Now, define the matrices $A$ and $B$ by

$$A=M^\dagger\left[ \begin{pmatrix} \cos(\theta) & -e^{i\lambda}\sin(\theta)\\ e^{i\phi}\sin(\theta) & e^{i(\lambda+\phi)}\cos(\theta) \end{pmatrix}\otimes \begin{pmatrix} 1 & 0\\ 0 &1 \end{pmatrix}\right]M$$

$$B=M^\dagger\left[\begin{pmatrix} 1 & 0\\ 0 &1 \end{pmatrix}\otimes \begin{pmatrix} \cos(\theta) & -e^{i\lambda}\sin(\theta)\\ e^{i\phi}\sin(\theta) & e^{i(\lambda+\phi)}\cos(\theta) \end{pmatrix}\right]M$$

We may then define $f:\mathbb{R}^6\to \mathbb{R}$ by

$$f(\theta,\lambda,\phi,\delta,\alpha,\beta)=\prod_{C\in\{A,B\}}\left(\left|\begin{pmatrix} 0\\ 0\\ 1\\ 0 \end{pmatrix}^TC\begin{pmatrix} \cos(\delta)e^{i\alpha}\\ \sin(\delta)e^{i\beta}\\ 0\\ 0 \end{pmatrix}\right|+\left|\begin{pmatrix} 0\\ 0\\ 0\\ 1 \end{pmatrix}^TC\begin{pmatrix} \cos(\delta)e^{i\alpha}\\ \sin(\delta)e^{i\beta}\\ 0\\ 0 \end{pmatrix}\right|\right)$$

Now, what are the zeros of this function? It is easy to show that any element of the set

$$S=\{(k_1\pi ,\lambda, 2\pi k_2-\lambda,\delta,\alpha,\beta):k_1,k_2\in\mathbb{Z}\text{ and }\lambda,\delta,\alpha,\beta\in\mathbb{R}\}$$

is a solution to $f(\theta,\lambda,\phi,\delta,\alpha,\beta)=0$. This is because

$$\begin{pmatrix} \cos(\theta) & -e^{i\lambda}\sin(\theta)\\ e^{i\phi}\sin(\theta) & e^{i(\lambda+\phi)}\cos(\theta) \end{pmatrix}\Bigg|_{\theta=k_1 \pi,\phi=2\pi k_2-\lambda}=(-1)^{k_1}\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}$$

and the matrix $A$ collapses down to

$$A=M^\dagger[(-1)^{k_1}I_4]M=(-1)^{k_1}M^\dagger M=(-1)^{k_1}I_4$$

(where $I_4$ is the $4\times 4$ identity matrix), which in turn means all the inner products are zero.
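This setup is easy to check numerically. The following sketch (using NumPy; the random $M$, the seed, the helper name `u2`, and the sample angles are my own arbitrary choices) builds $A$, $B$, and $f$, and confirms that a point of $S$ gives $f=0$ up to floating-point error:

```python
import numpy as np

def u2(theta, lam, phi):
    # The 2x2 unitary block used in the definitions of A and B
    return np.array([[np.cos(theta), -np.exp(1j*lam)*np.sin(theta)],
                     [np.exp(1j*phi)*np.sin(theta),
                      np.exp(1j*(lam + phi))*np.cos(theta)]])

def f(M, theta, lam, phi, delta, alpha, beta):
    I2 = np.eye(2)
    A = M.conj().T @ np.kron(u2(theta, lam, phi), I2) @ M
    B = M.conj().T @ np.kron(I2, u2(theta, lam, phi)) @ M
    v = np.array([np.cos(delta)*np.exp(1j*alpha),
                  np.sin(delta)*np.exp(1j*beta), 0, 0])
    total = 1.0
    for C in (A, B):
        total *= abs(C[2] @ v) + abs(C[3] @ v)  # rows 3 and 4 of C
    return total

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
M, _ = np.linalg.qr(Z)  # QR of a random complex matrix gives a unitary

# A point of S: theta = k1*pi, phi = 2*pi*k2 - lambda (here k1 = k2 = 1)
lam, delta, alpha, beta = 0.7, 0.3, 1.1, -0.4
val = f(M, np.pi, lam, 2*np.pi - lam, delta, alpha, beta)
```

Here `val` is on the order of machine epsilon, as expected from the collapse of $A$ and $B$ to $\pm I_4$.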

My question: Is there some $4\times 4$ unitary matrix $M$ such that $S$ is all the solutions to the equation $f(\theta,\lambda,\phi,\delta,\alpha,\beta)=0$?

Motivation: If I could answer this question in the negative (show that there is no such matrix $M$), then I could show that at least $3$ qubits are required for an error detection code I am developing for a quantum computer. I won't bore you with all the details (unless someone wants to know more), but my problem has simplified to the linear algebra question above.

Work: So far, I have tried many different approaches to prove the above without much success. For any fixed matrix $M$, I can always find solutions that are not in $S$. However, it does seem that almost all the degrees of freedom are required: for some $M$ the extra solutions appear to lie on a line in $\mathbb{R}^6$ (depending essentially on the difference between $\alpha$ and $\beta$), rather than forming a family of solutions on a plane or hyper-plane in the space. My current work is to try different generators of $U(4)$ and see whether I can say something by decomposing $M$ into constituent parts (here are two different papers I have been using for said generators). Unfortunately, this has not simplified my problem to the point where I can get a satisfactory solution.



Best answer:

I have finally worked out a proof of this problem after all this time. The proof builds off the idea of @atarasenko. Namely, we show that there is a non-trivial matrix $U$ such that $A_{42}A_{31}-A_{41}A_{32}=0$. That is, there is a non-trivial matrix $U$ such that the $2\times 2$ block in the lower-left corner of the resulting $4\times 4$ unitary matrix has determinant $0$ (we will name this block either $S_0$ or $S_1$, depending on the case). I have actually already written this proof for my prospectus, but I thought it would be nice to put it here as well and get some closure on this problem. As will shortly be seen, the proof delves into lots of different cases, and I would still be very happy if anyone can shorten the casework into something more manageable.

Lemma: If $X$ is any matrix in $\mathbb{C}^{2\times 2}$, then there exists a unitary matrix $U$, with the stipulation that $U\neq e^{i\phi}I_2$ for any angle $\phi$, such that

\begin{equation} \det([U,X])=|UX-XU|=0 \end{equation}

Proof: Since every matrix over $\mathbb{C}$ has at least one eigenvector, there exists a unit vector $\vec{v}_1$ such that $X\vec{v}_1=\tau \vec{v}_1$ for some $\tau\in\mathbb{C}$. Let $\vec{v}_2$ be any unit vector orthogonal to $\vec{v}_1$. Now, let $U$ be the unitary matrix defined by

\begin{equation} U\vec{v}_1=\vec{v}_1\text{ and }U\vec{v}_2=-\vec{v}_2 \end{equation}

Note that we are assured that $U$ is not the identity matrix times a phase since its eigenvalues have different phases. But then

\begin{equation} (UX-XU)\vec{v}_1=UX\vec{v}_1-XU\vec{v}_1=\tau U\vec{v}_1 -X\vec{v}_1=\tau\vec{v}_1-\tau\vec{v}_1=\vec{0} \end{equation}

We conclude that there exists a non-trivial $U$ such that $\det([U,X])=0$.
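The construction in the lemma can be sanity-checked numerically. In this sketch the random test matrix $X$ and the seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))

# v1: a unit eigenvector of X; v2: a unit vector orthogonal to v1
_, vecs = np.linalg.eig(X)
v1 = vecs[:, 0] / np.linalg.norm(vecs[:, 0])
v2 = np.array([-np.conj(v1[1]), np.conj(v1[0])])

# U fixes v1 and negates v2: U = v1 v1^dagger - v2 v2^dagger
U = np.outer(v1, v1.conj()) - np.outer(v2, v2.conj())

det_comm = np.linalg.det(U @ X - X @ U)  # ~0 by the lemma
```

Since the eigenvalues of $U$ are $1$ and $-1$, this $U$ is automatically not a phase times the identity.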

Proof: With the lemma established, we now dive into the proof in earnest. Let $M$ be an arbitrary $4\times 4$ unitary matrix

\begin{equation} M=\left( \begin{array}{cccc} a_0 & b_0 & c_0 & d_0\\ a_1 & b_1 & c_1 & d_1\\ a_2 & b_2 & c_2 & d_2\\ a_3 & b_3 & c_3 & d_3\\ \end{array} \right) \end{equation}

There are $7$ possible cases for $M$ that we will investigate:

$1)$ $\left| \begin{array}{cc} a_0 & b_0 \\ a_1 & b_1 \\ \end{array} \right|\neq 0$

$2)$ $\left| \begin{array}{cc} a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right|\neq 0$

$3)$ $\left| \begin{array}{cc} a_1 & b_1 \\ a_3 & b_3 \\ \end{array} \right|\neq 0$

$4)$ $\left| \begin{array}{cc} a_0 & b_0 \\ a_2 & b_2 \\ \end{array} \right|\neq 0$

$5)$ $M=\left( \begin{array}{cccc} e^{i\delta_1}\cos(\theta_1) & -e^{i(\delta_1+\phi_1)}\sin(\theta_1) & 0 & 0\\ 0 & 0 & e^{i\delta_2}\cos(\theta_2) & -e^{i(\delta_2+\phi_2)}\sin(\theta_2)\\ 0 & 0 & e^{i(\delta_2+\lambda_2)}\sin(\theta_2) & e^{i(\delta_2+\lambda_2+\phi_2)}\cos(\theta_2)\\ e^{i(\delta_1+\lambda_1)}\sin(\theta_1) & e^{i(\delta_1+\lambda_1+\phi_1)}\cos(\theta_1) & 0 & 0\\ \end{array} \right)$

for some $(\delta_1,\theta_1,\lambda_1,\phi_1,\delta_2,\theta_2,\lambda_2,\phi_2)\in\mathbb{R}^8$

$6)$ $M=\left( \begin{array}{cccc} 0&0&e^{i\delta_2}\cos(\theta_2) & -e^{i(\delta_2+\phi_2)}\sin(\theta_2)\\ e^{i\delta_1}\cos(\theta_1) & -e^{i(\delta_1+\phi_1)}\sin(\theta_1)&0&0\\ e^{i(\delta_1+\lambda_1)}\sin(\theta_1) & e^{i(\delta_1+\lambda_1+\phi_1)}\cos(\theta_1)&0&0\\ 0&0&e^{i(\delta_2+\lambda_2)}\sin(\theta_2) & e^{i(\delta_2+\lambda_2+\phi_2)}\cos(\theta_2)\\ \end{array} \right)$

for some $(\delta_1,\theta_1,\lambda_1,\phi_1,\delta_2,\theta_2,\lambda_2,\phi_2)\in\mathbb{R}^8$

$7)$ $M$ does not fall into cases $1-6$

We will start with case $7)$ as this case is actually impossible:

Case 7: In order to show that this case is impossible, we will assume that none of cases $1-4$ holds and show that this forces either case $5$ or case $6$. To start, note that every $2\times 2$ matrix with determinant $0$ has the form

\begin{equation} \left( \begin{array}{cc} a & b \\ xa & xb \\ \end{array} \right)\text{ or }\left( \begin{array}{cc} 0 & 0 \\ a & b \\ \end{array} \right) \end{equation}

for some $a,b,x\in\mathbb{C}$. Since none of cases $1-4$ holds, each of the four $2\times 2$ blocks appearing in those cases has determinant $0$ and therefore takes one of the two forms above. This leaves $2^4=16$ subcases, one for each combination of forms. For example, the first subcase is

Case 7.1: Assume that the matrices are of the form

\begin{equation} \left( \begin{array}{cc} a_0 & b_0 \\ a_1 & b_1 \\ \end{array} \right)=\left( \begin{array}{cc} a & b \\ xa & xb \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cc} c & d \\ yc & yd \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_1 & b_1 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cc} e & f \\ ze & zf \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_0 & b_0 \\ a_2 & b_2 \\ \end{array} \right)=\left( \begin{array}{cc} g & h \\ wg & wh \\ \end{array} \right) \end{equation}

for some $a,b,c,d,e,f,g,h,x,y,z,w\in\mathbb{C}$. Right off the bat, we get the following equalities: $a=g$, $b=h$, $c=wg$, $d=wh$, $e=xa$, and $f=xb$. With these equations, we can rewrite the first two columns of $M$ as

\begin{equation} \left( \begin{array}{cccc} a_0 & b_0 \\ a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cccc} a & b \\ xa & xb \\ wa & wb \\ zxa & zxb \\ \end{array} \right) \end{equation}

But this is a contradiction as these columns are not linearly independent. Thus, this subcase is impossible.

Case 7.2: We will show the work in this case as it diverges enough from the work in case $7.1$ to be worth writing out in full. Assume that the matrices are of the form

\begin{equation} \left( \begin{array}{cc} a_0 & b_0 \\ a_1 & b_1 \\ \end{array} \right)=\left( \begin{array}{cc} 0 & 0 \\ a & b \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cc} c & d \\ xc & xd \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_1 & b_1 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cc} e & f \\ ye & yf \\ \end{array} \right) \end{equation}

\begin{equation} \left( \begin{array}{cc} a_0 & b_0 \\ a_2 & b_2 \\ \end{array} \right)=\left( \begin{array}{cc} 0 & 0 \\ g & h \\ \end{array} \right) \end{equation}

for some $a,b,c,d,e,f,g,h,x,y\in\mathbb{C}$. Again, we get a set of easy equations: $c=g$, $d=h$, $a=e$, $b=f$, $xc=ye$, and $xd=yf$. But then we can rewrite the first two columns of $M$ as

\begin{equation} \left( \begin{array}{cccc} a_0 & b_0 \\ a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cccc} 0 & 0 \\ e & f \\ c & d \\ xc & xd \\ \end{array} \right) \end{equation}

Now, if $y\neq 0$ then we can further manipulate this as

\begin{equation} =\left( \begin{array}{cccc} 0 & 0 \\ \frac{x}{y}c & \frac{x}{y}d \\ c & d \\ xc & xd \\ \end{array} \right) \end{equation}

But again, these columns are not linearly independent. Thus, $y$ must be $0$ (implying that $xc=xd=0$) and therefore the first two columns of $M$ are of the form

\begin{equation} \left( \begin{array}{cccc} a_0 & b_0 \\ a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \\ \end{array} \right)=\left( \begin{array}{cccc} 0 & 0 \\ a & b \\ c & d \\ 0 & 0 \\ \end{array} \right) \end{equation}

Since these columns are orthonormal, the four non-zero elements must form a $2\times 2$ unitary matrix, and we are therefore firmly planted in case $6$.

Cases 7.3-7.16: The remaining cases are all dealt with in a similar manner as the previous two subcases.

Having proven that cases $1-6$ are the only possibilities for any $4\times 4$ unitary matrix $M$, we now turn towards proving our main result: consider the following two matrices

$$M^\dagger (U\otimes I_2)M=\left( \begin{array}{cc} Q_0 & R_0 \\ S_0 & T_0 \\ \end{array} \right)$$

\begin{equation} M^\dagger ( I_2\otimes U)M=\left( \begin{array}{cc} Q_1 & R_1 \\ S_1 & T_1 \\ \end{array} \right) \end{equation}

where $U$ is a $2\times 2$ unitary matrix, $M$ is an arbitrary $4\times 4$ unitary matrix, and $Q_i,R_i,S_i,T_i\in\mathbb{C}^{2\times 2}$. There exists a non-trivial $2\times 2$ unitary matrix $U$ (non-trivial in the sense that it is not the identity times some phase) such that either $\det(S_0)=0$ or $\det(S_1)=0$.

Case 1: Write $M$ in $2\times 2$ blocks as $M=\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)$, so that the second possibility becomes:

$$\left(\begin{array}{cc} Q_1 & R_1 \\ S_1 & T_1 \\ \end{array} \right)=M^\dagger (I_2\otimes U)M=\left( \begin{array}{cc} A^\dagger & C^\dagger \\ B^\dagger & D^\dagger \\ \end{array} \right)\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)$$

Note that by our assumption we know that $A$ is invertible and thus by the Nullity Theorem we also know that $D$ is invertible. But then we can rewrite

\begin{equation}M^\dagger=M^{-1}=\left(\begin{array}{cc} (A-BD^{-1}C)^{-1} & \hat{0} \\ \hat{0} & (D-CA^{-1}B)^{-1} \\ \end{array} \right)\left(\begin{array}{cc} I_2 & -BD^{-1} \\ -CA^{-1} & I_2 \\ \end{array} \right)\end{equation}

(this comes from the block matrix inversion formula). Using this form of $M^\dagger$ in the equation above gives us

$$\left( \begin{array}{cc} Q_1 & R_1 \\ S_1 & T_1 \\ \end{array} \right)=\left(\begin{array}{cc} (A-BD^{-1}C)^{-1} & \hat{0} \\ \hat{0} & (D-CA^{-1}B)^{-1} \\ \end{array} \right)\left(\begin{array}{cc} I_2 & -BD^{-1} \\ -CA^{-1} & I_2 \\ \end{array} \right)\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)$$

Multiplying this out, we get that

\begin{equation} S_1=(D-CA^{-1}B)^{-1}(UCA^{-1}-CA^{-1}U)A \end{equation}

We may now appeal to the lemma proved earlier with $X=CA^{-1}$. Thus, we conclude there is a non-trivial $U$ such that

\begin{equation} \det(S_1)=\det(D-CA^{-1}B)^{-1}\det(UCA^{-1}-CA^{-1}U)\det(A)=0 \end{equation}
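Both the Schur-complement formula for $S_1$ and the conclusion $\det(S_1)=0$ can be verified numerically. In this sketch (the random $M$ and the seed are arbitrary; generically its top-left block $A$ is invertible), $U$ is built exactly as in the lemma with $X=CA^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
M, _ = np.linalg.qr(Z)  # random 4x4 unitary; A is generically invertible

A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

# Lemma with X = C A^{-1}: U fixes an eigenvector of X, negates its complement
X = C @ np.linalg.inv(A)
_, vecs = np.linalg.eig(X)
v1 = vecs[:, 0] / np.linalg.norm(vecs[:, 0])
v2 = np.array([-np.conj(v1[1]), np.conj(v1[0])])
U = np.outer(v1, v1.conj()) - np.outer(v2, v2.conj())

# Lower-left block S_1 of M^dagger (I_2 kron U) M
S1 = (M.conj().T @ np.kron(np.eye(2), U) @ M)[2:, :2]
det_S1 = np.linalg.det(S1)  # ~0 as claimed
```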

Case 2: For this case, define

\begin{equation} P=\left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right) \end{equation}

Then we write

$$\left( \begin{array}{cc} Q_1 & R_1 \\ S_1 & T_1 \\ \end{array} \right)=M^\dagger (I_2\otimes U)M=\left(\begin{array}{cc} A^\dagger & C^\dagger \\ B^\dagger & D^\dagger \\ \end{array} \right)\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)$$

\begin{equation}=\left(\begin{array}{cc} A^\dagger & C^\dagger \\ B^\dagger & D^\dagger \\ \end{array} \right)P^\dagger P\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)P^\dagger P\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right) \end{equation}

But this simplifies as

\begin{equation} PM=\left( \begin{array}{cc} C & D \\ A & B \\ \end{array} \right) \end{equation}

\begin{equation}P\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)P^\dagger =\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right) \end{equation}

Then the equation simplifies to

\begin{equation}=\left(\begin{array}{cc} C^\dagger & A^\dagger \\ D^\dagger & B^\dagger \\ \end{array} \right)\left( \begin{array}{cc} U & \hat{0} \\ \hat{0} & U \\ \end{array} \right)\left( \begin{array}{cc} C & D \\ A & B \\ \end{array} \right) \end{equation}

This is the same situation as case $1$ (as we assumed that $C$ is invertible), except that we now have

\begin{equation} S_1=(B-AC^{-1}D)^{-1}(UAC^{-1}-AC^{-1}U)C \end{equation}

Again appealing to the lemma, we conclude that there is a non-trivial $U$ such that $\det(S_1)=0$.

Case 3: This case can be worked through in the same manner as the last two cases except we start with the first possibility instead of the second possibility. First, define

\begin{equation} P=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{array} \right) \end{equation}

Then

\begin{equation}\left( \begin{array}{cc} Q_0 & R_0 \\ S_0 & T_0 \\ \end{array} \right)=M^\dagger (U\otimes I_2)M=M^\dagger P^\dagger P(U\otimes I_2)P^\dagger PM \end{equation}

If we write

\begin{equation}U=\left( \begin{array}{cc} e^{i \delta } \cos (\theta ) & -e^{i (\delta + \phi) } \sin (\theta ) \\ e^{i (\delta + \lambda) } \sin (\theta ) &e^{i (\delta + \lambda + \phi) } \cos (\theta ) \\ \end{array} \right)\end{equation}

(the most general $2\times 2$ unitary matrix), we see that

$$P^\dagger (U\otimes I_2)P=\left( \begin{array}{cccc} e^{i (\delta + \lambda + \phi)}\cos (\theta ) & e^{i (\delta + \lambda) } \sin (\theta ) & 0 & 0 \\ -e^{i (\delta + \phi) } \sin (\theta ) & e^{i \delta } \cos (\theta ) & 0 & 0 \\ 0 & 0 &e^{i (\delta + \lambda + \phi)} \cos (\theta ) & e^{i (\delta +\lambda) } \sin (\theta ) \\ 0 & 0 & -e^{i (\delta + \phi) } \sin (\theta ) & e^{i \delta } \cos (\theta ) \\ \end{array} \right)$$

\begin{equation}=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right)\otimes \left( \begin{array}{cc} e^{i (\delta + \lambda + \phi) } \cos (\theta ) & e^{i( \delta + \lambda) } \sin (\theta ) \\ -e^{i (\delta + \phi) } \sin (\theta ) & e^{i \delta } \cos (\theta ) \\ \end{array} \right)=I_2\otimes U^{'}\end{equation}

where $U^{'}$ is simply another way to write the most general $2\times 2$ unitary matrix. Importantly, note that if $U=e^{i\phi}I_2$ then $U^{'}=e^{i\phi}I_2$ (and vice-versa). We can also write

\begin{equation}PM=\left( \begin{array}{cc} A^{'} & B^{'} \\ C^{'} & D^{'}\\ \end{array} \right)\end{equation}

where $A^{'}=\left( \begin{array}{cc} a_1 & b_1 \\ a_3 & b_3\\ \end{array} \right)$ (note that by our assumption $A^{'}$ is invertible). But this leads us to the same situation we had in cases $1$ and $2$. Here, we get that

\begin{equation} S_0=(D^{'}-C^{'}{A^{'}}^{-1}B^{'})^{-1}(U^{'}C^{'}{A^{'}}^{-1}-C^{'}{A^{'}}^{-1}U^{'})A^{'} \end{equation}

Thus, there is a nontrivial matrix $U$ (related to $U^{'}$ in a complicated manner) such that $\det(S_0)=0$.

Case 4: This final case is proved in much the same way as the previous cases. We simply take

\begin{equation}P=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right)\end{equation}

and the same train of logic (here $P$ is the swap of the two tensor factors, which maps $I_2\otimes U$ to $U\otimes I_2$ with the same $U$) leads us to

\begin{equation} S_0=(D^{'}-C^{'}{A^{'}}^{-1}B^{'})^{-1}(UC^{'}{A^{'}}^{-1}-C^{'}{A^{'}}^{-1}U)A^{'} \end{equation}

(where $A^{'}=\left( \begin{array}{cc} a_0 & b_0 \\ a_2 & b_2\\ \end{array} \right)$ is the matrix we assumed to be invertible). Thus, there exists a non-trivial $U$ such that $\det(S_0)=0$.

Case 5 and Case 6: These cases will be proved in a different manner than the previous cases. Simply note that

\begin{equation}\left( \begin{array}{cc} Q_1 & R_1 \\ S_1 & T_1 \\ \end{array} \right)=M^\dagger \left(I_2\otimes \left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i\tau} \\ \end{array} \right)\right)M=\left( \begin{array}{cc} A & \hat{0} \\ \hat{0} & B \\ \end{array} \right)\end{equation}

\begin{equation}\left( \begin{array}{cc} Q_0 & R_0 \\ S_0 & T_0 \\ \end{array} \right)=M^\dagger \left(\left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i\tau} \\ \end{array} \right)\otimes I_2\right)M=\left( \begin{array}{cc} C & \hat{0} \\ \hat{0} & D \\ \end{array} \right) \end{equation}

for some $2\times 2$ unitary matrices $A,B,C,D$. In both cases, it is obvious that $S_0=S_1=\hat{0}$ for all angles $\tau$, and choosing $\tau$ with $e^{i\tau}\neq 1$ makes the matrix non-trivial. Since $\hat{0}$ clearly has determinant $0$, we are done.
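For case $5$, this block-diagonal collapse is easy to confirm numerically. In this sketch the helper `u2`, the seed, the random angles, and $\tau=1.3$ are all my arbitrary choices; the global phases $e^{i\delta_j}$ are folded into the helper:

```python
import numpy as np

def u2(d, t, l, p):
    # General 2x2 unitary with global phase d (as in the statement of case 5)
    return np.exp(1j*d) * np.array(
        [[np.cos(t), -np.exp(1j*p)*np.sin(t)],
         [np.exp(1j*l)*np.sin(t), np.exp(1j*(l + p))*np.cos(t)]])

rng = np.random.default_rng(3)
U1 = u2(*rng.uniform(0, 2*np.pi, 4))
U2 = u2(*rng.uniform(0, 2*np.pi, 4))

# Case-5 matrix: rows 1,4 carry U1 in columns 1,2; rows 2,3 carry U2 in columns 3,4
M = np.zeros((4, 4), dtype=complex)
M[0, :2], M[3, :2] = U1[0], U1[1]
M[1, 2:], M[2, 2:] = U2[0], U2[1]

Dtau = np.diag([1, np.exp(1.3j)])  # diag(1, e^{i tau}) with tau = 1.3
S0 = (M.conj().T @ np.kron(Dtau, np.eye(2)) @ M)[2:, :2]
S1 = (M.conj().T @ np.kron(np.eye(2), Dtau) @ M)[2:, :2]
```

Both lower-left blocks come out exactly zero, because the columns of a case-5 matrix supported on rows $\{1,4\}$ and $\{2,3\}$ have disjoint supports, and both Kronecker products are diagonal.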

Having completed our casework, we may now begin the final portion of the proof. Let $M$ be an arbitrary $4\times 4$ unitary matrix. From our casework above, we know there exists a non-trivial matrix $U$ such that either $S_0$ or $S_1$ is non-invertible. Assume it is $S_0$ (the logic is exactly the same in the case of $S_1$). Then by definition there exists $\alpha,\beta\in\mathbb{C}$ such that

\begin{equation} S_0\left( \begin{array}{cc} \alpha \\ \beta \\ \end{array} \right)=\left( \begin{array}{cc} 0 \\ 0 \\ \end{array} \right) \end{equation}

with $|\alpha|^2+|\beta|^2=1$. Then we have that

\begin{equation}M^\dagger (U\otimes I_2)M\left( \begin{array}{cccc} \alpha\\ \beta\\ 0 \\ 0 \\ \end{array} \right)=\left( \begin{array}{cc} Q_0& R_0 \\ S_0& T_0 \\ \end{array} \right)\left( \begin{array}{cccc} \alpha\\ \beta\\ 0 \\ 0 \\ \end{array} \right) \end{equation}

However, from our definition of $\alpha$ and $\beta$, this simplifies to

\begin{equation} = \left( \begin{array}{cccc} \mu\\ \nu\\ 0 \\ 0 \\ \end{array} \right)\end{equation}

for some $\mu,\nu\in\mathbb{C}$. The last two components are zero, so (writing $\alpha=\cos(\delta)e^{ia}$ and $\beta=\sin(\delta)e^{ib}$ for suitable angles) both inner products in the factor of $f$ corresponding to $A$ vanish; if instead $S_1$ is singular, the same argument applies to the factor corresponding to $B$. Hence $f=0$ at this point. Since $U$ is non-trivial, it is not $\pm I_2$, so the associated parameters $(\theta,\lambda,\phi)$ do not satisfy $\theta=k_1\pi$ and $\phi=2\pi k_2-\lambda$. In other words, every $4\times 4$ unitary $M$ admits a zero of $f$ outside of $S$, and the answer to the question is negative.
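The whole argument can be assembled into an end-to-end numerical check: build $U$ from the lemma, extract a kernel vector of $S_1$, and watch the corresponding factor of $f$ vanish. A sketch (random $M$ and seed are arbitrary; assumes the generic case where the top-left block of $M$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
M, _ = np.linalg.qr(Z)  # random 4x4 unitary

A, C = M[:2, :2], M[2:, :2]

# Lemma with X = C A^{-1} gives a non-trivial U (eigenvalues 1 and -1)
X = C @ np.linalg.inv(A)
_, vecs = np.linalg.eig(X)
v1 = vecs[:, 0] / np.linalg.norm(vecs[:, 0])
v2 = np.array([-np.conj(v1[1]), np.conj(v1[0])])
U = np.outer(v1, v1.conj()) - np.outer(v2, v2.conj())

# B = M^dagger (I_2 kron U) M has singular lower-left block S_1
Bmat = M.conj().T @ np.kron(np.eye(2), U) @ M
S1 = Bmat[2:, :2]

# Unit kernel vector of S_1 via the SVD (right-singular vector of the ~0 value)
_, _, Vh = np.linalg.svd(S1)
alpha, beta = Vh[-1].conj()

v = np.array([alpha, beta, 0, 0])
B_factor = abs(Bmat[2] @ v) + abs(Bmat[3] @ v)  # this factor of f vanishes
```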

Second answer:

Not a full solution, but an idea for how to simplify the equation.

The vectors $\lvert e_1\rangle=(1,0,0,0)^{T}$, $\lvert e_2\rangle=(0,1,0,0)^{T}$, $\lvert e_3\rangle=(0,0,1,0)^{T}$, $\lvert e_4\rangle=(0,0,0,1)^{T}$ form a basis. $f$ is zero if, for some vector $\lvert a\rangle=e^{i\alpha}\cos(\delta)\lvert e_1\rangle+e^{i\beta}\sin(\delta)\lvert e_2\rangle$,

$$\tag{1} \langle e_3\rvert A\lvert a\rangle=\langle e_4\rvert A\lvert a\rangle=0 \quad\text{or}\quad \langle e_3\rvert B\lvert a\rangle=\langle e_4\rvert B\lvert a\rangle=0 $$

If we introduce the matrix components $A_{ik}=\langle e_i \rvert A\lvert e_k\rangle$, equation (1) can be rewritten as

$$ A_{31}e^{i\alpha}\cos(\delta)+A_{32}e^{i\beta}\sin(\delta)=A_{41}e^{i\alpha}\cos(\delta)+A_{42}e^{i\beta}\sin(\delta)=0 $$

(or the analogous equation for $B$, which is omitted). This homogeneous $2\times 2$ system has a non-trivial solution exactly when its determinant vanishes:

$$ F(\theta,\lambda,\phi)\equiv A_{42}A_{31}-A_{41}A_{32}=0 $$

This is a single complex equation in just $3$ real parameters $\theta,\lambda,\phi$, which should be easier to analyze.
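The passage from $F=0$ back to explicit angles $(\delta,\alpha,\beta)$ can be sketched numerically. Here the rank-one $2\times 2$ block is a hypothetical stand-in for $\bigl(\begin{smallmatrix}A_{31} & A_{32}\\ A_{41} & A_{42}\end{smallmatrix}\bigr)$, built so that $F=0$ by construction:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical rank-one subblock [[A31, A32], [A41, A42]]: rows proportional, so F = 0
r = rng.normal(size=2) + 1j*rng.normal(size=2)
s = rng.normal() + 1j*rng.normal()
sub = np.array([r, s*r])

F = sub[1, 1]*sub[0, 0] - sub[1, 0]*sub[0, 1]

# A unit null vector, rewritten in the form (cos(delta) e^{i alpha}, sin(delta) e^{i beta})
null = np.array([-sub[0, 1], sub[0, 0]])
null /= np.linalg.norm(null)
delta = np.arctan2(abs(null[1]), abs(null[0]))
alpha, beta = np.angle(null[0]), np.angle(null[1])
v = np.array([np.cos(delta)*np.exp(1j*alpha), np.sin(delta)*np.exp(1j*beta)])
residual = sub @ v  # both components ~0, i.e. equation (1) is satisfied
```

Any nonzero null vector can be normalized into this $(\delta,\alpha,\beta)$ form, which is why the single condition $F=0$ suffices.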