How to disentangle these operators?


How do I solve for the $\beta_k$ in $e^{\alpha_1 G_1 + \alpha_2 G_2 +\alpha_3 G_3 } =e^{\beta_1 G_1} e^{\beta_2 G_2} e^{\beta_3 G_3} e^{\beta_4 G_4}$? Note that there is no $\alpha_4$ term.

(Also, do solutions even exist for this problem? I am referring to the answer by MoisheKohan at Disentangling and reordering operator exponentials from Lie groups.)

Here the $G_k$ span the Lie algebra $\mathfrak{gl}_2(\mathbb{C})=\mathfrak{sl}_2(\mathbb{C})\oplus\mathbb{C}$:

$[G_1,G_2]=0,\\ [G_1,G_3]=[G_3,G_2]=G_4,\\ [G_1,G_4]= [G_4,G_2]=G_3,\\ [G_3,G_4]=-2G_1+2G_2$

These have the representations: \begin{equation}\begin{aligned} G_1 &= \begin{pmatrix}1&0\\0&0\end{pmatrix}\\ G_2 &= \begin{pmatrix}0&0\\0&1\end{pmatrix}\\ G_3 &= \begin{pmatrix}0&1\\1&0\end{pmatrix}\\ G_4 &= \begin{pmatrix}0&1\\-1&0\end{pmatrix} \end{aligned}\end{equation}
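(As a quick sanity check, not part of the original question: the stated commutation relations can be verified numerically from these matrices, e.g. with numpy.)

```python
import numpy as np

# The 2x2 representation given above.
G1 = np.array([[1., 0.], [0., 0.]])
G2 = np.array([[0., 0.], [0., 1.]])
G3 = np.array([[0., 1.], [1., 0.]])
G4 = np.array([[0., 1.], [-1., 0.]])

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

# Check each bracket listed in the question.
assert np.allclose(comm(G1, G2), np.zeros((2, 2)))
assert np.allclose(comm(G1, G3), G4) and np.allclose(comm(G3, G2), G4)
assert np.allclose(comm(G1, G4), G3) and np.allclose(comm(G4, G2), G3)
assert np.allclose(comm(G3, G4), -2 * G1 + 2 * G2)
```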

Using these representations, and writing $s=\sqrt{(\alpha_1-\alpha_2)^2+4\alpha_3^2}$ for brevity, I end up with the matrix equation \begin{equation} \mathrm{LHS} = e^{\frac{\alpha_1+\alpha_2}{2}}\begin{pmatrix}\cosh\frac{s}{2}+\frac{\alpha_1-\alpha_2}{s}\sinh\frac{s}{2} & \frac{2\alpha_3}{s}\sinh\frac{s}{2}\\[2pt] \frac{2\alpha_3}{s}\sinh\frac{s}{2} & \cosh\frac{s}{2}-\frac{\alpha_1-\alpha_2}{s}\sinh\frac{s}{2}\end{pmatrix} \end{equation}

and \begin{equation} \mathrm{RHS} = \begin{pmatrix}e^{\beta_1}\left(\cos\beta_4\cosh\beta_3-\sin\beta_4\sinh\beta_3\right)&e^{\beta_1}\left(\sin\beta_4\cosh\beta_3+\cos\beta_4\sinh\beta_3\right)\\ e^{\beta_2}\left(-\sin\beta_4\cosh\beta_3+\cos\beta_4\sinh\beta_3\right)&e^{\beta_2}\left(\cos\beta_4\cosh\beta_3+\sin\beta_4\sinh\beta_3\right)\end{pmatrix} \end{equation}
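(Another illustrative check, mine and not in the original post: both closed forms above agree with scipy.linalg.expm; the parameter values below are arbitrary placeholders.)

```python
import numpy as np
from scipy.linalg import expm

G1 = np.array([[1., 0.], [0., 0.]])
G2 = np.array([[0., 0.], [0., 1.]])
G3 = np.array([[0., 1.], [1., 0.]])
G4 = np.array([[0., 1.], [-1., 0.]])

# LHS: closed form vs. direct matrix exponential.
a1, a2, a3 = 0.7, -0.3, 0.5                      # arbitrary test values
s = np.sqrt((a1 - a2) ** 2 + 4 * a3 ** 2)
lhs = np.exp((a1 + a2) / 2) * np.array([
    [np.cosh(s / 2) + (a1 - a2) / s * np.sinh(s / 2), 2 * a3 / s * np.sinh(s / 2)],
    [2 * a3 / s * np.sinh(s / 2), np.cosh(s / 2) - (a1 - a2) / s * np.sinh(s / 2)],
])
assert np.allclose(lhs, expm(a1 * G1 + a2 * G2 + a3 * G3))

# RHS: closed form vs. the product of the four exponentials.
b1, b2, b3, b4 = 0.2, -0.1, 0.4, 0.3             # arbitrary test values
rhs = np.array([
    [np.exp(b1) * (np.cos(b4) * np.cosh(b3) - np.sin(b4) * np.sinh(b3)),
     np.exp(b1) * (np.sin(b4) * np.cosh(b3) + np.cos(b4) * np.sinh(b3))],
    [np.exp(b2) * (-np.sin(b4) * np.cosh(b3) + np.cos(b4) * np.sinh(b3)),
     np.exp(b2) * (np.cos(b4) * np.cosh(b3) + np.sin(b4) * np.sinh(b3))],
])
assert np.allclose(rhs, expm(b1 * G1) @ expm(b2 * G2) @ expm(b3 * G3) @ expm(b4 * G4))
```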

There are two answers below.

Accepted answer

I'm only writing this to avoid a garland of thrust-and-parry comments, and to remind you of the standard method: the drill you may have covered in the physics of spin 1/2 through Pauli matrices is the following.

First clean up the formulas and parameters that seem to overwhelm you: $$ G_3=\sigma_1, \qquad G_4=i\sigma_2, \qquad 2G_1=\sigma_3+ I, \qquad 2G_2=I-\sigma_3. $$ It is thus obvious that $G_1+G_2$ is in the center of your Lie algebra (it is the $2\times 2$ identity matrix) and factors out of the problem: it should be eliminated with extreme prejudice.

The remaining three Lie algebra elements are traceless, so the group element is now the exponential of a traceless $2\times 2$ matrix, an element of $SL(2)$. That is, $$ e^{(\alpha_1 + \alpha_2)/2}\, e^{(\alpha_1-\alpha_2) \sigma_3/2 + \alpha_3 \sigma_1 } =e^{(\beta_1 +\beta_2)/2}\, e^{(\beta_1 -\beta_2)\sigma_3/2}\, e^{\beta_3 \sigma_1}\, e^{i\beta_4 \sigma_2} . $$ Once you appreciate that $\alpha_1+\alpha_2=\beta_1+\beta_2$, one α and one β are redundant and may be eliminated. Do that, introducing primed variables for the half-differences, to solve $$ e^{\alpha' \sigma_3 + \alpha_3 \sigma_1 } = e^{\beta' \sigma_3}\, e^{\beta_3 \sigma_1}\, e^{i\beta_4 \sigma_2} . $$ Now, given the cornerstone expansion of the Pauli vector adduced in the WP link provided, perform the multiplication on the RHS and equate it to the expansion of the LHS. One combination of the three remaining βs will be constrained to zero: specifically, the coefficient of $\sigma_2$ on the RHS, which is absent on the LHS (do you see why?). So there are only two βs to solve for, matching the two αs.
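(A minimal numerical sketch of this reduction, my illustration rather than part of the answer: solve the primed equation for real test values with scipy.optimize.fsolve. Three matrix entries give three equations for $\beta'$, $\beta_3$, $\beta_4$; since both sides have determinant 1, the fourth entry then follows.)

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

s1 = np.array([[0., 1.], [1., 0.]])            # sigma_1
s3 = np.array([[1., 0.], [0., -1.]])           # sigma_3
is2 = np.array([[0., 1.], [-1., 0.]])          # i*sigma_2, a real matrix

ap, a3 = 0.6, 0.25                             # arbitrary test values of alpha', alpha_3
target = expm(ap * s3 + a3 * s1)               # symmetric, det = 1

def residual(b):
    bp, b3, b4 = b
    P = expm(bp * s3) @ expm(b3 * s1) @ expm(b4 * is2)
    D = P - target
    # Three entries suffice: both sides have det = 1, so the (2,2) entry follows.
    return [D[0, 0], D[0, 1], D[1, 0]]

bp, b3, b4 = fsolve(residual, x0=[0.1, 0.1, 0.1])
print(bp, b3, b4)   # for parameters far from 0, a better initial guess may be needed
```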

If I were you, I'd take my remaining two αs to be pure imaginary, so the LHS is a group element of $SU(2)$; and $\beta_4$ real, while $\beta_3$ and $\beta'$ pure imaginary, so that you are merely composing three elements of $SU(2)$ on the right, three unitary $2\times 2$ matrices, into a restricted unitary matrix on the LHS.

Second answer

Let me just record the answer based on my comments without going into details:

  1. Over the complex numbers, this problem has no solution for general values of $\alpha_1, \alpha_2, \alpha_3$.

  2. For "generic" values of $\alpha_1, \alpha_2, \alpha_3$ the problem does have a solution and, in principle, there is even an algorithm for finding one. Here "generic" means: There exists a complex-analytic subvariety $A\subset {\mathbb C}^3$ (with nonempty complement) such that as long as $(\alpha_1, \alpha_2, \alpha_3)\notin A$, there is a solution. Even more: There exists a system of polynomial equations $P(M)=0$ (with complex coefficients) on complex $2\times 2$ matrices $M$ such that if $M$ satisfies $P(M)\ne 0$, then you can find your $\beta_1,...,\beta_4\in {\mathbb C}$ such that $$ M= \exp(\beta_1 G_1)... \exp(\beta_4 G_4). $$ Again, one in principle can write down the equation $P$ explicitly, but I will not do this (do not even ask).

  3. The answer is quite different if you consider real coefficients:

For every real $2\times 2$ matrix $M$ with positive determinant there exist real numbers $\beta_1,...,\beta_4\in {\mathbb R}$ such that $$ M= \exp(\beta_1 G_1)... \exp(\beta_4 G_4). $$ (Positive determinant is necessary: the four factors have determinants $e^{\beta_1}$, $e^{\beta_2}$, $1$, $1$, so the product has determinant $e^{\beta_1+\beta_2}>0$.)

The key to the proof is to consider the linear-fractional transformations $$ \gamma: z\mapsto \frac{az+b}{cz+d}, \quad z\in {\mathbb C}, $$ corresponding to matrices (with real coefficients) $$ \left[\begin{array}{cc} a&b\\ c&d\end{array}\right] $$ satisfying $ad-bc=1$. The maps $\gamma$ send the complex upper half-plane $U=\{z: \operatorname{Im}(z)>0\}$ to itself and preserve the hyperbolic metric on $U$. The linear-fractional transformations $\gamma_1, \gamma_3$ corresponding to the matrices $\exp(\beta_1 G_1), \exp(\beta_3 G_3)$ are hyperbolic, while $\gamma_4$, corresponding to the matrix $\exp(\beta_4 G_4)$, is elliptic. Each hyperbolic linear-fractional transformation $\gamma$ of $U$ preserves a hyperbolic geodesic $L_\gamma\subset U$ and acts on $L_\gamma$ as an intrinsic translation. This geodesic is called the axis of $\gamma$. In contrast, an elliptic linear-fractional transformation has a unique fixed point in $U$. (The transformation $\gamma_4$ will fix the point $i\in U$.)
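(An illustrative computation, mine rather than the answerer's: the Möbius action of these matrices on the upper half-plane. It checks that $\exp(\beta_4 G_4)$ fixes $i$, while $\exp(\beta_1 G_1)$ and $\exp(\beta_3 G_3)$ fix boundary points, as hyperbolic elements do; their axes, the imaginary axis and the unit semicircle, meet at $i$.)

```python
import numpy as np
from scipy.linalg import expm

G1 = np.array([[1., 0.], [0., 0.]])
G3 = np.array([[0., 1.], [1., 0.]])
G4 = np.array([[0., 1.], [-1., 0.]])

def mobius(M, z):
    """Linear-fractional action z -> (az+b)/(cz+d), after scaling M to det 1."""
    M = M / np.sqrt(np.linalg.det(M))   # scalar multiples act identically
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

# Elliptic: exp(b4*G4) is a rotation matrix and fixes i.
assert np.isclose(mobius(expm(0.7 * G4), 1j), 1j)

# Hyperbolic: exp(b1*G1) fixes 0 and infinity (axis = imaginary axis) ...
assert np.isclose(mobius(expm(0.5 * G1), 0.0), 0.0)
# ... and exp(b3*G3) fixes -1 and 1 (axis = unit semicircle through i).
assert np.isclose(mobius(expm(0.5 * G3), 1.0), 1.0)
assert np.isclose(mobius(expm(0.5 * G3), -1.0), -1.0)
```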

There are many places where this material is discussed, for instance:

Anderson, James W., Hyperbolic Geometry, Springer Undergraduate Mathematics Series, London: Springer, 2005, xi+276 pp. ISBN 1-85233-934-9. ZBL1077.51008.

Now, the key property that $\gamma_1, \gamma_3$ satisfy is that their axes intersect in $U$. Using this, one verifies that for any pair of points $z, w\in U$ there are (real) parameters $\beta_1, \beta_3$ such that $$ \gamma_1 \gamma_3(z)=w. $$ (In contrast, this existence property fails if the axes do not intersect.) Finding such $\gamma_1, \gamma_3$ amounts mostly to computing the intersection point (in $U$) between two circles in the complex plane, so it can be done constructively. These circles (more precisely, intersections of the circles with $U$) are certain orbits of 1-parameter groups of linear-fractional transformations containing $\gamma_1, \gamma_3$.

Using this, one verifies that for each linear-fractional transformation $\gamma$, there are (real) parameters $\beta_1, \beta_3, \beta_4$ such that $$ \gamma= \gamma_1 \gamma_3 \gamma_4. $$ Namely, consider $w=\gamma(i)$ and find $\gamma_1, \gamma_3$ such that $$ \gamma_1 \gamma_3(i)=w. $$ Then $\gamma_3^{-1}\gamma_1^{-1}\gamma$ will fix $i$ and hence will equal $\gamma_4$ for some value of $\beta_4$.

From this, one concludes that for every real matrix $M\in GL(2, {\mathbb R})$ with positive determinant there are real parameters $\beta_1,...,\beta_4\in {\mathbb R}$ such that $$ M= \exp(\beta_1 G_1)... \exp(\beta_4 G_4). $$ Each of the steps in this argument is not hard, but each requires a proof, and I will not attempt to write one.
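(Finally, a numerical illustration of this claim, mine and not part of the answer: recover $\beta_1,\dots,\beta_4$ for a concrete positive-determinant $M$ by root-finding. The test matrix and initial guess are arbitrary choices.)

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

G1 = np.array([[1., 0.], [0., 0.]])
G2 = np.array([[0., 0.], [0., 1.]])
G3 = np.array([[0., 1.], [1., 0.]])
G4 = np.array([[0., 1.], [-1., 0.]])

M = np.array([[2.0, 1.0], [0.5, 1.0]])   # arbitrary test matrix, det = 1.5 > 0

def residual(b):
    P = expm(b[0] * G1) @ expm(b[1] * G2) @ expm(b[2] * G3) @ expm(b[3] * G4)
    return (P - M).ravel()              # 4 equations for the 4 unknowns

beta = fsolve(residual, x0=np.zeros(4))  # matrices far from I may need a better guess
print(beta, np.max(np.abs(residual(beta))))   # residual ~ 0 confirms the factorization
# Note det(exp(b1 G1)...exp(b4 G4)) = e^{b1+b2} > 0, which is why
# positive determinant is required of M.
```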