Two operators $X$ and $Z$ in an infinite dimensional Hilbert space satisfying $X^2=Z^2=I$ and $\{X,Z\}= 0$


I am seeking to extend the following theorem to the case of infinite dimensional Hilbert space:

Suppose we have two Hermitian operators $X$ and $Z$ on a finite-dimensional Hilbert space $\mathcal H$ satisfying the following relations:

$$X^2=Z^2=I, \quad \{X,Z\}\equiv XZ+ZX= 0.$$

Then it is not very hard to prove that, up to a unitary change of basis,

$$X=\left(\begin{array}{cc}0&1\\1&0\end{array}\right)\otimes I,\qquad Z=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\otimes I\tag{*}.$$

Indeed, we can collect all pairs of vectors $v$ and $Xv$ such that $Zv=v$ and $ZXv=-XZv=-Xv$. In the orthonormal basis formed by these pairs, $X$ and $Z$ have the matrix representations given above.
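A quick numerical illustration of this construction (not part of the original question; the dimension $6$ and the random unitary are assumptions made for the sake of the example):

```python
# Sketch: verify the pairing argument on a 6-dimensional space. We take
# X = sigma_x (x) I_3, Z = sigma_z (x) I_3 and conjugate by a random unitary
# to hide the tensor-product structure, then recover the pairs (v, Xv).
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I3 = np.eye(3)

# Random unitary from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))
X = Q @ np.kron(sx, I3) @ Q.conj().T
Z = Q @ np.kron(sz, I3) @ Q.conj().T

# The hypotheses: X^2 = Z^2 = I and {X, Z} = 0.
assert np.allclose(X @ X, np.eye(6)) and np.allclose(Z @ Z, np.eye(6))
assert np.allclose(X @ Z + Z @ X, 0)

# Pairs (v, Xv): if Zv = v, then Z(Xv) = -X(Zv) = -Xv.
w, V = np.linalg.eigh(Z)
plus = V[:, np.isclose(w, 1)]   # orthonormal basis of ker(Z - I)
for i in range(plus.shape[1]):
    v = plus[:, i]
    assert np.allclose(Z @ (X @ v), -(X @ v))
```

In the basis $\{v_i, Xv_i\}$ built from the columns of `plus`, the operators take exactly the block form $(*)$.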

I am wondering whether this theorem extends to the infinite-dimensional case. This may require a rigorous definition of $X$ and $Z$, and an appeal to spectral theory. How exactly should I reformulate the problem in order to obtain a result similar to $(*)$?


Best answer:

I assume that your operators are bounded operators on a Hilbert space $H$; otherwise I am not sure that the result I am proving remains true in the absence of further assumptions, such as the essential self-adjointness of the operator $\sum_{a=1}^3 X_a^2$.

Define $iY=ZX$ and next $X_1:=X$, $X_2:=Y$, $X_3:= Z$.

With this definition and your hypotheses, one easily sees that $X,Y,Z$ are bounded self-adjoint operators such that $$\{X_a,X_b\}= 2\delta_{ab}I\tag{1}$$ $$[X_a,X_b] = 2 i\sum_c \epsilon_{abc} X_c\:.\tag{2}$$ Relations (2) are the commutation relations of $su(2)$.
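As a sanity check, relations (1) and (2) can be verified numerically for the concrete choice $X=\sigma_1$, $Z=\sigma_3$ (an illustrative assumption; any pair satisfying the hypotheses would do):

```python
# Sketch: with X = sigma_x and Z = sigma_z, the definition iY = ZX gives
# Y = -i ZX = sigma_y, and (1), (2) hold for all index pairs.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sy = -1j * sz @ sx                 # iY = ZX  =>  Y = -i ZX
assert np.allclose(sy, np.array([[0, -1j], [1j, 0]]))

Xs = [sx, sy, sz]
eps = np.zeros((3, 3, 3))          # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for a in range(3):
    for b in range(3):
        anti = Xs[a] @ Xs[b] + Xs[b] @ Xs[a]
        comm = Xs[a] @ Xs[b] - Xs[b] @ Xs[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(2))       # relation (1)
        assert np.allclose(comm, 2j * sum(eps[a, b, c] * Xs[c]   # relation (2)
                                          for c in range(3)))
```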

As the operators are bounded and self-adjoint, $\sum_{a=1}^3 X_a^2$ is bounded and self-adjoint as well, so in particular it is essentially self-adjoint. Nelson's theorem implies that the Hilbert space supports a strongly-continuous unitary representation of $SU(2)$, whose Lie algebra is represented by the operators $-iX_a$.

At this point, since $SU(2)$ is compact, the Peter-Weyl theorem says that $H$ decomposes into an orthogonal direct sum of finite-dimensional irreducible subrepresentations $H = \oplus_{j}H_j$. Here $j=0,1/2,1,3/2,2,\ldots$. The generators of the $j$-th subrepresentation are the restrictions of the $X_a$ to $H_j$.

Let us focus on these generators $X_{aj}$. As is well known, $H_j$ is the eigenspace of $X_j^2= \sum_{a=1}^3 X_{aj}^2$ with eigenvalue $4j(j+1)$. So $$X_j^2 = 4j(j+1)I_j$$ but the constraint (1) implies $$3 I_j= 4j(j+1)I_j\:,$$ thus $j=1/2$. The only possible representation appearing in the decomposition of $H$ is the one with $j=1/2$. It may appear infinitely many times if $H$ is infinite dimensional. Thus $$H = H_{1/2}\otimes K$$ where $K$ is infinite dimensional if $H$ is. The representatives of the $X_a$ on $H_{1/2}$ are the Pauli matrices. We end up with $$X_1 = \sigma_1 \otimes I\:, \quad X_2 = \sigma_2 \otimes I \:, \quad X_3 = \sigma_3 \otimes I\:,$$ where $I$ is the identity operator in $K$.
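For completeness, the step from $3 I_j= 4j(j+1)I_j$ to $j=1/2$ is elementary algebra:

$$4j(j+1) = 3 \;\Longrightarrow\; 4j^2 + 4j - 3 = 0 \;\Longrightarrow\; j = \frac{-4 \pm \sqrt{16+48}}{8} = \frac{-4 \pm 8}{8}\:,$$

and since $j\geq 0$, only the root $j=1/2$ survives (the root $j=-3/2$ is excluded).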

Second answer:

We have two self-adjoint operators $X$ and $Z$ satisfying (I assume that by $\{X,Z\}$ you mean the commutator $[X,Z]$)

\begin{eqnarray*} X^{2} &=&Z^{2}=I \\ \lbrack X,Z] &=&0 \end{eqnarray*} Then they share the same spectral measure $\{E(d\lambda ),\lambda \in \mathbb{R}\}$ and we can write \begin{eqnarray*} X &=&\int f(\lambda )E(d\lambda )\Rightarrow X^{2}=\int f(\lambda )^{2}E(d\lambda )=I\Rightarrow f(\lambda )^{2}=1\;\mathrm{a.e.} \\ Z &=&\int g(\lambda )E(d\lambda )\Rightarrow Z^{2}=\int g(\lambda )^{2}E(d\lambda )=I\Rightarrow g(\lambda )^{2}=1\;\mathrm{a.e.} \end{eqnarray*} Note that $f(\lambda )$ and $g(\lambda )$ are real but in general not positive. We can decompose \begin{eqnarray*} f(\lambda ) &=&\chi _{A}(\lambda )-\chi _{B}(\lambda ) \\ g(\lambda ) &=&\chi _{C}(\lambda )-\chi _{D}(\lambda ) \end{eqnarray*} where \begin{equation*} A=\{\lambda \mid f(\lambda )\geqslant 0\},\quad B=\{\lambda \mid f(\lambda )<0\}, \end{equation*} and similarly for $C$ and $D$. Here "a.e." is understood throughout.

Subtraction gives \begin{equation*} \int \{f(\lambda )^{2}-g(\lambda )^{2}\}E(d\lambda )=0, \end{equation*} so \begin{equation*} f(\lambda )^{2}=g(\lambda )^{2}\;\mathrm{a.e.} \end{equation*} Thus \begin{eqnarray*} f(\lambda ) &=&g(\lambda ),\;\lambda \in (A\cap C)\cup (B\cap D), \\ f(\lambda ) &=&-g(\lambda ),\;\lambda \in (A\cap D)\cup (B\cap C). \end{eqnarray*} We note that in general there are many pairs $X$ and $Z$ satisfying these requirements.
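A minimal sketch of this non-uniqueness, assuming a discrete spectral measure so that $X$ and $Z$ become diagonal matrices with entries $\pm 1$ (the specific sign patterns below are arbitrary choices for illustration):

```python
# Sketch of the commuting case: with a common (here discrete) spectral measure,
# X and Z are simultaneously diagonal with arbitrary +-1 entries f and g, so
# many inequivalent pairs satisfy X^2 = Z^2 = I and [X, Z] = 0.
import numpy as np

f = np.array([1, 1, -1, -1, 1])    # f(lambda) = chi_A - chi_B, values +-1
g = np.array([1, -1, 1, -1, -1])   # g(lambda) = chi_C - chi_D, chosen freely
X = np.diag(f).astype(float)
Z = np.diag(g).astype(float)

assert np.allclose(X @ X, np.eye(5)) and np.allclose(Z @ Z, np.eye(5))
assert np.allclose(X @ Z - Z @ X, 0)   # commutator vanishes, unlike {X, Z}
assert np.allclose(f**2, g**2)         # f^2 = g^2 (both identically 1)
```

Any other choice of signs for `f` and `g` gives another valid pair, which is the non-uniqueness noted above.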

Third answer:

The following argument works in both finite and infinite dimensions. In the infinite-dimensional case, the setting is that $X$ and $Z$ are bounded operators acting on a separable Hilbert space.

From $Z^2=I$, you get that the spectrum of $Z$ is contained in $\{1,-1\}$. From $ZX+XZ=0$ we get that $Z\ne \pm I$ (if $Z=\pm I$, then $0=ZX+XZ=\pm 2X$, so $X=0$, contradicting $X^2=I$), so the spectrum of $Z$ is $\{1,-1\}$. It is then immediate from the spectral theorem that $Z$ is unitarily equivalent to $$ Z=\begin{bmatrix}I_n&0\\0& -I_m\end{bmatrix}, $$ where the blocks are given, respectively, by the projections onto $\ker(Z-I)$ and $\ker(Z+I)$. If we represent $X$ as a block matrix with respect to the same decomposition, we have $$ X=\begin{bmatrix}A&B\\ B^*&C\end{bmatrix}. $$ But then $$ \begin{bmatrix}0&0\\0&0\end{bmatrix}=XZ+ZX=\begin{bmatrix}2A&0\\0&-2C\end{bmatrix}.$$ It follows that $A=C=0$. From $X^2=I$, we now get that $BB^*=I_n$ and $B^*B=I_m$. If $n$ or $m$ is finite, taking the trace we see that $n=m$. So $n=m$ (whether they are finite or infinite) and $$ X=\begin{bmatrix}B&0\\0&I_n\end{bmatrix}\,\begin{bmatrix}0&I_n\\ I_n&0\end{bmatrix}\,\begin{bmatrix}B&0\\0&I_n\end{bmatrix}^*. $$ Now a straightforward computation (using that $BB^*=I_n$) shows that $$ \begin{bmatrix}B&0\\0&I_n\end{bmatrix}\,\begin{bmatrix}I_n&0\\0&- I_n\end{bmatrix}\,\begin{bmatrix}B&0\\0&I_n\end{bmatrix}^*=\begin{bmatrix}BB^*&0\\0&- I_n\end{bmatrix}=\begin{bmatrix}I_n&0\\0&- I_n\end{bmatrix}=Z. $$ Thus, writing $U=\begin{bmatrix}B&0\\0& I_n\end{bmatrix}$, we have $$ X=U(\sigma_x\otimes I_n)U^*,\ \ \ Z=U(\sigma_z\otimes I_n)U^*. $$
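The block construction can be checked numerically; here $n=m=3$ and the particular unitary $B$ are illustrative assumptions, not part of the original argument:

```python
# Sketch: realize the block decomposition above with n = m = 3 and a
# random 3x3 unitary B, and check that U conjugates the canonical pair
# (sigma_x (x) I_n, sigma_z (x) I_n) to (X, Z).
import numpy as np

rng = np.random.default_rng(1)
n = 3
B, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
I_n = np.eye(n)
Zero = np.zeros((n, n))

Z = np.block([[I_n, Zero], [Zero, -I_n]])
X = np.block([[Zero, B], [B.conj().T, Zero]])   # A = C = 0 forced by {X,Z} = 0
U = np.block([[B, Zero], [Zero, I_n]])

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(U @ np.kron(sx, I_n) @ U.conj().T, X)
assert np.allclose(U @ np.kron(sz, I_n) @ U.conj().T, Z)
assert np.allclose(X @ Z + Z @ X, 0)            # the original hypotheses hold
assert np.allclose(X @ X, np.eye(2 * n)) and np.allclose(Z @ Z, np.eye(2 * n))
```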