While reading Nielsen & Chuang's Quantum Computation and Quantum Information, I was troubled by their definition of a diagonalizable linear operator, which requires the eigenvectors to be orthogonal, thus implying that diagonalizable is equivalent to normal. Are there good reasons to add the restriction of orthogonality for the eigenvectors?
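For concreteness, here is a minimal numerical sketch (using NumPy; the matrix is my own illustrative choice, not from the book) of an operator that is diagonalizable in the unrestricted sense — a full set of linearly independent eigenvectors — yet not normal, so its eigenvectors cannot be chosen orthogonal:

```python
import numpy as np

# A has two distinct eigenvalues (1 and 2), so it is diagonalizable,
# but it is not normal: A A* != A* A.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are (unit-norm) eigenvectors

# Diagonalizable: P^{-1} A P is diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))

# Not normal, hence not unitarily diagonalizable.
assert not np.allclose(A @ A.conj().T, A.conj().T @ A)

# The eigenvectors are linearly independent but not orthogonal.
overlap = np.vdot(P[:, 0], P[:, 1])
assert abs(overlap) > 1e-6
```

Under Nielsen & Chuang's definition (orthogonal eigenvectors required), this $A$ would not count as "diagonalizable", which is exactly the discrepancy with the standard linear-algebra definition.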
Asked 2026-04-06
Non-equivalent definitions of diagonalizable linear operator on Hilbert space?
84 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 best solution below.
In quantum mechanics, a measurement of an observable $A$ in a state $x$ of the system is made through the quadratic form $\langle Ax,x\rangle$, and this value must be real, so you are working with a symmetric linear operator $A$. The requirement that $A$ be self-adjoint, and not just symmetric, is made for various reasons. You are then dealing with eigenfunction expansions of self-adjoint linear operators on a Hilbert space, and eigenvectors of such an $A$ are automatically orthogonal if they are associated with different eigenvalues. It is a little more difficult to keep thinking in terms of orthogonal eigenfunction expansions in the general case, but the notation is compelling enough that physicists keep it and adapt it to the more general case.

Mathematicians abstract to a spectral measure $E$ in order to represent $A$:
$$ Ax=\int_{\sigma(A)}\lambda\, dE(\lambda)x. $$
Here $E(S)x \perp E(T)x$ whenever $S\cap T=\emptyset$, so the orthogonality is built in, but it has been generalized beyond trying to keep to one-dimensional eigenfunction expansions of the form
$$ x = \int_{\sigma(A)}\langle \phi_{\lambda}|x\rangle\phi_{\lambda}\, d\lambda. $$
(Physicists use inner products that are linear in the second coordinate and conjugate-linear in the first, which accounts for $x$ being on the right in their inner product.) The physicists push the mathematicians' version of orthogonality — $E(S)x\perp E(T)x$ if $S\cap T=\emptyset$ — to an infinitesimal limit and try to think of $\phi_{\lambda}\,d\lambda$ in these terms. I personally do not like that notation much, but it has a long history, and it seems to be here to stay.

For simple cases, such as the ordinary Fourier transform, this is just fine:
$$ f = \int_{-\infty}^{\infty}\langle e_{\lambda},f\rangle e_{\lambda}\, d\lambda. $$
In this case, the above is a simple recasting of the Fourier transform and its inverse:
$$ f = \int_{-\infty}^{\infty}\left\langle f,\frac{1}{\sqrt{2\pi}}e^{i\lambda x'}\right\rangle \frac{1}{\sqrt{2\pi}} e^{i\lambda x}\, d\lambda, \\ \frac{1}{i}\frac{d}{dx}f = \int_{-\infty}^{\infty}\lambda\left\langle f,\frac{1}{\sqrt{2\pi}}e^{i\lambda x'}\right\rangle \frac{1}{\sqrt{2\pi}}e^{i\lambda x}\, d\lambda. $$
Once you veer away from the simple cases, it is hard to keep the physicists' notation straight and accurate, because the eigenspaces are not spanned by one-parameter families of functions $\lambda\mapsto\phi_{\lambda}$. That is why von Neumann came up with his spectral-integral representation of a self-adjoint linear operator on a Hilbert space: $A = \int_{\sigma(A)}\lambda\, dE(\lambda)$. This works in the general case, and it retains much of the original character without having to try to fit it back into Dirac notation.
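The spectral representation $A = \int_{\sigma(A)}\lambda\, dE(\lambda)$ has a transparent finite-dimensional analogue, which can be sketched numerically (the Hermitian matrix below is my own illustrative choice):

```python
import numpy as np

# Finite-dimensional analogue of A = ∫ λ dE(λ): for a Hermitian A,
# A decomposes as a sum of eigenvalues times orthogonal spectral
# projections E_k, and E_j E_k = 0 for j != k (disjoint spectral sets).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # Hermitian (real symmetric)

eigvals, Q = np.linalg.eigh(A)  # Q is unitary: orthonormal eigenbasis

# Rank-one spectral projections E_k = q_k q_k^*.
projections = [np.outer(Q[:, k], Q[:, k].conj()) for k in range(len(eigvals))]

# Reconstruct A = sum_k lambda_k E_k (the discrete spectral integral).
A_rebuilt = sum(lam * E for lam, E in zip(eigvals, projections))
assert np.allclose(A, A_rebuilt)

# Built-in orthogonality: projections onto disjoint parts of the
# spectrum annihilate each other.
assert np.allclose(projections[0] @ projections[1], 0.0)
```

Here the orthogonality $E(S)x \perp E(T)x$ for $S\cap T=\emptyset$ is visible as $E_j E_k = 0$ for $j\neq k$; for a non-normal matrix, `eigh` does not apply and no such orthogonal decomposition exists.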