Why is there no matrix representation of a linear operator


If a Hilbert space $H$ has orthonormal basis $\{e_i\}$, then a linear operator $A:H \to H$ can only be defined by a matrix (i.e., by the coefficients $a_{i,j} = \langle Ae_i,e_j \rangle$) if the operator is bounded. I can't fully understand why we need this condition. I don't need an explicit counter-example, since it seems to be hard to construct unbounded linear operators, but I would like an explanation of what exactly it means for a linear operator to be unbounded. Thanks!

Edit: sorry, I worded my question badly. When I said 'I would like an explanation of what exactly it means for a linear operator to be unbounded' I meant in the context of this matrix representation. I.e., if we have a discontinuous linear operator on an infinite-dimensional Hilbert space, why does this matrix representation fail?

To help you understand my confusion better, given this matrix representation, it's easy to see that for any $x = \sum_i x_i e_i \in H$, $Tx = \sum_k \left(\sum_i x_i a_{i,k}\right) e_k$. I believe this is ill-defined when $T$ is unbounded (equivalently, discontinuous), but why is this? Is it because these infinite sums might be conditionally convergent? I can't construct any nice examples. Thanks in advance!
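To make the formula above concrete, here is a small numerical sketch (the diagonal operator and the truncation size are illustrative choices, not from the question): in a finite truncation, the coefficients of $Tx$ are just a matrix–vector product against $a_{i,k} = \langle Ae_i, e_k\rangle$.

```python
import numpy as np

# Hypothetical finite truncation of the formula (Tx)_k = sum_i x_i * a[i, k],
# using the bounded diagonal operator A e_i = (1/2)^i e_i as an example.
n = 8
a = np.diag([0.5 ** i for i in range(n)])  # a[i, k] = <A e_i, e_k>
x = np.ones(n)                             # coefficients x_i of x
tx = x @ a                                 # (Tx)_k = sum_i x_i * a[i, k]
print(tx)                                  # k-th coefficient is (1/2)^k
```

For a bounded operator this truncated sum converges to the true coefficients as $n$ grows; the question is why this breaks down in the unbounded case.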


Best Answer

An example of an unbounded operator is $$ Lf=-f'' $$ on the domain $\mathcal{D}(L)$ consisting of all $f\in L^2[0,\pi]$ such that $f$ and $f'$ are absolutely continuous, $f''\in L^2[0,\pi]$, and $f(0)=f(\pi)=0$. In fact, this operator $L : \mathcal{D}(L)\subset L^2[0,\pi]\rightarrow L^2[0,\pi]$ is an unbounded selfadjoint operator, with a complete orthonormal basis of eigenfunctions $\{ s_n \}_{n=1}^{\infty}\subset L^2[0,\pi]$, where $$ s_n = \sqrt{2/\pi}\,\sin(nx). $$ The eigenvalues of $L$ are $n^2$ for $n=1,2,3,\cdots$. This is an unbounded operator because $Ls_n = n^2 s_n$, which prevents there from being a constant $M$ such that $\|Lf\| \le M\|f\|$ for all $f\in \mathcal{D}(L)$. You can see from this that $L$ is discontinuous, because $\{ \frac{1}{n}s_n \}_{n=1}^{\infty}$ is a sequence in $L^2[0,\pi]$ that tends to $0$ in $L^2[0,\pi]$, while $\{ L(\frac{1}{n}s_n)=ns_n \}$ does not.
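The blow-up can be checked numerically (a sketch on a grid, not a proof; the normalization $\sqrt{2/\pi}$ makes the $s_n$ unit vectors in $L^2[0,\pi]$): the inputs $\frac{1}{n}s_n$ shrink to $0$ in $L^2$ while the outputs $L(\frac{1}{n}s_n) = n s_n$ grow without bound.

```python
import numpy as np

# Numerical sketch: input norms ~ 1/n shrink while output norms ~ n blow up.
x = np.linspace(0.0, np.pi, 100_001)
dx = x[1] - x[0]

def l2_norm(f):
    # crude Riemann-sum approximation of the L^2[0, pi] norm
    return np.sqrt(np.sum(f ** 2) * dx)

norms = {}
for n in (1, 10, 100):
    s_n = np.sqrt(2.0 / np.pi) * np.sin(n * x)
    norms[n] = (l2_norm(s_n / n), l2_norm(n * s_n))
    print(n, norms[n])  # input norm ~ 1/n, output norm ~ n
```

No constant $M$ can bound the ratio of output norm to input norm, which is exactly the failure of boundedness.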

The thing that really goes wrong with such an operator $L$ is this: let $Mf=-f''$ on the domain $\mathcal{D}(M)$ defined in the same way as $\mathcal{D}(L)$, except without the endpoint conditions. Then $L$ and $M$ agree on the basis elements given above, but their domains are not the same. So the operator $L$ is not uniquely determined by its action on the orthonormal basis $\{ s_n \}$. This inability to distinguish different operators by their actions on a basis is a serious problem, and it rules out dealing with general operators as matrices.

Answer

Even in your case, your operator cannot be defined by a matrix unless $H$ is of finite dimension. In that case every linear operator will be bounded.

I am not aware of any more general definition of an "infinite-dimensional" matrix and I would assume that such a definition is not attempted because it would not make sense in the context of unbounded operators.

Edit: Okay, since Solomonoff's Secret pointed out that there is a straightforward way of extending the "matrix" definition to the infinite-dimensional case, and in fact terms like "infinite matrix" are sometimes found in the literature, I feel I should be more careful with the statement I made above and elaborate:

Matrices were historically introduced as a means of facilitating easy notation and computation in the finite-dimensional context.

With the advent of computers, the issue of data storage and organisation became relevant: Finite matrices can be seen as inducing a convention on how to store the coefficients of a linear operator between finite-dimensional vector spaces in memory. This is in a sense "notation" in a machine context.

When one requires applicability to computation to be a feature of any generalised matrix concept, there are two possible paths to follow:

1.) Some linear operators between infinite-dimensional spaces may not be describable by a finite number of coefficients with respect to the given bases of those spaces; however, it may still be the case that their sets of coefficients can be described in a finite way, for example if finitely many Fourier coefficients "encode" the coefficients of the linear operator when the latter are ordered in some suitable way.

2.) If no finite description of the operator can be found even after applying some sort of "compression" as in 1., there may still be a way of ordering the infinite set of coefficients / basis vectors that facilitates easy computation up to a desired accuracy: if the bases of the domain and range of the operator can be ordered in such a manner that computing its value on the first $m \times n$ basis vectors is guaranteed to produce an error decreasing in $m$ and $n$ (in some suitable norm), then a computationally useful "infinite" matrix concept would consist of the ordering of the basis vectors together with a (simple) algorithm that yields the $(m+1)$-th or $(n+1)$-th basis vector for the domain / range, along with the operator's respective coefficients, once the first $m$ or $n$ basis vectors and coefficients have been calculated.

Now, both 1. and 2. are non-trivial and it can be shown that neither approach is always possible in the context of unbounded operators.

Specifically, for 1. to work one would need to normalize the coefficient series by an unbounded growth term before encoding (as the series itself will be unbounded), and for 2. a norm different from the one on the range would be required for the error estimate, basically bounding the operator in an artificial way.
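The truncation idea in 2. can be sketched for a toy case (the diagonal operator and the vector below are illustrative assumptions, not from the answer): for a bounded diagonal operator, keeping only the first $m$ rows and columns of the "infinite matrix" yields an error that shrinks as $m$ grows, while for an unbounded diagonal, the discarded tail need not even be square-summable.

```python
import numpy as np

# Toy sketch: truncating the "infinite matrix" of the bounded diagonal
# operator with entries (1/2)^i gives an error decreasing in m.
i = np.arange(30)
x = np.full(30, 1.0 / np.sqrt(30))  # a fixed unit vector
exact = (0.5 ** i) * x              # coefficients of Tx

errors = []
for m in (5, 10, 20):
    approx = np.where(i < m, exact, 0.0)  # m x m truncation of the matrix
    errors.append(np.linalg.norm(exact - approx))
print(errors)  # strictly decreasing in m
```

For entries $2^i$ instead of $(1/2)^i$, the tail $2^i x_i$ dominates and no such uniform error bound exists, which is the obstruction described above.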

If one chooses to neglect the computational use of an "infinite" matrix concept, definitions are easier. I am sure there are theoretical uses for such a concept, but those would be beyond my knowledge.

Since the question occurred in the context of functional analysis, I would assume that the requirement of boundedness is actually to be understood as the operator acting between finite-dimensional (sub)spaces.

If the question is, however, to be understood in the context of infinite-dimensional spaces, the assertion is either not true (see Solomonoff's Secret's comment) or a specific notion of a matrix is being used that would need to be added for clarification.

Answer

It's a standard exercise that a linear operator between normed spaces is bounded if and only if it is continuous. (By linearity, it is enough to verify that it is continuous at zero.) So in your setting, with norm induced by the inner product, an unbounded operator would be discontinuous. In fact, we can replace "continuous" with "uniformly continuous" or "Lipschitz continuous".

That replacement could suggest a standard example. Take the space $\mathbb{R}[x]$ of polynomials in $x$ with real coefficients over $[0,1]$ with basis $\{x^n\}_{n=0}^\infty$ and the sup norm. Observe that $f_n(x) = x^n$ is a sequence in that space having bounded norm (in fact the norm of each of these is $1$), and that the derivative operator is linear. Yet the norm of the derivative of $x^n$, namely $nx^{n-1}$, is $n$, which is unbounded as $n \rightarrow \infty$, so the derivative operator is unbounded (and hence not continuous, let alone Lipschitz continuous).
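A quick numerical sketch of this example (the grid over $[0,1]$ is an assumption used to approximate the sup norm): each monomial has sup norm $1$, but its derivative has sup norm $n$.

```python
import numpy as np

# Each t^n has sup norm 1 on [0, 1], but its derivative n * t^(n-1) has
# sup norm n, so no constant M satisfies ||Df|| <= M ||f|| for all f.
t = np.linspace(0.0, 1.0, 100_001)
ratios = []
for n in (1, 5, 50):
    f = t ** n
    df = n * t ** (n - 1)
    ratios.append(df.max() / f.max())
print(ratios)  # [1.0, 5.0, 50.0]
```

The ratio of output norm to input norm grows without bound, which is exactly the failure of the bound $\|Df\| \le M\|f\|$.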

Answer

Oh wait, I think I can see your problem now: if you want to define an operator that way, you have to make sure that the coefficient sequence of its intended "image" is still square-summable! Just set $a_{i,j} := \delta_{i,j}\cdot2^i$ and check what the "image" of the vector $x$ with coefficients $x_i := \left(\frac{1}{2}\right)^i$ would be: you will run into deep trouble, as the "image" $y$ of $x$ would have coefficients $y_i = 1$. Thus $y$ cannot be an element of $H$!
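This counterexample can be checked numerically (partial sums over the first 50 coefficients, as a sketch): $x$ is square-summable, but its "image" under the diagonal matrix is not.

```python
import numpy as np

# x_i = (1/2)^i is square-summable, but the diagonal "matrix" with entries
# a_{i,i} = 2^i sends it to y_i = 2^i * (1/2)^i = 1, which is not.
i = np.arange(50)
x = 0.5 ** i
y = (2.0 ** i) * x            # y_i = 1 for every i
print(np.sum(x ** 2))         # partial sum close to 4/3: x is in H
print(np.sum(y ** 2))         # grows linearly with the cutoff: here 50.0
```

So the matrix entries alone do not tell you whether a given $x$ is even in the operator's domain; that is precisely why unbounded operators come with a domain attached.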