The definition of a regular operator


In this paper the notion of a regular operator is referenced. I have not heard the term before. In one sentence the author remarks:

A closed operator $T$ is regular and self-adjoint if and only if $T-\lambda$ is surjective for all $\lambda \in \Bbb C\setminus \Bbb R$, if and only if the Cayley transform of $T$ is unitary, if and only if $T$ has a functional calculus homomorphism on $C_0(\Bbb R)$.

This is confusing to me: If $T$ is self-adjoint then all these things are already true!

What am I missing, in particular: What is the correct definition of "regular" used here? Google is not helpful, unfortunately.

Best answer

@user854214 has already settled this question in the comments with the reference to Pal's paper, but it might perhaps be useful to see an example illustrating why the definition of a regular operator in the context of Hilbert modules must be so much more intricate than the definition for Hilbert spaces.

Recall that if $A$ is a C*-algebra and $E$ and $F$ are Hilbert modules over $A$, then an operator $T$ from $E$ to $F$ is said to be regular if:

  1. $T$ is closed and densely defined,

  2. its adjoint $T^*$ is also densely defined, and

  3. the range of $I + T^*T$ is dense in $E$.

As noted by Pal, if $A=\mathbb C$, in which case $E$ and $F$ are nothing but Hilbert spaces, then (2) and (3) follow from (1), so the regular operators are precisely the closed, densely defined operators.
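To see why (1) already gives (2) and (3) in the Hilbert space case, here is a compressed sketch of the standard graph argument (essentially von Neumann's theorem); the notation $V$ and the labels are mine, not from the paper.

```latex
% Let T be closed and densely defined on a Hilbert space H, and let
% V : H \oplus H \to H \oplus H be the unitary V(x, y) = (-y, x).
% One checks directly that G(T^*) = (V\,G(T))^\perp; since T is closed
% and V is unitary, this yields the orthogonal decomposition
H \oplus H \;=\; G(T) \,\oplus\, V\,G(T^*).
% Decomposing (h, 0) along it: for each h \in H there are
% x \in \operatorname{dom}(T) and y \in \operatorname{dom}(T^*) with
(h, 0) \;=\; (x, Tx) + (-T^* y,\, y)
\;\Longrightarrow\; y = -Tx, \qquad h = x + T^* T x = (I + T^* T)\,x,
% so I + T^*T is even surjective, giving (3).  For (2): if
% z \perp \operatorname{dom}(T^*), then (z, 0) \perp G(T^*), so by the same
% decomposition (z, 0) = (-Tx, x) for some x, forcing x = 0 and z = 0.
```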

As the following example illustrates, already $(1) \Rightarrow (2)$ may fail, even for bounded operators.

Let $A$ be any unital C*-algebra and let $I$ be a closed two-sided ideal in $A$. We may then view both $A$ and $I$ as (right) Hilbert modules over $A$ with the obvious module structure and with the inner product defined by $\langle x, y\rangle = x^*y$.

Let $T:I\to A$ be the inclusion of $I$ into $A$. Since $T$ is bounded and defined on all of $I$, it obviously satisfies (1).

The domain of $T^*$ is then the set of elements $a$ in $A$ such that there exists some (necessarily unique) $x$ in $I$ satisfying $$ \langle T(y), a\rangle = \langle y, x\rangle , \quad\forall y\in I. $$ This translates to $y^*a=y^*x$, and one then has by definition that $T^*(a)=x$.

In this case, notice that $a-x$ belongs to $$ I^\perp := \{b\in A: yb = 0, \quad\forall y\in I \}, $$ so we have that $$ a = x+(a-x)\in I+I^\perp. $$ In fact it is easy to prove that the domain of $T^*$ is precisely $I+I^\perp$. Moreover, $I+I^\perp$ is always closed, so (2) can only hold when $I+I^\perp=A$. However, more often than not, this is false.
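To make the "easy to prove" inclusion explicit, here is a quick verification that every element of $I+I^\perp$ lies in the domain of $T^*$ (the converse direction is the computation already carried out above):

```latex
% Claim: if a = x + b with x \in I and b \in I^\perp, then
% a \in \operatorname{dom}(T^*) and T^*(a) = x.
% Recall that a closed two-sided ideal in a C*-algebra is automatically
% self-adjoint, so y \in I implies y^* \in I, and hence y^* b = 0.
% Therefore, for every y \in I,
\langle T(y), a\rangle \;=\; y^* a \;=\; y^* x + y^* b \;=\; y^* x
\;=\; \langle y, x\rangle ,
% which is precisely the defining property of the adjoint: T^*(a) = x.
```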

For example, if $A=C(X)$ for a compact Hausdorff space $X$ (so that $A$ is unital) and $I=C_0(U)$, for some open set $U\subseteq X$, then $I^\perp=C_0(X\setminus \overline U)$ and $$ I+I^\perp=\{f\in C(X): f \text{ vanishes on } \partial U\}, $$ which coincides with $C(X)$ if and only if $\partial U$ is empty.
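As a concrete instance (my own choice of spaces, just to make the boundary visible), take the unit interval with its standard topology:

```latex
% X = [0,1] and U = (0,1), so \overline U = [0,1] and \partial U = \{0, 1\}.
I = C_0\bigl((0,1)\bigr) = \{ f \in C([0,1]) : f(0) = f(1) = 0 \},
\qquad
I^\perp = C_0(\varnothing) = \{0\}.
% Hence
I + I^\perp = \{ f \in C([0,1]) : f(0) = f(1) = 0 \} \subsetneq C([0,1]),
% and since I + I^\perp is closed and proper, the domain of T^* is not
% dense: the bounded inclusion T : I \to A fails condition (2).
```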