How to draw things like commutative diagrams in PDE theory


The question may be a little vague. I am working in PDE theory, and I am trying to clarify my thinking by drawing diagrams, like commutative diagrams. But a problem arises because in PDE theory you may need to put your objects in different spaces. For example, for an elliptic PDE we consider the operator $\sum_{ij}a^{ij}\partial_{ij}$ with domain $H^1$ and codomain $L^2$; although we declare the domain to be $H^1$, the precise domain is actually $H^2$! Yet we still need some properties from $H^1$. We may also need to put the operator itself in different spaces. These are some examples of how things get messy; could anyone help with that?
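To illustrate, here is a minimal sketch of the kind of diagram I mean, typeset with the `tikz-cd` package (the operator name $L$ and the hooked inclusion arrow are just illustrative choices):

```latex
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
% H^2 is the precise domain; the hooked arrow records its inclusion into H^1,
% where the needed estimates live, while L maps into L^2.
\[
\begin{tikzcd}
H^2 \arrow[r, hook] \arrow[rd, "L=\sum_{ij}a^{ij}\partial_{ij}"'] & H^1 \\
& L^2
\end{tikzcd}
\]
\end{document}
```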

BEST ANSWER

I think there are two slightly different issues here, and the fact that it's not just one issue confused me for a long time.

First, there is a useful body of abstraction about "unbounded operators on Hilbert space", abstracting differential operators (and also unbounded multiplication operators). As noted in your question, and looming larger than might be anticipated, there's the issue of "domain" of unbounded operators. They map to the Hilbert space (concretely, $L^2$), but cannot be reasonably defined on all of $L^2$ (and still reasonably map to $L^2$). This can be discussed in a mildly abstract setting without mentioning Sobolev spaces of distributions...
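The standard first example of this phenomenon is multiplication by $x$ on $L^2(\mathbb{R})$: it is symmetric and densely defined, but it maps into $L^2$ only when restricted to a proper dense domain:

```latex
% Multiplication by x on L^2(R): symmetric and densely defined, but unbounded,
% so it cannot be defined (with values in L^2) on all of L^2(R).
(Mf)(x) \;=\; x\,f(x),
\qquad
\operatorname{dom}(M) \;=\; \bigl\{\, f \in L^2(\mathbb{R}) \;:\; x f(x) \in L^2(\mathbb{R}) \,\bigr\}.
```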

On another hand, for specific operators such as diff ops, especially Laplacians and relatives, we can define Sobolev spaces, families of Hilbert (or Banach) spaces, exactly designed so that $\Delta$ maps $H^s$ to $H^{s-2}$, and so on. (And usually there is some version of "Sobolev imbedding/inequality", comparing Sobolev spaces' $L^2$-differentiability with classical, pointwise differentiability.) This viewpoint also can be abstracted, to "Gelfand triples" and "rigged Hilbert spaces" (meaning families like the $H^s$).
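Concretely, on $\mathbb{R}^n$ these Sobolev norms can be written on the Fourier-transform side, which makes the two-index shift visible:

```latex
\|f\|_{H^s}^2 \;=\; \int_{\mathbb{R}^n} (1+|\xi|^2)^{s}\,|\hat{f}(\xi)|^2\,d\xi,
\qquad
\widehat{\Delta f}(\xi) \;=\; -\,|\xi|^2\,\hat{f}(\xi),
```

and since $|\xi|^2 \le 1+|\xi|^2$, this gives $\|\Delta f\|_{H^{s-2}} \le \|f\|_{H^s}$, i.e., $\Delta$ maps $H^s$ continuously to $H^{s-2}$.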

Is this bit of discussion heading in a helpful direction?

EDIT: As in the original asker's comments, indeed, often one discovers that the "original operator" (which in naive mathematics is not mutable), has domain problems. Then, yes, unless one has been around this block before, it is probably wise to give various restrictions and extensions different names (even while realizing they're different incarnations of the same intuitive thing).

For example, in my own experience, various incarnations of Laplace-Beltrami operators play a role, and from a naive viewpoint dredge up serious muddles. Although my own interest is in such operators on modular curves and other "arithmetic quotients", which have various bits of number-theoretic significance, most of the mechanisms already make complete sense in far simpler situations, like on the circle or on the line, not to mention $\mathbb R^n$.

A typical parade of incarnations of "the Laplacian" is: well, first, sure, we have the distributional Laplacian, which can act on nearly anything, and map it to a distribution. Trying to control this a little, one orthodox approach is to restrict the Laplacian to test functions, where it is continuous (after all), and then think about extensions... ideally to $L^2$, but we know this is not literally possible.

One way to adapt is via Sobolev spaces ("tuned" to the Laplacian, most often), to refine the extreme range between test functions and distributions, with $L^2$ "in the middle".
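This Fourier-side bookkeeping can even be sanity-checked numerically on the circle, where applying $\Delta$ is just $c_n \mapsto -n^2 c_n$ on Fourier coefficients (a sketch; the truncation $N=50$ and the random test coefficients are arbitrary choices):

```python
import numpy as np

def sobolev_norm(coeffs, freqs, s):
    """H^s norm on the circle from Fourier coefficients c_n:
    ||f||_{H^s}^2 = sum_n (1 + n^2)^s |c_n|^2."""
    return np.sqrt(np.sum((1.0 + freqs**2) ** s * np.abs(coeffs) ** 2))

rng = np.random.default_rng(0)
N = 50
freqs = np.arange(-N, N + 1)              # frequencies n = -N..N
coeffs = rng.standard_normal(freqs.size)  # coefficients of a trig polynomial f

# The Laplacian acts diagonally on the Fourier side: c_n -> -n^2 c_n.
lap_coeffs = -(freqs**2) * coeffs

# Since n^4 <= (1 + n^2)^2, we get ||Delta f||_{H^{s-2}} <= ||f||_{H^s}
# for every s: the Laplacian loses exactly two Sobolev derivatives.
for s in [0.0, 1.0, 2.0, -1.5]:
    assert sobolev_norm(lap_coeffs, freqs, s - 2) <= sobolev_norm(coeffs, freqs, s)
print("Sobolev-index bookkeeping checks pass")
```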

Also, in the case of "semi-bounded" operators such as Laplace-Beltrami operators, there is a canonical, and relatively simple to describe, self-adjoint extension of the merely-symmetric operator Laplacian-restricted-to-test-functions: the Friedrichs extension. (This already appeared as a remark in a paper of von Neumann a few years earlier, as K. Klinger-Logan pointed out to me.) But/and as von Neumann (and many others since) observed, in general there is no unique self-adjoint extension of a symmetric (but unbounded, hence only densely defined...) operator.
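In the simplest concrete case, the Friedrichs extension of $-\Delta$ restricted to test functions on a domain $\Omega \subset \mathbb{R}^n$ is the Dirichlet Laplacian, built from the energy quadratic form:

```latex
q(u,v) \;=\; \int_\Omega \nabla u \cdot \overline{\nabla v}\,,
\qquad
\operatorname{dom}(\Delta_F) \;=\; \bigl\{\, u \in H^1_0(\Omega) \;:\; \Delta u \in L^2(\Omega) \,\bigr\},
```

where the form domain $H^1_0(\Omega)$ is the closure of $C^\infty_c(\Omega)$ in the $H^1$ norm.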

The latter ambiguity has more consequences than many people seem to realize. E.g., "the adjoint", easily mistakable for the original operator in terms of computations, will often have eigenvalues consisting of all complex numbers, or at least an upper or lower half-plane of them. In particular, in violent contrast to one's expectations about "self-adjoint" operators, these eigenvalues are not necessarily real...
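A minimal illustration of how bad the adjoint can be: take $T = -d^2/dx^2$ restricted to $C^\infty_c(0,1)$ in $L^2(0,1)$. Its adjoint $T^*$ acts by the same formula but with no boundary conditions, so for any $\lambda \in \mathbb{C}$, choosing $\mu$ with $\mu^2 = -\lambda$,

```latex
u(x) \;=\; e^{\mu x} \;\in\; \operatorname{dom}(T^*),
\qquad
T^* u \;=\; -u'' \;=\; -\mu^2\, e^{\mu x} \;=\; \lambda\, u,
```

so every complex number is an eigenvalue of $T^*$.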

Although I do not claim to understand the context of general PDE discussion of this, it is already clear to me that the seemingly fiddly/silly little issues about "domains" and $H^2$ versus $H^1$ are not actually silly, but embody some essential issues.

Put otherwise, the "same" operator but with different domains should be seriously treated as a different thing, and not obstinately "collapsed" to being the same.