Piecewise differentiable functions and domain of a first order differential operator


For example, consider the Hilbert space $L^2[0,1]$ and the function $f$ defined by $f(x) = 0$ for $x < 1/2$ and $f(x) = x - 1/2$ otherwise. Then $f$ is a member of $L^2[0,1]$. Now I want to study the differential operator $d/dx$. But $f$ is not differentiable at $x = 1/2$, and only at that point. So if I ignore the exceptional point, $d/dx$ maps $f$ to the step function $H(x - 1/2)$, which is certainly a member of $L^2[0,1]$. Of course, the step function is not defined at $x = 1/2$, but that doesn't matter for membership in $L^2[0,1]$. Yet the materials I'm reading require that functions in the domain of $d/dx$ be differentiable everywhere in $[0,1]$. Isn't that too strict? Is it harmful to allow non-differentiability on a set of measure zero?
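As a quick numerical sanity check (my own sketch, using NumPy and a midpoint Riemann sum), both $f$ and the step function $H(x - 1/2)$ can be verified to be square-integrable on $[0,1]$:

```python
import numpy as np

# The function from the question: f(x) = 0 for x < 1/2, f(x) = x - 1/2 otherwise.
def f(x):
    return np.where(x < 0.5, 0.0, x - 0.5)

# Its almost-everywhere derivative, the step function H(x - 1/2).
def fprime(x):
    return np.where(x < 0.5, 0.0, 1.0)

# Midpoint Riemann sums approximate the squared L^2 norms.
n = 1_000_000
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx  # midpoints of a uniform grid on [0,1]

norm_f_sq = np.sum(f(x)**2) * dx        # analytically: ∫_{1/2}^1 (x-1/2)^2 dx = 1/24
norm_fp_sq = np.sum(fprime(x)**2) * dx  # analytically: ∫_{1/2}^1 1 dx = 1/2
print(norm_f_sq, norm_fp_sq)
```

Both sums are finite, matching the exact values $1/24$ and $1/2$, so the value (or lack of one) at the single point $x = 1/2$ indeed plays no role in the $L^2$ norms.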


There is 1 answer below

On BEST ANSWER

It is always possible to study such operators in a classical setting, with differentiability in the classical sense. The problem is that the Hilbert space techniques used to prove the existence of various types of solutions are generally not available in such settings: algorithmic solution processes tend to wander out of the classical spaces. One-dimensional operators are notable exceptions, if you count absolutely continuous functions as classical. Your function with the corner is absolutely continuous, meaning it is the Lebesgue integral of its almost-everywhere derivative.
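That last sentence can be sketched numerically (my addition, using a cumulative midpoint sum on a uniform grid): integrating the a.e. derivative $H(x - 1/2)$ from $0$ to $x$ recovers the corner function.

```python
import numpy as np

# Absolute continuity check: f should equal the integral of its a.e. derivative.
# Here f(x) = 0 for x < 1/2, x - 1/2 otherwise, and f'(x) = H(x - 1/2) a.e.
n = 1_000_000
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx             # midpoints of a uniform grid on [0,1]

fprime = np.where(x < 0.5, 0.0, 1.0)      # the a.e. derivative H(x - 1/2)
f_exact = np.where(x < 0.5, 0.0, x - 0.5)

f_recovered = np.cumsum(fprime) * dx      # approximates ∫_0^x f'(t) dt
max_err = float(np.max(np.abs(f_recovered - f_exact)))
print(max_err)  # only discretization error, of order dx
```

The corner at $x = 1/2$ causes no trouble: the discrepancy is pure grid error, shrinking as the mesh refines.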

It's better if your linear operator $L$ has a domain $\mathcal{D}(L)$ that is dense in the Hilbert space $H$ and a graph $\mathcal{G}(L)$ that is closed in $H\times H$. In your case that means $f\in\mathcal{D}(L)$ iff $f$ is equal a.e. to an absolutely continuous function on $[0,1]$ with $f'\in L^2$. Derivative operators are much nicer in 1-d because of this; in higher dimensions you need to introduce Sobolev spaces to end up with closed operators. In fact, that is the reason Sobolev spaces exist. The name "closed" goes back to the idea that sequences arising in algorithmic solution processes can't escape the space where you're working.
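To illustrate the domain condition with a hypothetical example of my own (not from the answer above): $g(x) = x^{1/4}$ is absolutely continuous on $[0,1]$, since $g'(x) = \tfrac14 x^{-3/4}$ is in $L^1$, yet $g'$ is not in $L^2$, so $g \notin \mathcal{D}(L)$ even though $g \in L^2[0,1]$. Midpoint sums on shrinking grids show the $L^1$ norm of $g'$ converging while the $L^2$ norm blows up:

```python
import numpy as np

# g(x) = x**0.25 is absolutely continuous: g'(x) = 0.25*x**(-0.75) is in L^1,
# with ∫_0^1 g'(x) dx = 1. But (g')^2 ~ x^(-3/2) diverges near 0, so g' is not
# in L^2 and g is not in the domain D(L) described above.
l1_vals, l2_vals = [], []
for n in (10**3, 10**6):
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx           # midpoints of a uniform grid
    gprime = 0.25 * x**(-0.75)
    l1_vals.append(np.sum(gprime) * dx)     # stays near 1 as the grid refines
    l2_vals.append(np.sum(gprime**2) * dx)  # grows without bound
print(l1_vals, l2_vals)
```

So absolute continuity alone is not enough for membership in $\mathcal{D}(L)$; the a.e. derivative must itself land in $L^2$.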

An operator $L$ is closable if the closure $\mathcal{G}(L)^c$ of its graph $\mathcal{G}(L)$ in $H\times H$ is still the graph of a linear operator. This sounds too simple, but a linear subspace $\mathcal{M}\subseteq H\times H$ is the graph of a linear operator iff $(0,y)\in \mathcal{M} \implies y=0$. So the condition that $L$ is closable is equivalent to the following: $$ \mbox{If } \{ f_n \}\subseteq\mathcal{D}(L) \mbox{ converges to } 0 \mbox{ and } \{ Lf_n \} \mbox{ converges to } g, \mbox{ then } g=0. $$ All this condition does is ensure that $(0,g) \notin \mathcal{G}(L)^c$ unless $g=0$. Then the subspace $\mathcal{G}(L)^c$ is the graph $\mathcal{G}(L^c)$ of a closed linear operator $L^c$.

Hilbert space decompositions, especially those using orthogonal complements, require closed subspaces. When you have a densely-defined closed linear operator on a Hilbert space $H$, the orthogonal complement $\mathcal{G}(L)^{\perp}$ of its graph is, up to a negative sign in one coordinate, the transpose of the graph of the adjoint operator. Adjoint operators are nice to have, and selfadjoint operators are really nice because you get a full spectral theory, which in turn gives nice solution properties. You don't get a nice spectral theory for non-closed operators (such as classical differential operators) because the spectral objects typically involve limiting processes that take you out of the graph.
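A standard example of a non-closable operator (my addition, not part of the answer) is point evaluation followed by a rank-one map: $(Lf)(x) = f(0)\cdot 1$, defined on the continuous functions in $L^2[0,1]$. Taking $f_n(x) = (1-x)^n$ gives $\|f_n\|_{L^2} = 1/\sqrt{2n+1} \to 0$ while $Lf_n$ is the constant function $1$ for every $n$, so $(0,1)$ lies in the closure of the graph, violating the condition above:

```python
import numpy as np

# The rank-one operator (Lf)(x) = f(0) * 1 on L^2[0,1] is NOT closable:
# f_n(x) = (1-x)^n tends to 0 in L^2, but L f_n = 1 (the constant function)
# for every n, so (0, 1) lies in the closure of the graph -- and a graph
# cannot contain (0, y) with y != 0.
m = 1_000_000
dx = 1.0 / m
x = (np.arange(m) + 0.5) * dx  # midpoints of a uniform grid on [0,1]

norms = []
for n in (1, 10, 100, 1000):
    fn = (1.0 - x)**n
    norms.append(float(np.sqrt(np.sum(fn**2) * dx)))  # ||f_n|| = 1/sqrt(2n+1)
    # L f_n = f_n(0) * 1 = the constant function 1, independent of n
print(norms)
```

The norms shrink toward $0$ while the images stay fixed at $1$, which is exactly the failure mode the closability condition rules out.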