Is there a reason why Harmonic functions are defined on open sets?


Whenever I see a definition of a harmonic function, it's always defined as follows

A function $f : U \to \Bbb{R}$ is called harmonic (where $U$ is an open subset of $\Bbb{R}^n$) iff it is twice continuously differentiable on $U$ and satisfies Laplace's equation $\Delta f = 0$.
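
As a concrete (added) illustration of the definition, here is a small numeric sanity check, not from the original post: $f(x,y) = x^2 - y^2$ is a classic harmonic function on $\Bbb{R}^2$, and a central-difference approximation of its Laplacian is close to zero at any point.

```python
# Numeric sketch (illustration, not part of the question): approximate the
# Laplacian of f(x, y) = x^2 - y^2 with central second differences and
# observe that it is close to 0, consistent with f being harmonic.

def laplacian(f, x, y, h=1e-4):
    """Approximate f_xx + f_yy at (x, y) via central second differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

def f(x, y):
    return x**2 - y**2  # harmonic: f_xx = 2, f_yy = -2

print(abs(laplacian(f, 0.3, -0.7)))  # close to 0
```

Note that the finite-difference stencil itself needs $f$ at the four neighbors of $(x,y)$, which is exactly the "open neighborhood" requirement the question is about.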

  • Is there a reason why $U$ is always chosen to be open and what are the ramifications if $U$ is chosen to not be an open set in the context of the Laplace operator (in which we take the second partial derivatives)?

Edit: I have recently read that for a "good" subset $V$ of $\Bbb{R}^n$, i.e. a submanifold of codimension 1 with boundary, we can extend the notion of differentiability: for any point $v \in V$ we extend the function to a neighbourhood of $v$ over which we can differentiate, and the derivative on $V$ is then just the derivative of the extension.

  • Supposedly, unless $V$ is "good", the derivative needn't be uniquely defined. Why does this happen to be the case, and what is the necessity of defining "good" as we have done?

Thanks in advance!

There are 3 answers below.

Accepted answer:

I'll be assuming the function is real valued (as it is in the problem) but some of what I say generalizes.

The domain of the function is presumed to be an open set because derivatives and the Laplacian are defined on open sets. The reason we need an open set as the domain of $f$ when talking about derivatives is that the concept of a derivative doesn't make sense at a point considered in isolation: you need to look in a neighborhood of the point. If I told you that I have a function $f:\{0,1,2,3\}\to\mathbb{R}$ and wanted to know how to calculate $f'(x)$, you'd tell me I couldn't. The derivative of a discrete function makes no sense because the limit $$\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$ doesn't make sense for such a function: $f(x+h)$ is undefined for every sufficiently small $h \neq 0$.

This is the core difficulty. Every closed set $C$ in a metric space that is not also open has a point arbitrarily close to its complement: $\exists c\in C$ such that $\forall\varepsilon>0\ \exists x\in C^{c}$ with $d(c,x)<\varepsilon$. Indeed, since $C^{c}$ is not closed, it does not contain all of its closure; take $c$ to be a point in the closure of $C^{c}$ that is not in $C^{c}$ (hence lies in $C$), and we are done.

In the particular case of the real numbers, you may have a one-sided limit you can calculate at a boundary point (say, at $0$ in the example $[0,1]$), but the traditional definition of the derivative uses the two-sided limit, not the one-sided one, so adopting one-sided limits produces more differentiable functions than we normally have. For example, $f(x)=|x|$ has no two-sided derivative at $0$, yet as a function on $[0,1]$ it is differentiable at $0$ under the one-sided limit.
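
A quick numeric sketch of this point (my own illustration, not from the answer): the right-handed difference quotients of $f(x)=|x|$ at $0$ are identically $1$, while the left-handed ones are identically $-1$, so the two-sided limit does not exist.

```python
# Compare two-sided vs. right-handed difference quotients of f(x) = |x| at 0.

def right_quotient(f, x, h):
    """Difference quotient (f(x + h) - f(x)) / h; h < 0 probes the left side."""
    return (f(x + h) - f(x)) / h

f = abs

# Right-handed quotients are identically 1 ...
right = [right_quotient(f, 0.0, h) for h in (1e-1, 1e-3, 1e-6)]
# ... while left-handed quotients are identically -1, so the two-sided
# limit, and hence the usual derivative at 0, does not exist.
left = [right_quotient(f, 0.0, -h) for h in (1e-1, 1e-3, 1e-6)]
print(right, left)  # [1.0, 1.0, 1.0] [-1.0, -1.0, -1.0]
```

Restricted to $[0,1]$ only the right-handed quotients are available, which is why $|x|$ becomes "differentiable" there under the one-sided convention.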

Sometimes the above behavior is desirable, and so we wish to extend our notion of derivative to such places. There are two obvious ways to do this. For $f:A\to\mathbb{R}$ we can extend the definition of the derivative to cover any point $x$ in the boundary of $A$ by taking the limit in the definition to be a sequential limit, i.e. $$f'(x)=\lim_{n\to\infty}\frac{f(a_n)-f(x)}{a_n-x}$$ where $(a_n)$ is a sequence in $A\setminus\{x\}$ converging to $x$.
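
The sequential-limit definition can be probed numerically; the sketch below (my own, with an assumed example $f(t)=t^2$ on $(0,1]$ and the sequence $a_n = 1/n$) only gives evidence along one sequence, not a proof that the limit exists.

```python
# Sketch of the sequential-limit derivative: difference quotients of f along
# a sequence a_n -> x, where x is a boundary point of the domain.

def seq_quotients(f, x, seq):
    """Difference quotients (f(a_n) - f(x)) / (a_n - x) along a sequence."""
    return [(f(a) - f(x)) / (a - x) for a in seq]

f = lambda t: t * t
# a_n = 1/n is a sequence in (0, 1] converging to the boundary point 0.
seq = [1.0 / n for n in range(1, 200)]
qs = seq_quotients(f, 0.0, seq)
print(qs[-1])  # the quotients shrink toward the expected value f'(0) = 0
```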

This definition can be too general: the derivative of a function on $\{0\}\cup\{\frac{1}{n} \mid n\in\mathbb{N}\}$ doesn't behave the way we are used to, since no point has a neighborhood or half-neighborhood on which $f$ is defined everywhere. The middle ground is as follows: define the derivative of $f$ at a point $a$ to be the number $g(a)$ that satisfies $$f(a + h) = f(a) + g(a) h + o(|h|)$$ as $h\to 0$ with $a+h\in A$. Then define $f'$, the formal derivative of $f$, to be the function satisfying $f'(a)=g(a)$, defined exactly where such a $g(a)$ exists.

This definition encapsulates the sense in which the derivative is the best linear approximation of a function, and so is very useful. It also satisfies the mean value theorem on any interval on which $f'$ is defined.

Answer:

Let $A\subseteq \mathbb{R}^n$, let $$B(x_0,\epsilon):=\left\{x \,\big|\, |x-x_0|<\epsilon\right\}$$ and let $$D(A):=\{x \mid (\forall \epsilon >0)\,((B(x,\epsilon)\setminus \{x\})\cap A\neq \emptyset)\}$$ be the set of limit points of $A$. More generally, you can choose any $U$ all of whose points are limit points of $U$, so that the difference quotient has somewhere to take its limit: $$U\subset\mathbb{R}^n \,\ \text{ and } \,\ U\subset D(U).$$
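
A finite-resolution sketch of $D(A)$ (my own illustration, not part of the answer): for $U = \{0\} \cup \{1/n : 1 \le n \le N\}$, the point $0$ is a limit point of $U$, but each $1/n$ is isolated, so $U$ is not contained in $D(U)$ and difference quotients at $1/n$ have no two-sided or one-sided limit to take.

```python
# Check (at a few test radii only) whether x is a limit point of a finite
# sample of a set: every punctured ball B(x, eps) \ {x} must meet the set.

def is_limit_point(x, points, epsilons=(1e-1, 1e-2, 1e-3)):
    """Finite probe of the limit-point condition at the given radii."""
    return all(any(0 < abs(p - x) < eps for p in points) for eps in epsilons)

N = 10_000
U = [0.0] + [1.0 / n for n in range(1, N + 1)]

print(is_limit_point(0.0, U))  # True: the points 1/n crowd around 0
print(is_limit_point(1.0, U))  # False: nothing else lies within 0.1 of 1
```

Of course a finite probe can only refute the limit-point property, not prove it; it is meant as intuition for the definition of $D(A)$ above.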

Answer:

$\newcommand{\Eps}{\varepsilon}\newcommand{\Reals}{\mathbf{R}}\newcommand{\Basis}{\mathbf{e}}$Let $U \subset \Reals^{n}$ be non-empty, $f:U \to \Reals$ a function, and $x_{0}$ a point of $U$.

As you know, the definition of differentiability of $f$ at $x_{0}$ involves a limit at $x_{0}$, which strictly speaking requires $f$ to be defined in some open ball centered at $x_{0}$.

If $x_{0}$ is instead a boundary point of $U$, you can attempt to define the derivative of $f$ at $x_{0}$ by extending $f$ differentiably in a neighborhood of $x_{0}$, i.e., picking a differentiable function $F$ in some ball $B$ about $x_{0}$ in such a way that $F = f$ in $B \cap U$, and declaring that $Df(x_{0}) = DF(x_{0})$. Of course, this idea only works if $Df(x_{0})$ does not depend on $F$.

In order for $Df(x_{0})$ not to depend on the choice of extension $F$, it's necessary that "some basis $(\Basis_{i})_{i=1}^{n}$ of $\Reals^{n}$ is tangent to $U$ at $x_{0}$". In this event, $f$ itself determines directional derivatives along elements of a basis, so a linear approximation of $f$ at $x_{0}$, if one exists at all, is unique.
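
To make the extension idea concrete, here is a numerical sketch with a hypothetical example of my own, not from the answer: $f(x, y) = x + y^2$ on the closed half-plane $\{y \ge 0\}$, extended by two different differentiable functions that agree on the half-plane, yields the same gradient at the boundary point $(0,0)$.

```python
# Two differentiable extensions of f(x, y) = x + y^2 from {y >= 0} to the
# plane; their central-difference gradients at the boundary point (0, 0)
# agree, illustrating independence of the choice of extension here.

def grad(F, x, y, h=1e-6):
    """Central-difference approximation of the gradient of F at (x, y)."""
    return ((F(x + h, y) - F(x - h, y)) / (2 * h),
            (F(x, y + h) - F(x, y - h)) / (2 * h))

def F1(x, y):  # one extension: the same formula everywhere
    return x + y * y

def F2(x, y):  # another extension: differs from F1 only where y < 0
    return x + y * y + min(y, 0.0) ** 3

g1, g2 = grad(F1, 0.0, 0.0), grad(F2, 0.0, 0.0)
print(g1, g2)  # both close to the gradient (1, 0)
```

The half-plane has a full basis of tangent directions at $(0,0)$ in the sense defined below, which is exactly why the two extensions cannot disagree to first order there.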

To give precise definitions (that are quite likely not standard), say that a vector $\Basis$ is tangent to $U$ at $x_{0}$ if there exists a smooth curve $\gamma:(-\Eps, \Eps) \to \Reals^{n}$ (for some value of smooth) such that:

  • $\gamma(0) = x_{0}$;

  • $\gamma'(0) = \Basis$;

  • $\gamma(t) \in U$ for $0 < t < \Eps$. (N.b., not necessarily for $-\Eps < t < 0$.)

Say that a basis $(\Basis_{i})_{i=1}^{n}$ of $\Reals^{n}$ is tangent to $U$ at $x_{0}$ if each element is tangent to $U$ at $x_{0}$.

If there exists a basis $(\Basis_{i})_{i=1}^{n}$ of $\Reals^{n}$ tangent to $U$ at $x_{0}$, and if $(\gamma_{i})_{i=1}^{n}$ is a corresponding collection of curves, say that $f$ is differentiable at $x_{0}$ if:

  • For each $i$, $f \circ \gamma_{i}$ is differentiable from the right at $0$, and

  • If $\gamma:(-\Eps, \Eps) \to \Reals^{n}$ is an arbitrary smooth curve with $\gamma(0) = x_{0}$ and $\gamma(t) \in U$ for all $t \neq 0$, then $f \circ \gamma$ is differentiable at $0$. (This ensures that if both $\Basis$ and $-\Basis$ are tangent to $U$ at $x_{0}$, then the respective directional derivatives are compatible.)

It's straightforward to check that if $M \subset \Reals^{n}$ is an open submanifold (n.b., not codimension $1$) and $U = \overline{M}$ is a manifold with boundary, then at each boundary point of $U$ there is a basis tangent to $U$. (Take a basis of the tangent space to the boundary together with an inward-pointing normal vector.)

The preceding framework runs into trouble for sets such as the following:

  • Submanifolds of lower dimension (e.g., plane curves).

  • Closures of open sets with "sharp boundaries", such as $$ U = \{(x, y) : |y| \leq x^{2}\}. $$ (The origin is problematic; no vector with non-vanishing second component is tangent to $U$ at the origin.)
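
A small check of the cusp example (an illustration I added): at the origin of $U = \{(x, y) : |y| \le x^2\}$, a straight-line probe along $(1, 0)$ stays in $U$, but a straight-line probe along any vector with nonzero second component leaves $U$ immediately. (Straight lines are only special curves, so this is a necessary-condition check, not a full proof of non-tangency.)

```python
# Membership probes into U = {(x, y) : |y| <= x^2} along rays from the origin.

def in_U(x, y):
    return abs(y) <= x * x

ts = [10.0 ** (-k) for k in range(1, 8)]          # shrinking step sizes t > 0
along_e1 = all(in_U(t, 0.0) for t in ts)          # (t, 0): |0| <= t^2 holds
along_e2 = any(in_U(0.0, t) for t in ts)          # (0, t): t <= 0 fails
print(along_e1, along_e2)
```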

For all the generality one gains by this type of scheme (e.g., manifolds with corners, edge-of-the-wedge results), in elementary circumstances it's adequate, and much easier, simply to assume $U$ is open.