What are the bases of a function space (Hilbert space)?

1. Motivation

I was learning about the Hilbert space and function spaces, i.e., roughly speaking, infinite-dimensional vector spaces.

Let's now think about ordinary 3D Euclidean vectors. A vector $\vec{x}$ may be given by

$$ \newcommand\mycolv[1]{\begin{bmatrix}#1\end{bmatrix}} \vec{x} = \mycolv{1\\3\\2}$$

and this is equivalent to saying that $\vec{x} = 1 \hat{i} + 3 \hat{j} + 2 \hat{k}$. So even if we write this vector as an ordered triple, $(1, 3, 2)$, the underlying mathematical structure says that a vector can be expressed as a linear combination of basis vectors.

2. My understanding of function spaces

As someone studying engineering who hasn't been exposed to rigorous mathematical proofs in linear algebra, I came to understand, through a fairly intuitive approach, why function spaces are infinite-dimensional vector spaces. I considered the inner product defined, for example, for functions $\phi: \mathbb{R} \rightarrow \mathbb{C}$ and $\psi: \mathbb{R} \rightarrow \mathbb{C}$:

$$\langle \phi | \psi \rangle = \int \phi^*(x) \psi(x) \ dx$$

Intuitively speaking, this adds up all the differentials $\phi^*(x) \psi(x) \, dx$, which is analogous to the scalar product in Euclidean space. So I thought the function values at the individual (though infinitely many) elements of the domain should be the components of the vectors $\phi$ and $\psi$ — and thus both functions should be vectors of infinite dimension. Something like the following:

$$ \newcommand\mycolv[1]{\begin{bmatrix}#1\end{bmatrix}} |\psi(x) \rangle = \mycolv{...\\\psi(a-2\epsilon) \\ \psi(a-\epsilon) \\\psi(a)\\\psi(a + \epsilon) \\ \psi(a+2 \epsilon) \\ ...}$$

for some $a \in \mathbb{R}$ and small $\epsilon > 0$.
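To make this discretization concrete, here is a small numerical sketch (my own illustration, not from the question): sampling two functions on a fine grid turns the inner-product integral into an ordinary Euclidean dot product of long column vectors. The interval $[0, 1)$ and the functions chosen are arbitrary.

```python
import numpy as np

# Sketch: the L^2 inner product as the limit of a Euclidean dot product.
# The grid spacing eps plays the role of the epsilon in the column vector
# above; the interval [0, 1) and the functions phi, psi are illustrative.
eps = 1e-4
x = np.arange(0.0, 1.0, eps)          # sample points of the domain

phi = np.exp(2j * np.pi * x)          # phi(x) = e^{2*pi*i*x}
psi = x.astype(complex)               # psi(x) = x

# Dot product of the sample "column vectors", weighted by dx = eps:
discrete = np.vdot(phi, psi) * eps    # np.vdot conjugates its first argument

# Exact value: integral_0^1 e^{-2*pi*i*x} * x dx = i / (2*pi)
exact = 1j / (2 * np.pi)
```

As the grid is refined, the weighted dot product of the sample vectors converges to the integral, which is exactly the intuition in the question.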

3. Question

It's fine up to here. But there should be (at least) infinitely many linearly independent basis vectors that make up the actual infinite-dimensional column vector $|\psi(x)\rangle$. Just as $\hat{i}, \hat{j}, \hat{k}$ correspond respectively to the coefficients $1, 3, 2$, there should be basis vectors corresponding to each of $\psi(a), \psi(a+\epsilon), \psi(a-\epsilon),$ and so on. What are they?


Best answer

Your intuition that a function space is an infinite dimensional vector space with each point in the domain corresponding to a coordinate is correct.

The interesting function spaces come with a norm. Then a basis is a set of vectors such that every vector in the space is the limit of a unique infinite sum of scalar multiples of basis elements - think Fourier series.

The uniqueness captures the linear independence.

These vector spaces also have bases in the purely algebraic sense, where every element is a finite linear combination of basis vectors (Hamel bases), but those bases are not useful in analysis. See https://en.wikipedia.org/wiki/Basis_(linear_algebra)#Analysis
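To illustrate the "limit of an infinite sum" idea numerically (my own sketch, not part of the answer): the $L^2$ error of the partial Fourier sums of $f(x) = x$ on $[0, 2\pi)$ shrinks as more basis elements are included. The grid resolution and the choice of $f$ are assumptions for the demo.

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 4001)[:-1]   # uniform grid on [0, 2*pi)
dx = x[1] - x[0]
f = x.copy()                                 # target function f(x) = x

def partial_sum(N):
    """Partial Fourier sum S_N with c_k = (1/2pi) * integral f e^{-ikx} dx."""
    s = np.zeros_like(x, dtype=complex)
    for k in range(-N, N + 1):
        c_k = np.sum(f * np.exp(-1j * k * x)) * dx / (2 * np.pi)
        s += c_k * np.exp(1j * k * x)
    return s

def l2_error(N):
    """Norm ||f - S_N|| induced by the inner product, approximated on the grid."""
    return np.sqrt(np.sum(np.abs(f - partial_sum(N)) ** 2) * dx)

errors = [l2_error(N) for N in (1, 5, 25)]   # error shrinks as N grows
```

The decreasing errors show "every vector is the limit of an infinite sum of scalar multiples of basis elements" in action: $f$ is recovered in the norm, not pointwise.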

Answer

Basically, a vector space $V$ (also called linear space) is a set with two operations: multiplication with scalars (e.g. real or complex numbers) and addition.

A basis $B$ of the vector space $V$ is a subset such that every element $v \in V$ can be written as a unique linear combination of the elements in $B$.

In a Hilbert space the linear combination may contain infinitely many terms; the infinite sum is then defined as a limit in the norm induced by the inner product.

The elements in a function space are functions, and so are the elements in a basis of such a space.

An example of a Hilbert space is $L^2([0,2\pi], \mathbb{C}),$ the linear space of functions $f : [0,2\pi] \to \mathbb{C}$ such that $\int_0^{2\pi} |f(x)|^2 \, dx$ is finite. The inner product is given by $\langle f, g \rangle = \int_0^{2\pi} \overline{f(x)}\,g(x)\,dx$. One basis for this space is given by $\{ e^{ikx} \mid k\in\mathbb{Z} \},$ where $e^{ikx}$ is restricted to $[0,2\pi]$.
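A quick numerical check (my own sketch, with an arbitrary grid size) that the normalized elements $e^{ikx}/\sqrt{2\pi}$ of this basis are orthonormal under the inner product just defined:

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 2001)[:-1]   # uniform grid on [0, 2*pi)
dx = x[1] - x[0]

def e(k):
    """Normalized basis element e^{ikx} / sqrt(2*pi) sampled on the grid."""
    return np.exp(1j * k * x) / np.sqrt(2 * np.pi)

def inner(f, g):
    """<f, g> = integral of conj(f) * g dx, approximated by a Riemann sum."""
    return np.sum(np.conj(f) * g) * dx

norm_sq = inner(e(2), e(2))   # should be ~1 (normalization)
cross = inner(e(1), e(3))     # should be ~0 (orthogonality)
```

On a uniform grid these sums reproduce the orthogonality relations essentially exactly, which is why the exponentials make such a convenient basis.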

Answer

Thanks to Ethan and md2perpe for your answers. I found a good YouTube video that clears up pretty much all the points I was confused about, so it may help anyone else agonising over the basis of a function space. I will summarise the key ideas of the video.

We know that the Dirac delta function, which is informally defined as

$$\delta(x) = 0 \ \text{when} \ x \neq 0$$ $$\delta(x) = \infty \ \text{when} \ x = 0$$

has a peak at $x=0$ and is zero elsewhere.

So we can make it have a peak at $x=a$ by using $\delta(x-a)$ instead. And it satisfies the following property:

$$\int_{-\infty}^{\infty} f(x) \delta (x-a) dx = f(a)$$

So a function can be expressed as a linear combination of the appropriate basis vectors $\delta(x-a)$; roughly,

$$f(x) = \sum_{i=-\infty}^{\infty} f(x_i) \delta (x-x_i)$$

However, this is for functions whose domain is a discrete set of points $x_i$ (such as the integers), and we can extend the idea to ordinary continuous functions by changing the sum into an integral.

$$f(x) = \int_{-\infty}^{\infty} f(a) \delta(x-a) \ da$$

Notice that within the integral there is $f(a)$ which corresponds to $f(x_i)$ and the integration is done with respect to the variable $a$.
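The sifting property above can be checked numerically by replacing $\delta$ with a narrow Gaussian, a standard approximation; the width, the test function, and the point $a$ below are my own choices for the demo.

```python
import numpy as np

def delta_eps(x, eps):
    """Narrow Gaussian that approximates the Dirac delta as eps -> 0."""
    return np.exp(-x ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

f = np.cos(x)   # test function
a = 1.0         # the point to "sift out"

# integral of f(x) * delta(x - a) dx should be approximately f(a)
sift = np.sum(f * delta_eps(x - a, 0.01)) * dx
```

The result is close to $f(a) = \cos(1)$, confirming that the narrow spike picks out exactly the component of $f$ at $x = a$, which is what makes $\delta(x-a)$ behave like a "position basis vector".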

There are other ways of forming a basis for a function space, such as the Taylor series, which uses the monomial basis ($x^0, x^1, x^2, x^3, \dots$), and the Fourier series, which uses the trigonometric/exponential basis ($1, e^{ix}, e^{-ix}, e^{2ix}, e^{-2ix}, \dots$).
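For instance, a truncated Taylor series is just a finite linear combination in the monomial basis; here is a quick sanity check of my own, with an arbitrary evaluation point and truncation order:

```python
from math import exp, factorial

x = 0.5
# e^x expanded in the monomial basis: the coefficient on x^n is 1/n!
taylor = sum(x ** n / factorial(n) for n in range(15))
```

Fifteen terms already agree with `exp(0.5)` to machine precision, since the remainder of the exponential series decays factorially.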