Intuition behind: Integral operator as generalization of matrix multiplication


So I am teaching myself more in-depth about integral operators, and every once in a while I see the little 'factoid' that integral operators are generalizations of matrix multiplication.

In particular, an integral operator $L$ has the form:

$$Lf(x) = \int_{X} k(x,y) f(y) \, d\mu(y)$$

So then, naturally, if $\mathbf{A}$ is an $m \times n$ matrix with entries $a_{ij}$ and $\mathbf{u} \in \mathbb{C}^{n}$, so that $\mathbf{A}\mathbf{u} \in \mathbb{C}^{m}$, we have:

$$(\mathbf{A}\mathbf{u})_{i} = \sum_{j=1}^{n} a_{ij} u_{j}, \qquad i = 1, \ldots, m$$

So I always see something similar to the following statement:

The entries $k(x,y)$ are analogous to the entries $a_{ij}$ of the matrix $\mathbf{A}$, and the values $Lf(x)$ are analogous to the entries $(\mathbf{A}\mathbf{u})_{i}$.

I have never seen any example or more detail regarding this statement. Can somebody make this a bit clearer?
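One way to make the analogy concrete is to discretize the integral as a Riemann sum, which turns the operator literally into a matrix-vector product. The kernel $k(x,y) = xy$ and input $f \equiv 1$ below are illustrative choices (not from the post); for them, $Lf(x) = x \int_0^1 y \, dy = x/2$ exactly, so the matrix approximation can be checked against a closed form:

```python
import numpy as np

# Illustrative example: k(x, y) = x*y and f(y) = 1 on [0, 1],
# so Lf(x) = x * integral_0^1 y dy = x / 2 exactly.
n = 1000
y = (np.arange(n) + 0.5) / n      # midpoint grid on [0, 1]
dy = 1.0 / n
x = y.copy()                      # evaluate Lf on the same grid

K = np.outer(x, y)                # K[i, j] = k(x_i, y_j): the "matrix entries"
f = np.ones(n)                    # f(y_j): the "vector entries"

Lf = K @ (f * dy)                 # (Lf)_i = sum_j k(x_i, y_j) f(y_j) dy

exact = x / 2
print(np.max(np.abs(Lf - exact)))  # tiny: midpoint rule is exact for linear integrands
```

Here the row index of $K$ plays the role of $x$ and the column index the role of $y$, exactly as the quoted statement suggests.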


Here is my thought: the $x$ defines the 'row', while the $y$ defines the 'column'.

So then, if we wanted to, we could define a sequence $\{x_{0}, x_{1}, \ldots\}$ to 'sample' the output of the integral operator $L$. For instance, if $L$ mapped $f$ into a certain region $P$, we would want to define our sequence to exist only in this region $P$ in order to save computation. If, in this region, the functions $Lf$ had significant norm/power only in a subspace or section $P_{k}$ of $P$, we could 'sample' this subspace only -- i.e., $\{x_{0}, x_{1}, \ldots\} \subset P_{k}$ -- and still get a 'reasonable' approximation to the resulting function/signal $Lf$.

Similarly, we could 'sample' $f$ on its domain with $\{y_{0}, y_{1}, \ldots\}$ and reduce the computation even further -- provided the sequence captures 'most of the information' about $f$.



Best answer:

It's always a bit hard to guess what another person might find intuitive, but here are my two cents on the topic.

You can interpret the elements of $\mathbb{R}^n$ as functions from the set $\{1,...,n\}$ to $\mathbb{R}$, where for $f \in \mathbb{R}^{n}$, $f(i)$ would just be the $i$-th component of the vector. We know from linear algebra that any linear operator $L: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ can be written as $L f = A\cdot \vec x$, where $A$ is an $n \times n$-matrix and $\vec x$ is the vector associated with $f$. We could invent a "kernel" function to write this down differently, with $k: \{1,...,n\} \times \{1,...,n\} \to \mathbb{R}$, $k(i,j) := A_{ij}$. We then have the formula

$$Lf(i) = (A\cdot\vec x)_i = \sum_{j=1}^{n} k(i,j) f(j).$$

Now let's replace $\{1,...,n\}$ with some infinite set $X$. Writing down matrices and using the same multiplication rules as in $\mathbb{R}^n$ seems to be a complicated approach here, but it is easy to see what the generalisation of the formula above should be: the values $k(x,y)$ for $x,y \in X$ are the "matrix entries", so we get

$$Lf(x) := \sum_{y \in X}k(x,y)f(y)$$

For countable $X$ this might still make sense, if we introduce some restrictions on $k$ and $f$ in order to ensure convergence, but for uncountable $X$ (which is the more interesting case) the sum no longer makes sense (at least if $k$ is nonzero almost everywhere). The integral is often viewed as a "continuous" analogue of summation (e.g. by physicists, or in measure theory), and since it is itself a limit of sums, it seems only natural to consider operators of the form

$$Lf(x) = \int_{X}k(x,y)f(y) dy$$
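The "integral as a limit of sums" step can be checked numerically: as the grid is refined, the finite sum $\sum_j k(x, y_j) f(y_j) \, \Delta y$ converges to the integral. The kernel $k(x,y) = e^{-xy}$ and $f \equiv 1$ are assumptions made for this sketch, chosen so the integral has a closed form:

```python
import numpy as np

# Check that the Riemann sum converges to the integral as the grid is
# refined (kernel and f chosen for illustration, not from the answer).
def riemann_Lf(x, n):
    y = (np.arange(n) + 0.5) / n                 # midpoint grid on [0, 1]
    return np.sum(np.exp(-x * y) * 1.0) / n      # k(x,y) = e^{-xy}, f = 1

# At x = 1: Lf(1) = integral_0^1 e^{-y} dy = 1 - e^{-1}
exact = 1 - np.exp(-1.0)
for n in (10, 100, 1000):
    print(n, abs(riemann_Lf(1.0, n) - exact))    # error shrinks as n grows
```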

Second answer:

More thoughts about this: the matrix $A$ can be thought of as a linear operator from $\mathbb{R}^n$ to $\mathbb{R}^n$. In a similar way, your integral transform $L$ is an operator from a (Hilbert) space of functions to another such space.

Just as you can define characteristics of $A$ (like eigenvalues and eigenvectors) and talk about a basis of its image, so too you can do the same for $L$.

For an in-depth example of continuous and discrete transformations with similar eigenvalues and "related" eigenvectors, look at continuous and discrete Fourier transforms.
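As a small numerical illustration of that last point (my own sketch, not part of the answer): the unitary DFT matrix satisfies $F^4 = I$, so its eigenvalues all lie in $\{1, -1, i, -i\}$ -- the same set of eigenvalues the continuous Fourier transform has on its Hermite-function eigenbasis:

```python
import numpy as np

# Unitary DFT matrix: F[j, k] = exp(-2*pi*i*j*k / N) / sqrt(N).
# Since F^4 = I, every eigenvalue is a fourth root of unity,
# matching the eigenvalues {1, -1, i, -i} of the continuous transform.
N = 8
idx = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

eig = np.linalg.eigvals(F)
print(np.max(np.abs(eig ** 4 - 1)))  # each eigenvalue^4 is 1, up to rounding
```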