Linear algebra with 2-dim. functions instead of matrices


I just thought about what would happen if we try to do matrix calculus with functions $\mathbb R^2 \to \mathbb R$ instead of matrices.

The matrix multiplication would be something like

$$ (f \times g)(x, y) = \int_\mathbb R f(x, z) g(z, y) dz. $$
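As a minimal numerical sketch of this product (the Gaussian kernels below are my own illustrative choice): discretizing $f$ and $g$ on a grid turns the integral product into an ordinary matrix product scaled by the grid spacing. For $f(x,z) = g(x,z) = e^{-(x-z)^2}$ the product integral has the closed form $\sqrt{\pi/2}\, e^{-(x-y)^2/2}$, which we can check against.

```python
import numpy as np

# Discretize both kernels on a grid: the integral product becomes a
# matrix product scaled by the grid spacing (a Riemann sum over z).
z = np.linspace(-8, 8, 801)
dz = z[1] - z[0]

f = np.exp(-(z[:, None] - z[None, :])**2)   # f(x, z)
g = np.exp(-(z[:, None] - z[None, :])**2)   # g(z, y)

fg = f @ g * dz   # (f x g)(x, y) ~ sum_k f(x, z_k) g(z_k, y) dz

# Closed form: int exp(-(x-z)^2 - (z-y)^2) dz = sqrt(pi/2) exp(-(x-y)^2/2)
x_idx, y_idx = 400, 450                      # x = 0.0, y = 1.0
exact = np.sqrt(np.pi / 2) * np.exp(-(z[x_idx] - z[y_idx])**2 / 2)
print(abs(fg[x_idx, y_idx] - exact))         # tiny discretization error
```

The rapid decay of the Gaussians makes the truncation to $[-8, 8]$ harmless here.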

Is this theory developed somewhere? Does it have a name? Are there interesting results? What is the space of "invertible functions"? Are there determinants?

Now a matrix corresponds to a linear map between vector spaces of finite dimension. Similarly, a linear map could be defined via an integral kernel by $$ h \mapsto \int_{\mathbb R} f(x, z) h(z) dz. $$

If we want the integrals to converge, we should perhaps require the functions to be contained within an appropriate space. Furthermore, we then should want the product of two functions as given above to be within that space again. As far as I can tell, bump functions would certainly do, but I'm sure one can do better.
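One thing this viewpoint makes visible, touching on the "invertible functions" question: the identity element for this product would have to be the Dirac delta $\delta(x - y)$, which is not a function. As a sketch (kernel and test function are my own choices), a narrow normalized Gaussian kernel acts approximately as the identity on smooth functions:

```python
import numpy as np

# A narrow normalized Gaussian kernel approximates the Dirac delta,
# i.e. the (missing) identity element for the kernel product.
z = np.linspace(-8, 8, 801)
dz = z[1] - z[0]
eps = 0.05
kernel = np.exp(-(z[:, None] - z[None, :])**2 / (2 * eps**2))
kernel /= eps * np.sqrt(2 * np.pi)           # normalize to unit mass

h = np.cos(z)                                # a smooth test function
h_out = kernel @ h * dz                      # apply the operator to h

err = np.max(np.abs(h_out - h)[100:-100])    # compare away from the boundary
print(err)                                   # small, shrinks as eps -> 0
```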


The operation

$$h \mapsto \int_{a}^b f(x, z) h(z) dz$$

is called an integral operator with kernel $f$ (see https://en.wikipedia.org/wiki/Integral_transform); this use of "kernel" is unrelated to the other meaning of the word in linear algebra (the null space of a linear map). You will have noticed that, instead of taking the whole of $\mathbb{R}$, I have taken bounds $a, b$.

One of the most important kernels is $e^{-2i \pi xz}$, with $a=-\infty$, $b=\infty$: it defines the Fourier transform. Many other transforms arise in the same way, e.g. the Laplace transform (with $a=0$, $b=\infty$) and the Mellin transform; they are listed in the Wikipedia article.
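As a numerical check of this convention (the test function below is my own choice): under the kernel $e^{-2i\pi xz}$, the Gaussian $e^{-\pi z^2}$ is its own Fourier transform, and a simple quadrature reproduces this.

```python
import numpy as np

# The Fourier transform as an integral operator with kernel e^{-2i pi x z}.
# With this convention, exp(-pi z^2) is its own transform.
z = np.linspace(-8, 8, 1601)
dz = z[1] - z[0]
x = np.linspace(-2, 2, 5)                     # a few evaluation points

kernel = np.exp(-2j * np.pi * np.outer(x, z))  # f(x, z) = e^{-2i pi x z}
h = np.exp(-np.pi * z**2)

Fh = kernel @ h * dz            # (F h)(x) = int e^{-2i pi x z} h(z) dz
ft_err = np.max(np.abs(Fh - np.exp(-np.pi * x**2)))
print(ft_err)                   # quadrature error, essentially negligible
```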

In many cases this two-dimensional kernel has the form $K(x,z)=k(x-z)$ for some function $k:\mathbb{R}\rightarrow\mathbb{R}$; in this case, and in this case only, the operator reduces to the convolution $k*h$!

(This is, for example, the case for the very important kernel $K(x,z)=\operatorname{sinc}(x-z)$ in signal processing.)
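A small sketch of this equivalence (the kernel and test function are illustrative choices): applying the discretized kernel matrix built from $K(x,z)=k(x-z)$ agrees with a direct discrete convolution.

```python
import numpy as np

# When K(x, z) = k(x - z), the operator is convolution:
# int k(x - z) h(z) dz = (k * h)(x).
z = np.linspace(-8, 8, 801)
dz = z[1] - z[0]

k = lambda t: np.exp(-t**2)            # a smooth difference kernel
h = np.exp(-(z - 1)**2)                # a test function

K = k(z[:, None] - z[None, :])         # K(x, z) = k(x - z)
via_kernel = K @ h * dz
via_convolve = np.convolve(k(z), h, mode="same") * dz  # same grid, centered

conv_err = np.max(np.abs(via_kernel - via_convolve))
print(conv_err)                        # agree up to floating-point rounding
```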

For the Hilbert space aspect, see https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space

As with matrices, these transforms may have eigenvalues and eigenfunctions, and the eigenfunctions may constitute a basis (in the sense of a basis of a Hilbert space). Using such bases one can, under certain conditions, decompose these transformations via what is known as the Karhunen-Loève decomposition https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem which is a cousin of the singular value decomposition (SVD), etc.
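A discretized sketch of this eigendecomposition, using a standard example of my own choosing: the Brownian-motion covariance kernel $K(s,t)=\min(s,t)$ on $[0,1]$, whose exact eigenvalues are known to be $1/((k-\tfrac12)\pi)^2$. Eigendecomposing the sampled kernel matrix recovers them.

```python
import numpy as np

# Karhunen-Loeve in discretized form: eigendecompose the sampled
# Brownian-motion covariance kernel K(s, t) = min(s, t) on [0, 1].
# Exact eigenvalues: 1 / ((k - 1/2) * pi)^2, k = 1, 2, ...
n = 1000
t = (np.arange(n) + 0.5) / n                  # midpoint grid on [0, 1]
dt = 1.0 / n

K = np.minimum(t[:, None], t[None, :])
eigvals = np.linalg.eigvalsh(K * dt)[::-1]    # largest first

exact = 1.0 / ((np.arange(1, 6) - 0.5) * np.pi)**2
kl_err = np.max(np.abs(eigvals[:5] - exact))
print(eigvals[:5])                            # leading eigenvalue ~ 4/pi^2
print(kl_err)                                 # small discretization error
```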