Continuous analogy of the matrix inversion


I am thinking about matrix-like entities with continuous indices. Perhaps such "continuous matrices" could be defined as complex-valued functions on $\mathbb{R}^+\times\mathbb{R}^+$.

It seems clear that the continuous analogue of the identity matrix is the Dirac delta $\delta(x-y)$. Multiplication could be defined as

$$(f \circ g)(x,y)=\int_0^\infty f(x,u)g(u,y)du$$

I am looking for the $f : \mathbb{R}^+\times\mathbb{R}^+ \rightarrow \mathbb{C}$ which, for a given $g : \mathbb{R}^+\times\mathbb{R}^+ \rightarrow \mathbb{C}$, satisfies

$$\int_0^\infty f(x,u)g(u,y)\,du = \delta(x-y) \quad \forall x,y \in \mathbb{R}^+$$

Does this algebra have a name? What could be the continuous analogue of matrix inversion in it?
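For intuition, the discrete picture can be sketched numerically: sample a kernel on a uniform grid, and ordinary matrix inversion (with a grid-step scaling) plays the role of the inverse kernel. The kernel below is invented for the demo so that its sampled matrix is invertible; this is a sketch, not a general method.

```python
import numpy as np

# Discretize a kernel g(x, y) on a uniform grid; the composition
# (f ∘ g)(x, y) = ∫ f(x,u) g(u,y) du becomes a scaled matrix product,
# and the discrete stand-in for δ(x - y) is the matrix I / Δx.
# The kernel below is made up for the demo (hypothetical choice).
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
G = np.exp(-(x[:, None] - x[None, :]) ** 2) + np.eye(n)

# Candidate inverse kernel sampled on the grid: inv(G) / Δx²,
# so that  Σ_u F[p,u] G[u,q] Δx = I[p,q] / Δx  (the discrete delta).
F = np.linalg.inv(G) / dx**2
composed = (F @ G) * dx
print(np.allclose(composed * dx, np.eye(n)))  # True
```

Note the extra factors of $\Delta x$: they are exactly what distinguishes the continuous composition $\int f(x,u)g(u,y)\,du$ from a plain matrix product.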


3 Answers

BEST ANSWER

The generalization of matrices to "continuous indices" is called linear operators. What you have in mind seems to be operators on spaces of functions, and these operators are defined by "integral kernels" $k(x,y)$; i.e., a function $f$ is mapped to some $g$ by $$g(y) =\int k(x,y)f(x)\,dx $$ (omitting details about the domains).

The thing with these kinds of operators is that it is not clear whether they are continuous, and invertibility is much more subtle. For example, a continuous kernel $k$ gives rise to a compact operator between spaces of continuous functions (and the same holds for a square-integrable kernel on spaces of square-integrable functions). Compact operators are not continuously invertible, even when restricted to their range and after factoring out the null space (for this you need the range of the operator to be infinite dimensional). You can still define the Moore-Penrose pseudoinverse for linear operators, but usually it is not defined everywhere and it is not continuous (read: unbounded).
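The lack of a bounded inverse shows up numerically: discretizations of a compact operator with a smooth kernel become ever worse conditioned as the grid is refined. A minimal sketch (the Gaussian kernel is an arbitrary example, and this is an illustration, not a proof):

```python
import numpy as np

# Numerical illustration: sampled matrices of a smooth-kernel (compact)
# operator have rapidly decaying singular values, so their condition
# number blows up as the grid is refined.
conds = []
for n in (5, 10, 20):
    x = np.linspace(0.0, 1.0, n)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2) * (x[1] - x[0])
    conds.append(np.linalg.cond(K))
print(conds)  # condition numbers grow rapidly with n
```

This is the discrete face of the statement above: any attempted inverse amplifies errors without bound as the approximation improves.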

ANSWER

The entity can be called a kernel transformation. I want to note that the main objects of study are linear transformations rather than matrices. Kernel transformations are linear transformations on function spaces (e.g. the Fourier transform, the Hankel transform). To call a transformation invertible, it should be a bijection, and that depends on the space you work on (e.g. the Fourier transform on the space of rapidly decaying functions). The identity is represented by the Dirac delta function $\delta (x-y)$. The subject is mainly studied in functional analysis. The idea is also used in quantum mechanics, since the mathematical background of the theory depends on functional analysis, especially Hilbert spaces (more accurately, rigged Hilbert spaces).
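The Fourier transform mentioned above is exactly such a kernel transformation, with kernel $e^{-2\pi i x y}$, and it is one of the cases where a clean inverse kernel exists. Its finite shadow, the DFT matrix, makes the inversion concrete:

```python
import numpy as np

# The Fourier transform has kernel e^{-2πi x y}; its discrete counterpart
# is the DFT matrix W[p, q] = exp(-2πi p q / n), whose inverse is
# conj(W) / n -- a finite version of the Fourier inversion formula.
n = 8
p = np.arange(n)
W = np.exp(-2j * np.pi * np.outer(p, p) / n)
W_inv = np.conj(W) / n
print(np.allclose(W_inv @ W, np.eye(n)))  # True
```

Here the "inverse kernel" $e^{+2\pi i x y}$ is known in closed form, which is why Fourier inversion is so much better behaved than inverting a generic kernel.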

ANSWER

Before analyzing the continuous analogue of matrix multiplication ($\bf F = \bf K \bf G$), it is worth stepping through the continuous analogue of a linear system $$ {\bf f} = {\bf K}\,{\bf g}\quad \Leftrightarrow \quad f(x) = \int\limits_a^b {K(x,y)g(y)\,dy} $$

In fact, assume that $f(x),g(y),K(x,y)$ are defined over the real intervals $a\le x <b, \; a\le y <b$. Dividing both intervals into $n$ equal sub-intervals of length $\Delta x = \Delta y = (b-a)/n$ and putting $$ \left\{ \begin{aligned} & 1 \le p,q \le n \\ & K(a + p\,\Delta x,\;a + q\,\Delta y) = K_{p,q} \\ & f(a + p\,\Delta x) = f_{p} \\ & g(a + q\,\Delta y) = g_{q} \end{aligned} \right. $$ under the assumptions of the Riemann sum, we can approximate the integral by $$ f_{p} = \sum\limits_{q = 1}^n {K_{p,q} \,g_{q} \,\Delta y} $$ (and that is actually done in the numerical solution).
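The Riemann-sum discretization above can be sketched directly: build the matrix $K_{p,q}\,\Delta y$, apply it to a known $g$, and recover $g$ by solving the linear system. The kernel here is invented and deliberately well conditioned; genuine first-kind Fredholm problems are typically ill-posed and need regularization.

```python
import numpy as np

# Discretize f(x) = ∫_a^b K(x,y) g(y) dy as the linear system f = (K Δy) g,
# then invert it to recover g.  The kernel is a hypothetical, well-conditioned
# example chosen so that the plain solve succeeds.
a, b, n = 0.0, 1.0, 50
y = np.linspace(a, b, n)
dy = (b - a) / n
K = np.exp(-(y[:, None] - y[None, :]) ** 2) + np.eye(n)
g_true = np.sin(2 * np.pi * y)

f = (K * dy) @ g_true                 # forward map via the Riemann sum
g_rec = np.linalg.solve(K * dy, f)    # discrete "inversion of the kernel"
print(np.allclose(g_rec, g_true))  # True
```

In this discrete picture, "inverting the kernel" is literally inverting the matrix $K_{p,q}\,\Delta y$; the continuous difficulties appear as the grid is refined.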

All this comes under the field of Functional Analysis, and Linear Operators in particular, as already said. But when the function $g(y)$ is the unknown and we are making the analogy with the resolution of linear systems, thus involving the inversion of $K(x,y)$, the subject is known as Integral Equations.
In particular, the one we have introduced above is a Fredholm equation of the first kind.

Now, while some basic theorems from the finite-dimensional case carry over to the continuous one, mainly due to the linearity of both, the inversion of the kernel function $K(x,y)$ is quite involved, because the basic notion of the determinant unfortunately does not transfer easily.

You can refer to any good text on Integral Equations for more details on this interesting subject.