In which kinds of books can I find the definition of exponential operator?


I have been studying real and functional analysis recently, but I hadn't encountered an exponential operator such as $e^{L}$ until I recently read a paper. In which kinds of books can I find the definition of this concept? Operator algebra or functional analysis or Lie algebra? I'd appreciate it if you'd help me. Thank you.





Here are some linear algebra topics worth studying; they also lead to a practical application in quantum mechanics.

A lot of the analysis is based on common ways of building these concepts up from basic arithmetic.

For example, $2^N$ for a positive integer $N$ means $2\cdot 2\cdot 2\cdots 2$, a product of $N$ twos.

From this meaning you can derive $2^M \cdot 2^N=2^{M+N}$. You can also work backwards by dividing by two and get $2^0=1$, $2^{-1}=1/2$, $2^{-2}=1/4$, and so on.

Strictly speaking, though, this is not division but multiplication by the inverse, which is the more fundamental definition of division.
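These exponent laws can be checked directly in Python (an illustrative snippet, not part of the original answer):

```python
# Exponent laws derived above, verified with plain Python arithmetic.
assert 2**3 * 2**4 == 2**(3 + 4)    # 2^M * 2^N = 2^(M+N)
assert 2**0 == 1                    # working backwards by halving
assert 2**-1 == 0.5
assert 2**-2 == 0.25
```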

From here you can generalize these concepts to matrices.

Given an invertible matrix $M$, we can multiply it by itself $n$ times. Multiplying the identity matrix repeatedly by the inverse of $M$ gives the negative powers of $M$.
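A minimal numpy sketch of positive and negative matrix powers; the matrix here is an assumed example, not from the answer:

```python
import numpy as np

# An invertible matrix M and its inverse.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M_inv = np.linalg.inv(M)

# M^3 by repeated multiplication, as in the scalar case.
M_cubed = M @ M @ M
assert np.allclose(M_cubed, np.linalg.matrix_power(M, 3))

# M^{-2} = (M^{-1})^2, so M^2 times M^{-2} recovers the identity.
M_neg2 = M_inv @ M_inv
assert np.allclose(M @ M @ M_neg2, np.eye(2))
```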

You can diagonalize matrices to facilitate understanding fractional powers of a matrix.

If $M=A^{-1}BA$, then $M^p=A^{-1}B^pA$, which makes sense even for fractional values of $p$, for suitable $M$.
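A sketch of this identity with a diagonal $B$, where $B^{1/2}$ is just the square roots of the diagonal entries (the matrices are assumed examples):

```python
import numpy as np

# M = A^{-1} B A with B diagonal, so M^p = A^{-1} B^p A even for p = 1/2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.diag([4.0, 9.0])
M = np.linalg.inv(A) @ B @ A

# Square root via diagonalization: M^{1/2} = A^{-1} B^{1/2} A.
sqrt_M = np.linalg.inv(A) @ np.diag([2.0, 3.0]) @ A
assert np.allclose(sqrt_M @ sqrt_M, M)   # (M^{1/2})^2 recovers M
```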

You can generalize using the Binomial theorem:

For a nonnegative integer $p$, $(I+A)^p=\sum_{k=0}^p {p \choose k}A^k$; in the general form the sum runs to infinity with generalized binomial coefficients, allowing fractional exponents (and converging when $\|A\|<1$).
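A sketch of the generalized binomial series for fractional $p$; the helper names and test matrix are assumptions, not from the answer:

```python
import numpy as np
from math import factorial

def binom(p, k):
    """Generalized binomial coefficient p(p-1)...(p-k+1)/k!, valid for fractional p."""
    num = 1.0
    for j in range(k):
        num *= (p - j)
    return num / factorial(k)

def matrix_power_series(A, p, terms=60):
    """(I + A)^p via the generalized binomial series; converges for ||A|| < 1."""
    n = A.shape[0]
    result = np.zeros_like(A)
    Ak = np.eye(n)                       # running power A^k, starting at A^0
    for k in range(terms):
        result += binom(p, k) * Ak
        Ak = Ak @ A
    return result

A = np.array([[0.10, 0.05],
              [0.02, 0.20]])
S = matrix_power_series(A, 0.5)          # (I + A)^{1/2}
assert np.allclose(S @ S, np.eye(2) + A)
```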

A text on real analysis teaches that, from the binomial theorem and its general form, you can pass from treating the base as the independent variable to treating the exponent as the independent variable, i.e. $e^x=1+x+x^2/2!+x^3/3!+\cdots$, once you define limits as they apply to matrices.

In particular, $(1+a)^N=1+Na+\cdots$ becomes $(1+a/N)^N=1+a+\cdots$ after replacing $a$ with $a/N$. Taking the limit as $N$ goes to infinity gives $e^a$.
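A quick numeric check of this limit (the value of $a$ is arbitrary):

```python
import math

# (1 + a/N)^N approaches e^a as N grows.
a = 1.3
approx = (1 + a / 10**6) ** 10**6        # N = 10^6
assert abs(approx - math.exp(a)) < 1e-4  # already very close to e^a
```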

By similar reasoning, for a matrix $M$, you have $e^M=I+M+M^2/2!+M^3/3!+...$

In quantum mechanics you can take $M=-\Delta x_0 \hat{D}$, where $\hat{D}$ is the differential operator, $\frac{\partial}{\partial x}$ in one dimension. (In terms of the momentum operator $\hat{p}=-i\hbar\frac{\partial}{\partial x}$, this is $M=-\frac{i}{\hbar}\Delta x_0\,\hat{p}$.)

Plugging it in you get:

$e^M=I+(-\Delta x_0)\frac{\partial}{\partial x}/1!+(-\Delta x_0)^2\frac{\partial^2}{\partial x^2}/2!+...$

Applied to a function $f$, this series is the Taylor expansion of $f(x-\Delta x_0)$. So the exponential operator, which can be expressed as a matrix, is the sum of powers of another linear operator; this particular one is the translation operator.
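A sketch of the translation operator acting on a polynomial, where the exponential series terminates and the shift is exact (the polynomial and shift amount are assumed examples):

```python
import numpy as np
from math import factorial

# e^{-a d/dx} applied to f should produce f(x - a); for a cubic the
# series I - a D + a^2 D^2/2! - a^3 D^3/3! terminates after four terms.
f = np.poly1d([1.0, 0.0, -2.0, 5.0])      # f(x) = x^3 - 2x + 5
a, x = 0.7, 1.5

shifted = f(x)                            # n = 0 term
g = f
for n in range(1, 4):                     # derivatives up to order 3
    g = np.polyder(g)
    shifted += (-a) ** n * g(x) / factorial(n)

assert np.isclose(shifted, f(x - a))      # the series reproduces f(x - a)
```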


A. Pazy, "Semigroups of Linear Operators and Applications to Partial Differential Equations".

This is one of the most readable books on the subject. He starts with bounded operators, defines $e^{-tL}$ using an exponential series, and then develops the more general theory.
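For a bounded operator, here a finite matrix standing in for $L$, $e^{-tL}$ can be sketched via an eigendecomposition; the matrix and helper name are assumptions for illustration:

```python
import numpy as np

def semigroup(L, t):
    """e^{-tL} via L = V diag(w) V^{-1}, so e^{-tL} = V diag(e^{-tw}) V^{-1}."""
    w, V = np.linalg.eig(L)
    return (V @ np.diag(np.exp(-t * w)) @ np.linalg.inv(V)).real

L = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The semigroup law e^{-(t+s)L} = e^{-tL} e^{-sL}, and e^{0} = I.
assert np.allclose(semigroup(L, 0.0), np.eye(2))
assert np.allclose(semigroup(L, 0.3) @ semigroup(L, 0.5), semigroup(L, 0.8))
```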

The original idea of treating time evolution with an exponential came from the electrical engineer O. Heaviside. His ideas were highly original, controversial, and not well understood; they eventually led to the development of the modern Laplace transform and its Bromwich integral inverse.

Heaviside introduced the idea of an evolution operator $S(t)$ that would take the state $x$ of an electrical system to its state $S(t)x$ after $t$ seconds. He reasoned that a time-independent circuit would have the property that $S(t')(S(t)x)=S(t+t')x$; that is, evolving $t$ seconds and then $t'$ more seconds would be the same as evolving $t+t'$ seconds. And $S(0)=I$ would hold because you're not doing anything to the system by evolving $0$ seconds.
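Both properties can be illustrated with the simplest electrical example, a discharging RC circuit, where $S(t)x = e^{-t/RC}x$ (component values here are hypothetical):

```python
import math

# Evolution operator for a capacitor voltage x discharging through R.
R, C = 2.0, 0.5                          # hypothetical component values

def S(t):
    """Scalar evolution operator: multiply the state by e^{-t/(RC)}."""
    return math.exp(-t / (R * C))

x = 5.0                                   # initial voltage
assert math.isclose(S(0.0) * x, x)                      # S(0) = I
assert math.isclose(S(0.4) * (S(1.1) * x), S(1.5) * x)  # S(t')S(t) = S(t+t')
```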