How is analysis beautiful? -- confusion from an algebraist

1.5k Views

As a math major about to go into grad school, I find the algebra side of mathematics beautiful and inspiring -- I like to explore the hidden structure of things. I also find geometry/topology interesting, as they bring intuition to something we can visualize. Yet I can't feel the beauty of analysis; I only feel its difficulty to visualize and its complicated ad-hoc techniques for manipulating the epsilons.

I couldn't find satisfactory answers online. Most math educators seem to like algebra (or easier-to-explain topics in general). It just seems hard to convey the big picture of analysis. On other sites like Reddit and Quora, I have mostly seen evidence for why people love algebra, but little evidence for why people love analysis. (Possible exceptions may be Riemannian geometry/complex manifolds, but I know very little about the details.)

For what it's worth, I am also physics/string theory inclined and know that much of theoretical physics that's hard to experimentally verify is driven by "mathematical appeal". I have not studied much analysis beyond measure theory. I would like to invite experts in analysis or people who have had any inspiring analysis courses to share their excitement. (You are also welcome to share why you hated analysis if you really want to. I just watched 3B1B's monster group video and felt more excited about algebra)

More specifically, can you share results from analysis, which provide a deeper understanding of the underlying structures analysts work with? I am trying to get a sense of the beauty of analysis, and struggling to do so because it all seems very ad-hoc.

Sorry if this question seems too opinion-based. But I believe the answer is illuminating to many rising math students. Technically, this post belongs to "Constructive subjective questions" so it should be reopened. If you also think so, you can vote for "reopen" below.

3 Answers

BEST ANSWER

Of course, it is a matter of taste. However, I think it is helpful to understand the history of analysis. I myself really enjoyed the book Mathematics: The Loss of Certainty, by Morris Kline.

In the good old days (pre-19th Century), people did calculus willy-nilly. But it was realized that rigor was required, because contradictory results came up (like several different values for $\sum_{n=1}^\infty \frac{(-1)^{n+1}}n$). So they made it rigorous with the use of $\epsilon$-$\delta$ proofs, and later a rigorous notion of integration. These kinds of proofs are now a rite of passage for anyone who wants to do analysis, before they do something that is applicable (like differential equations, probability theory, and harmonic analysis).
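The "several different values" phenomenon mentioned above can be made concrete: by Riemann's rearrangement theorem, reordering the terms of the conditionally convergent series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}n$ changes its sum. A small sketch (the specific rearrangement pattern below is one standard choice, not the only one):

```python
# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2,
# but a rearrangement of the same terms converges to a different value.
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (converges to ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range((1), n_terms + 1))

def rearranged(n_blocks):
    """One positive term, then two negative terms, repeated:
    1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...  (converges to (ln 2) / 2)."""
    total = 0.0
    pos, neg = 1, 2  # next odd denominator, next even denominator
    for _ in range(n_blocks):
        total += 1 / pos
        pos += 2
        total -= 1 / neg + 1 / (neg + 2)
        neg += 4
    return total

print(alternating_harmonic(10**6))  # close to ln 2  ≈ 0.6931
print(rearranged(10**6))            # close to ln 2 / 2 ≈ 0.3466
```

Same terms, different sum: exactly the kind of contradiction that forced the rigorization of analysis.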

Now some people fall in love with these $\epsilon$-$\delta$ proofs, and these people can go on to study abstract Banach space theory. But other people, like Norbert Wiener, used analysis to develop more interesting and applicable stuff like mathematical Brownian motion (that is, the Wiener process). Indeed I remember a quote by Norbert Wiener where he compared himself to Stefan Banach, but I am unable to locate this quote.

So there is a side to analysis that does have more structure, and in this sense, it does have more of the flavor of algebra.


If your definition of "beautiful" is restricted to "structured", I'm afraid you may never find much beauty in analysis. In another post on either MSE or MO, someone cleverly summarized algebra as the art of exploring structure in complicated objects, while analysis is the art of coming up with a very limited set of rules and definitions and seeing what sort of properties emerge.

As a beginning graduate student in analysis, I think these emergent properties can be quite beautiful, even if they are not inherently "structural" facts. I'd like to provide two examples, hopefully accessible at the undergraduate/beginning graduate level:

Partial differential equations: consider the family of harmonic functions: functions $u:U\subset\mathbb{R}^n\to\mathbb{R}$ with the property that $\Delta u := \sum_{i=1}^n u_{x_ix_i}= 0$. It turns out that such functions satisfy the mean value property: if $B(x,r)\subset U$ is the ball of radius $r$ centered at any $x$, then $u(x)$ must equal both the volume average and the surface average of $u$ over any such ball: $$u(x) = \frac{1}{|B(x,r)|}\int_{B(x,r)} u(y)\,dy = \frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u(y)\,dS(y).$$ This mean value property seems, superficially, quite disconnected from the fact that the sum of all second-order partials of a function must vanish, and yet harmonic functions satisfy this property, among many others. What is perhaps even more fascinating is that the converse holds: any locally $L^1$ function satisfying the mean value property is automatically smooth and harmonic!
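A quick numerical sanity check of the surface mean value property, using a harmonic function of my own choosing (this is an illustration, not a proof):

```python
# u(x, y) = x^2 - y^2 is harmonic on R^2: u_xx + u_yy = 2 - 2 = 0.
# The mean value property says its average over any circle equals its
# value at the center.
import math

def u(x, y):
    return x * x - y * y

def circle_average(x0, y0, r, n=100000):
    """Average of u over the circle of radius r centered at (x0, y0),
    approximated by uniform sampling of the angle."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += u(x0 + r * math.cos(t), y0 + r * math.sin(t))
    return total / n

print(u(1.0, 2.0))                    # -3.0
print(circle_average(1.0, 2.0, 0.5))  # ≈ -3.0, matching the center value
```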

Functional analysis: Perhaps a more "structural" example. The Riesz representation theorem is never taught with as much excitement as it should be. Let $H$ be a Hilbert space, endowed with an inner product $\langle \cdot,\cdot\rangle$. Let $f$ be any bounded linear functional on $H$; that is, a continuous linear map from $H\to\mathbb{R}$.

If $H$ is just $\mathbb{R}^n$ endowed with the Euclidean norm, then the inner product is just the dot product. What's the only linear way to map a vector $v$ to a real number? At least to me, it is clear that the only way is to take linear combinations of its components, so that any linear functional $f(v)$ takes the form $$f(v) = \sum_{i=1}^n c_iv_i.$$ But this is just the dot product, and so we can say that any bounded linear functional on $\mathbb{R}^n$ takes the form $f(v) = c\cdot v$ for some $c\in H$. This gives us a nice way to, erm, represent the family of all bounded linear functionals on $\mathbb{R}^n$.

But wait, the Riesz Representation Theorem says that this is true for any Hilbert space! That is, any bounded linear functional $f$ on $H$ can be written in the form $f(v) = \langle \phi, v\rangle$ for some $\phi\in H$. The fact that this rather obvious property of $\mathbb{R}^n$ translates to an arbitrary Hilbert space is, at least to myself, quite mysterious and remarkable!
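In the finite-dimensional case the representing vector is completely explicit: $\phi_i = f(e_i)$ for the standard basis vectors $e_i$. A toy sketch with a functional of my own choosing:

```python
# Any linear functional f on R^n equals v -> c . v, where the
# representing vector c is recovered by applying f to the basis vectors.

def f(v):
    # some bounded linear functional on R^3 (arbitrary example)
    return 2 * v[0] - v[1] + 5 * v[2]

n = 3
basis = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
c = [f(e) for e in basis]  # representing vector: c_i = f(e_i)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = [3, 1, -2]
print(c)              # [2, -1, 5]
print(f(v), dot(c, v))  # both equal 2*3 - 1 + 5*(-2) = -5
```

The content of the Riesz theorem is that this recipe survives the leap to infinite dimensions, where no basis computation like this is available.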

More generally, some analysts (at least myself) find beauty in the connection between abstract, theoretical results in analysis and their ability to represent the real world; people much more intelligent and articulate than myself have written about this at length, notably Eugene Wigner.


When we start learning analysis, it looks like a mess. Epsilons here, deltas there, monstrous counterexamples, many different notions of derivative, rules for differentiation and integration: one quickly feels lost. But that's only at the beginning. Once you dive deeper, analysis reveals a lot of structure.

Many of the seemingly distinct concepts can be consolidated

I will give a relatively simple example where the confusing multitude of concepts can be elegantly consolidated into a single concept: the total derivative. Let $V,W$ be Banach spaces. A function $f:V\to W$ is called differentiable at $x_0\in V$ if there exist a continuous linear map $L:V\to W$, called the total differential of $f$ at $x_0$, and a remainder map $R:V\to W$ such that

  1. $f(x)=f(x_0)+L(x-x_0)+R(x)$
  2. $\lim\limits_{x\to x_0}\frac{R(x)}{\Vert x-x_0\Vert}=0$

Continuity of $L$ guarantees that the expression in 1. is continuous. The first condition says that $f$ can be approximated by the affine function $x\mapsto f(x_0)+L(x-x_0)$, while the second condition specifies how good this approximation is: the difference between $f$ and this approximation goes to $0$ faster than $\Vert x-x_0\Vert$. If these conditions are fulfilled, then $f$ is totally differentiable at $x_0$, and almost all other concepts of derivatives reduce to this definition:

  1. In single variable analysis, the linear map $L$ is just $L(v)=f'(x_0)\cdot v$.
  2. For scalar fields, $L(v)=\nabla f(x_0)\cdot v$, where $\nabla f(x_0)$ is the gradient of $f$ at $x_0$.
  3. For vector fields, the matrix representation of $L$ is the Jacobian of $f$.
  4. The directional derivative at $x_0$ in the direction $v$ is the total differential of $g(t):=f(x_0+vt)$ at $0$.
  5. The partial derivatives are just the directional derivatives in the direction of the coordinate axes.
  6. The functional derivative of a functional $\mathcal F:F\to\mathbb R$ (where $F$ is a complete function space) is the total differential of $\mathcal F$.

Suddenly all these disparate concepts are just one, and I think that's beautiful.
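The defining conditions above can be checked numerically. Below is a sketch with a map $f:\mathbb{R}^2\to\mathbb{R}^2$ of my own choosing, where $L$ is given by the Jacobian (point 3 in the list) and the ratio $\Vert R(x)\Vert/\Vert x-x_0\Vert$ visibly tends to $0$:

```python
# For f(x, y) = (x^2 + y, sin(x*y)), the Jacobian at (x0, y0) is the
# matrix of the total differential L, and the remainder R(x) shrinks
# faster than ||x - x0||.
import math

def f(x, y):
    return (x * x + y, math.sin(x * y))

def jacobian(x, y):
    # partial derivatives computed by hand
    return [[2 * x, 1],
            [y * math.cos(x * y), x * math.cos(x * y)]]

x0, y0 = 1.0, 0.5
J = jacobian(x0, y0)
f0 = f(x0, y0)

def remainder_ratio(h):
    """||R(x)|| / ||x - x0|| for x = (x0 + h, y0 + h)."""
    fx = f(x0 + h, y0 + h)
    Lh = (J[0][0] * h + J[0][1] * h, J[1][0] * h + J[1][1] * h)
    R = (fx[0] - f0[0] - Lh[0], fx[1] - f0[1] - Lh[1])
    return math.hypot(*R) / math.hypot(h, h)

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, remainder_ratio(h))  # ratios shrink roughly linearly in h
```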

Analysis is the continuation of algebra

Many algebraic fields study simple geometry. By simple I certainly don't mean easy, but they often take the most ideal scenario possible to study geometry. Linear geometry, for instance, or the geometry of polynomial functions. Analysis takes the ideas of algebra and asks: How flexible can we be if we still want to apply all this idealized algebra? The definition of the total differential above is a prime example. We know a lot about linear geometry, that is, the geometry of vector spaces and their (affine) subspaces. Affine subspaces can be defined in terms of affine maps. The map $x\mapsto f(x_0)+L(x-x_0)$ is an affine map. So the definition above is essentially saying that a function is differentiable if it is reasonably close to an affine map, about which we know a lot. This is what leads to some of the biggest theorems in a first analysis course, like the inverse function theorem: A linear equation $L(x)=y$ has a unique solution for all $y$ if $L$ is an invertible linear map. Well, the nonlinear equation $f(x)=y$ has a unique solution $x$ near $x_0$ for all $y$ in a neighborhood of $f(x_0)$ if the total differential of $f$ at $x_0$ is an invertible linear map. So analysis essentially takes the concepts of algebra (especially linear algebra) one step further.
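The "close to an affine map" idea is also how one computes the local inverse in practice: repeatedly solve the linearized equation, i.e. Newton's method. A sketch with an example function of my own choosing, where $f'(x_0)$ is invertible (nonzero):

```python
# Near x0 = 0 the derivative of f(x) = x + x^3 is nonzero, so the
# inverse function theorem guarantees a local solution of f(x) = y.
# Newton iteration finds it by solving the affine approximation
# f(x_n) + f'(x_n)(x - x_n) = y at each step.
def f(x):
    return x + x ** 3

def fprime(x):
    return 1 + 3 * x ** 2  # nonzero everywhere, in particular near x0 = 0

def solve_local(y, x=0.0, tol=1e-12, max_iter=50):
    """Find x near x0 = 0 with f(x) = y via Newton iteration."""
    for _ in range(max_iter):
        step = (f(x) - y) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_star = solve_local(0.2)
print(x_star, f(x_star))  # f(x_star) ≈ 0.2
```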

Algebra and analysis aren't actually that distinct

I mean, they're certainly different, but they have a lot of intersections. For instance, the fundamental theorem of algebra is one of the standard results of a complex analysis course. And even the proof shown in a standard abstract algebra course relies on analysis, specifically on the fact that every real polynomial of odd degree has a root, which is quickly shown using the intermediate value theorem. On the other hand, algebra comes up in complex analysis as well: The total differential of an injective holomorphic function is a member of the conformal group $\operatorname{CO}(\mathbb R^2)$, and the fact that conformal maps of the plane are holomorphic can be shown using linear algebra.
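That intermediate value theorem step is fully constructive: an odd-degree real polynomial changes sign for large $|x|$, so bisection is guaranteed to close in on a root. A sketch with a polynomial of my own choosing:

```python
# p has odd degree, so p(x) -> -inf as x -> -inf and +inf as x -> +inf;
# by the intermediate value theorem a real root exists, and bisection
# on a sign-changing interval finds it.
def p(x):
    return x ** 3 - 4 * x + 1

def bisect_root(f, a, b, tol=1e-12):
    """Bisection: requires f(a) and f(b) to have opposite signs."""
    assert f(a) * f(b) < 0
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

r = bisect_root(p, -10.0, 10.0)
print(r, p(r))  # p(r) ≈ 0
```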