Surprising Generalizations


I just learned (thanks to Harry Gindi's answer on MO and to Qiaochu Yuan's blog post on AoPS) that the Chinese remainder theorem and Lagrange interpolation are really just two instances of the same thing. Similarly, the method of partial fractions can be applied to rational numbers rather than polynomials. I find that seeing a method applied in different contexts, or learning a connection that wasn't previously apparent, helps me gain a deeper understanding of both.
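To make the CRT/Lagrange parallel concrete, here is a minimal Python sketch (my own illustration, not from the linked posts): both constructions build "basis" elements that equal 1 at one modulus/node and 0 at all the others, then take a linear combination.

```python
# Chinese remainder theorem and Lagrange interpolation share a blueprint:
# build basis elements that are 1 at one modulus/node and 0 at the rest,
# then combine them linearly.

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                    # vanishes mod every other m_j
        x += r * Mi * pow(Mi, -1, m)   # normalized so this term is 1 mod m
    return x % M

def lagrange(points, x):
    """Interpolating polynomial through (x_i, y_i), evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        basis = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # 1 at x_i, 0 at the other x_j
    # each term yi * basis plays the role of r * Mi * Mi^{-1} above
        total += yi * basis
    return total

print(crt([2, 3, 2], [3, 5, 7]))              # 23, the classic example
print(lagrange([(0, 1), (1, 3), (2, 9)], 2))  # recovers 9.0 at x = 2
```

Note how `Mi * pow(Mi, -1, m)` and the product `basis` are the same object in two different rings.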

So I ask, can you help me find more examples of this? Especially ones which you personally found inspiring.

There are 14 answers below.


Localization

When I learned that you could localize categories (and not just abelian ones!) I was floored. The general idea, that we take a class of morphisms in a category and send them functorially to another category where they become isomorphisms, is awesome. It is also very important in my work, which generalizes some ideas of algebraic geometry to a more categorical setting.

Here is a link!


Galois Connections

Let's be honest, the correspondence between Galois groups and field extensions is pretty hot. The first time I saw it I was duly impressed. However, about two years ago, I learned about universal covering spaces. Wow! I swear my understanding of covering spaces doubled when the professor told me that this was a "Galois correspondence for fundamental groups and covering spaces".

Again here is a link!


Classification of finitely-generated abelian groups and Jordan normal form are two instances of the structure theorem for finitely generated modules over a principal ideal domain.


Model categories as a framework for both complexes of $R$-modules and topological spaces (making precise, for example, the analogy between taking a projective resolution and replacing a space with a weakly homotopy equivalent CW-complex).


I agree! I spend much of my mathematical free time exploring such connections.

Here is a basic one that I constantly ponder. The rules of matrix multiplication encode two things:

  • How to compose a linear transformation $A$ with another linear transformation $B$, with respect to a fixed basis.
  • How to follow an edge of type $A$ in a graph and then follow an edge of type $B$ (where the edges of types $A$ and $B$ partition the edge set).

This means that one can study walks on graphs by studying how a matrix called the adjacency matrix behaves. This leads into all sorts of beautiful mathematics; for example, this is the basic tool behind Google's PageRank algorithm, and it also in some sense motivated Heisenberg's matrix mechanics formulation of quantum mechanics. I often try to recast results in linear algebra in terms of some combinatorial statement about walks on graphs.
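A minimal NumPy sketch of this dictionary (my own toy example, not part of the original answer): entries of powers of the adjacency matrix count walks, because matrix multiplication sums over all intermediate vertices.

```python
import numpy as np

# Adjacency matrix of the 4-cycle with vertices 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# (A^k)[i, j] counts walks of length k from i to j: multiplying matrices
# is exactly "follow one edge, then another", summed over all midpoints.
walks2 = np.linalg.matrix_power(A, 2)
walks3 = np.linalg.matrix_power(A, 3)

print(walks2[0, 0])  # 2: the closed walks 0-1-0 and 0-3-0
print(walks3[0, 0])  # 0: a bipartite graph has no odd closed walk
```

The same observation, with $A$ normalized so its rows sum to 1, is the transition matrix behind PageRank.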


The law of cosines and the equation for the variance of a sum of (possibly correlated) random variables are both consequences of basic inner product space properties. Details here.
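A quick numerical sanity check of this (my own sketch, not from the linked details): with the inner product $\langle X, Y\rangle = \operatorname{Cov}(X, Y)$ on centered random variables, the variance identity is literally the law of cosines $|a+b|^2 = |a|^2 + |b|^2 + 2\langle a, b\rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)   # deliberately correlated with x

# Treat centered random variables as vectors with <X, Y> = Cov(X, Y).
# Then Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) is exactly
# |a + b|^2 = |a|^2 + |b|^2 + 2 <a, b>.
var_sum = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y, bias=True)[0, 1]
print(abs(var_sum - rhs) < 1e-9)  # True: identical up to rounding
```

(`bias=True` makes `np.cov` use the same population normalization as `np.var`, so the identity holds exactly, not just asymptotically.)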


I loved learning about how differential forms and the exterior derivative generalize 3-d vector calculus (div, grad, curl). Differential forms are so elegant in comparison, work in arbitrary dimensions, and give rise to beautiful mathematics (e.g. de Rham cohomology, Hodge theory). And of course, the generalized Stokes' theorem is one of the prettiest equations: $\int_{\partial R} \phi = \int_R d\phi$.
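Here is a small numerical illustration of the 2-d instance, Green's theorem (my own example, with a hand-picked 1-form): the boundary integral of $\phi = -y\,dx + x\,dy$ around the unit square matches the integral of $d\phi = 2\,dx\wedge dy$ over its interior.

```python
import numpy as np

# Green's theorem, the 2-d case of generalized Stokes:
#   oint_{dR} (P dx + Q dy) = iint_R (dQ/dx - dP/dy) dA.
# Take R = unit square, P = -y, Q = x, so dQ/dx - dP/dy = 2 and the
# right-hand side is 2 * area(R) = 2.

t = np.linspace(0.0, 1.0, 100_000)

def line_integral(xs, ys):
    """Midpoint-rule value of  int P dx + Q dy  along a polyline."""
    dx, dy = np.diff(xs), np.diff(ys)
    xm, ym = (xs[:-1] + xs[1:]) / 2, (ys[:-1] + ys[1:]) / 2
    return np.sum(-ym * dx + xm * dy)

# Traverse the boundary counterclockwise: bottom, right, top, left.
total = (line_integral(t, 0 * t)               # bottom: (t, 0)
         + line_integral(0 * t + 1, t)         # right:  (1, t)
         + line_integral(t[::-1], 0 * t + 1)   # top:    (1-t, 1)
         + line_integral(0 * t, t[::-1]))      # left:   (0, 1-t)

print(round(total, 6))  # 2.0, matching the area integral
```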


Galois connections are further reaching than one first realizes. One small application is in model theory, where the relation $R$ between sentences of a theory and models, given by $t \mathrel{R} M$ if sentence $t$ is true in model $M$, leads to a correspondence between deductively closed theories and classes of models closed under certain operations, many of which are algebraic in nature. Galois connections also arise in algebraic geometry and computer science, among other fields. There is a book on Galois connections edited by Marcel Erné; for the strongly inquisitive, I recommend checking it out.


I recall seeing a bitter rant by Paul Halmos that, as a graduate student, he wasn't told about a certain generalization of, or connection between, two big topics in mathematics. As I recall, I saw this rant in his book Finite-Dimensional Vector Spaces. I'm sure the rant is well-known enough that someone can supply the details.


Falling more under the heading of "surprising connection" is the relationship between the determinant and volume and thus, by Cramer's rule, between the solution of a system of linear equations and volume. Indeed, the determinant manages to creep its way into a surprising amount of mathematics.
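As a small sketch of the volume connection (my own toy system, not from the answer): each component of the solution is a ratio of two signed volumes, the determinant of $A$ with one column swapped for $b$ over the determinant of $A$ itself.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# |det A| is the area of the parallelogram spanned by the columns of A;
# Cramer's rule writes each solution component as a ratio of two such
# signed areas.
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                              # replace column i by b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)

print(x)                                      # [1. 3.]
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```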


Most surprising connection for me: the random walk with... well, more or less anything :) The heat equation, harmonic functions, quantum mechanics (through Wick rotation), the statistical physics of spin models (where the correspondence is edge interaction strength $\leftrightarrow$ transition probabilities), Gaussian fields (covariance matrix $\leftrightarrow$ transition probabilities); the list goes on$\ldots$
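The walk/heat-equation link can be seen in a few lines (my own sketch): the distribution of a simple random walk evolves by averaging its neighbors, which is exactly an explicit finite-difference step of the heat equation, and it converges to the Gaussian heat kernel.

```python
import numpy as np

# The law of a simple random walk satisfies
#   p_{t+1}(x) = (p_t(x-1) + p_t(x+1)) / 2,
# i.e. p_{t+1} = p_t + (1/2) * (discrete Laplacian of p_t):
# an explicit finite-difference step of the heat equation.

size, steps = 101, 30
p = np.zeros(size)
p[size // 2] = 1.0                 # walker starts at the center

for _ in range(steps):
    p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))

# After many steps the profile is close to the Gaussian heat kernel
# with variance = number of steps.  The walk only occupies sites of one
# parity at a given time, carrying twice the Gaussian mass there, so we
# compare on even sites only.
x = np.arange(size) - size // 2
gauss = np.exp(-x**2 / (2 * steps)) / np.sqrt(2 * np.pi * steps)
print(np.max(np.abs(p[::2] - 2 * gauss[::2])) < 0.01)  # True
```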


The definition of spectrum for operators on an infinite-dimensional Hilbert space. When I first learned that there are nice operators (bounded, linear, self-adjoint) on infinite-dimensional Hilbert space which have no eigenvalues at all, the possibility of generalizing spectral theory to an infinite-dimensional setting seemed pretty hopeless to me. After all, in finite dimension the eigenvalues of an operator play such a large and significant role in the analysis of the operator. I found it surprising and astounding that there is a good substitute for eigenvalues in the infinite-dimensional case, and that one can actually develop a very powerful spectral theory using it. (I guess that in retrospect this isn't so surprising, seeing that in many infinite-dimensional situations one must apply some "completion" of the analogous construction in finite dimension)


Pretty basic, I know, but the Fundamental Theorem of Calculus (first form) is a generalization of the Telescoping Property of Finite Sums.
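A short numerical sketch of this (my own, with $F(x) = x^3$ as an arbitrary choice): the telescoping sum $\sum_k (F(x_{k+1}) - F(x_k))$ collapses to $F(b) - F(a)$, and replacing each difference by $F'(x_k)\,h$ turns it into a Riemann sum for $\int_a^b F'$; the FTC says the two limits agree.

```python
# Telescoping:  sum_{k} (F(x_{k+1}) - F(x_k)) = F(b) - F(a), exactly.
# FTC:          int_a^b F'(x) dx = F(b) - F(a), its continuum limit.

def F(x):
    return x ** 3  # so F'(x) = 3x^2, and F(2) - F(0) = 8

n = 100_000
h = 2 / n
xs = [k * h for k in range(n + 1)]  # grid on [0, 2]

# The telescoping sum collapses regardless of n:
telescoped = sum(F(xs[k + 1]) - F(xs[k]) for k in range(n))
print(abs(telescoped - 8) < 1e-6)   # True

# Replacing each difference F(x_{k+1}) - F(x_k) by F'(x_k) * h gives a
# Riemann sum for int_0^2 3x^2 dx, which converges to the same value:
riemann = sum(3 * xs[k] ** 2 * h for k in range(n))
print(abs(riemann - 8) < 1e-3)      # True
```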


Here is a transformation formula relating the Incomplete Gamma function $\Gamma(a,z)$ and its regularized form $Q(a,z)$ to the Incomplete Beta function $\text B_z(a,b)$ and its regularized form $\text I_z(a,b)$. Please see the definitions to understand these functions. We use this hypergeometric limit:

$$\lim_{b\to \infty}b^a\,\text B_{\frac zb}(a,b)=\gamma(a,z)=\Gamma(a)-\Gamma(a,z)$$

which works for many values, though for $z>0$ and under other restrictions it may not. Now we have a truly generalized incomplete gamma function, but expressed as a beta function with $p$ as if it were evaluated at $p=-\infty$.

Now we see a double generalization of the Lambert W function $\text W_k(z)$, though only on a small domain of $\text W_0(z)=\text W(z)$ and $\text W_{-1}(z)$. We see that the Inverse Beta Regularized function $\text I^{-1}_z(a,b)$ generalizes $\text W(z)$, using limits, and the Inverse Gamma Regularized function $Q^{-1}(a,z)$ generalizes $\text W_{-1}(z)$. The inverse gamma/beta functions are quantile functions implemented in Mathematica:

$$\boxed{\lim_{b\to\infty}b\text I^{-1}_x(a,b)=Q^{-1}(a,1-x)}$$

which also works for most values; using this identity may extend the domain of one function while giving a computable representation in terms of the other without needing a series. Please correct me and give me feedback!

Does this $4$-argument inverse function really generalize the exotic Inverse Beta Regularized function?
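Both limits can be checked numerically with SciPy (my own check, with arbitrary sample values $a=2$, $z=1.5$, $x=0.5$ and a large but finite $b$); note SciPy exposes the regularized functions, so the first limit is unregularized by hand.

```python
from scipy import special

a, z, b = 2.0, 1.5, 1e6

# Limit 1:  b^a * B_{z/b}(a, b) -> gamma(a, z)  as b -> infinity.
# SciPy's betainc and gammainc are regularized, so unregularize:
#   B_z(a, b) = betainc(a, b, z) * B(a, b),
#   gamma(a, z) = gammainc(a, z) * Gamma(a).
lhs1 = b**a * special.betainc(a, b, z / b) * special.beta(a, b)
rhs1 = special.gammainc(a, z) * special.gamma(a)
print(abs(lhs1 - rhs1) < 1e-4)  # True

# Limit 2:  b * I^{-1}_x(a, b) -> Q^{-1}(a, 1 - x)  as b -> infinity.
p = 0.5
lhs2 = b * special.betaincinv(a, b, p)
rhs2 = special.gammainccinv(a, 1 - p)
print(abs(lhs2 - rhs2) < 1e-4)  # True
```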