What are some slick proofs of basic algebraic facts?


A couple of years ago, I saw a proof in Horn's "A Second Course in Linear Algebra" of the fact that if a matrix (over a field) has a right inverse then it is automatically invertible, essentially using the fact that $M_n(K)$ is Artinian (the same argument works in any Artinian ring, I expect). At the time I did not know much mathematics and was rather impressed by this proof, as the ones I had seen previously were always "inelegant" rank-related arguments; as a consequence it stuck in my head.

As such, I would like to ask you to share what proofs of this kind you know, if any. Ideally, they should either be elementary proofs using "advanced ideas" (as the one I posted arguably is) or short one-to-three-liners that directly apply more advanced results or theories. By "basic facts" I mean ones at the level of a 2nd-3rd year undergraduate who has had linear algebra, the basic algebra sequence, and topology (this describes the hypothetical audience, not necessarily me).


There are 6 answers below.


Here is a proof of the basic fact that $\sqrt{2}$ is irrational. We use Fermat's theorem on congruent numbers:

Theorem (Fermat, 1640): The number $1$ is not a congruent number, i.e., there is no right triangle with rational side lengths whose area equals $1$.

Proof: If $\sqrt{2}$ were rational, then $\sqrt{2}$, $\sqrt{2}$, and $2$ would be the sides of a right triangle with rational sides and area $\frac{1}{2}\cdot\sqrt{2}\cdot\sqrt{2}=1$. This contradicts the fact that $1$ is not a congruent number.
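A quick floating-point sanity check of the triangle used in the proof (the legs $\sqrt{2},\sqrt{2}$ and hypotenuse $2$ come from the argument above):

```python
import math

# Check that legs sqrt(2), sqrt(2) and hypotenuse 2 form a right triangle
# of area 1 (numerically, in floating point).
a = b = math.sqrt(2)
c = 2.0

assert math.isclose(a ** 2 + b ** 2, c ** 2)   # Pythagorean relation holds
area = a * b / 2                               # = 1, so 1 would be a congruent number
assert math.isclose(area, 1.0)
```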


Here is a proof of the basic fact that there are infinitely many prime numbers, using Euler products. Suppose that there are only finitely many primes. Then $$\frac{\pi^2}{6} = \zeta(2)=\prod_{p\in \Bbb P} \frac{1}{1-\frac{1}{p^2}}$$ is a finite product of rational numbers, hence rational. But then $\pi^2$ is rational, which is a contradiction.
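A numerical illustration (not part of the proof, which needs the exact identity): the Euler product over the primes up to a cutoff already approximates $\zeta(2)=\pi^2/6$ well.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Partial Euler product over all primes below 10000.
product = 1.0
for p in primes_up_to(10_000):
    product *= 1.0 / (1.0 - p ** -2)

print(product, math.pi ** 2 / 6)   # the two values agree to several digits
```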


The existence of Jordan normal form of a linear transformation is a corollary of the structure theorem for finitely generated modules over a PID.

Namely, suppose $k$ is an algebraically closed field and $V$ is a finite-dimensional vector space over $k$, equipped with a linear operator $T : V \to V$. This induces a $k[t]$-module structure on $V$, where $t$ acts as $T$. Since $k[t]$ is a PID, the module decomposes as $V \simeq k[t]^n \oplus \bigoplus_{i=1}^m k[t] / \langle p_i \rangle$, where each $p_i$ is a power of an irreducible element of $k[t]$.

Now, the fact that $V$ is finite-dimensional over $k$ implies that $n=0$. Furthermore, since $k$ is algebraically closed, each $p_i$ is of the form $(t - \lambda_i)^{d_i}$. The matrix of the action of $t$ on the module $k[t] / \langle (t-\lambda_i)^{d_i} \rangle$ with respect to the basis $\{ (t-\lambda_i)^{d_i-1}, \ldots, t-\lambda_i, 1 \}$ is exactly a Jordan block of size $d_i$ for the eigenvalue $\lambda_i$.
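The last step rests on the polynomial identity $t\cdot(t-\lambda)^k = (t-\lambda)^{k+1} + \lambda\,(t-\lambda)^k$, which (with $(t-\lambda)^d \equiv 0$ in the quotient) is exactly what puts a $\lambda$ on the diagonal and a $1$ on the superdiagonal. A small sketch checking this identity in exact rational arithmetic; the values $\lambda = 3$ and $d = 4$ are illustrative choices, not from the text:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply coefficient lists (p[i] = coefficient of t^i)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def power(p, n):
    out = [Fraction(1)]
    for _ in range(n):
        out = poly_mul(out, p)
    return out

lam, d = Fraction(3), 4
t = [Fraction(0), Fraction(1)]   # the polynomial t
lin = [-lam, Fraction(1)]        # the polynomial t - lam

for k in range(d):
    lhs = poly_mul(t, power(lin, k))               # t * (t - lam)^k
    rhs = poly_add(power(lin, k + 1),
                   [lam * c for c in power(lin, k)])  # (t-lam)^(k+1) + lam*(t-lam)^k
    assert lhs == rhs
print("identity verified for k = 0, ...,", d - 1)
```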


Over an algebraically closed field, the fact that the set of diagonalizable matrices is dense in the Zariski topology has a fascinating one-paragraph proof.

One notes that the set of diagonalizable matrices contains the matrices with distinct eigenvalues, which is exactly the set of matrices whose characteristic polynomial has nonvanishing discriminant. Since the discriminant is a polynomial in the matrix entries, this set is the complement of the zero set of a polynomial, hence Zariski open; and a nonempty Zariski-open subset of affine space is dense. Thus the set of diagonalizable matrices is Zariski dense.

This is very helpful for reducing many questions of linear algebra to the diagonalizable case: a polynomial identity in the matrix entries that holds on a Zariski-dense set holds for all matrices. The Cayley-Hamilton theorem, for instance, admits such a proof.
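A numerical illustration of the key step for $2\times 2$ matrices (the proof itself is purely algebraic): the discriminant of the characteristic polynomial $x^2 - (\operatorname{tr})x + \det$ is $\operatorname{tr}^2 - 4\det$, it equals $(\lambda_1-\lambda_2)^2$, and it vanishes exactly when the eigenvalues collide.

```python
import cmath
import random

def eigs2(a, b, c, d):
    """Eigenvalues and discriminant of [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    s = cmath.sqrt(disc)
    return (tr + s) / 2, (tr - s) / 2, disc

random.seed(0)
for _ in range(100):
    a, b, c, d = (random.randint(-5, 5) for _ in range(4))
    lam1, lam2, disc = eigs2(a, b, c, d)
    assert abs((lam1 - lam2) ** 2 - disc) < 1e-9   # disc = (lam1 - lam2)^2
    if disc != 0:
        assert lam1 != lam2   # nonvanishing discriminant => distinct eigenvalues
```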


To show that matrix multiplication is associative you can compute it directly, or you can regard each matrix as a function, namely the linear map given by left multiplication. Matrix multiplication then corresponds to composition of these maps, so associativity follows from the associativity of function composition; all that remains is to check that the correspondence is well-defined, i.e., that the product matrix really represents the composite map.
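A sketch of the idea: treat a matrix as the map $v \mapsto Av$, check that matrix multiplication computes composition, and inherit associativity from composition of functions (plain nested lists, illustrative only):

```python
def matvec(A, v):
    """Apply the linear map represented by A to the vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    # column j of A*B is A applied to column j of B
    cols = [matvec(A, [row[j] for row in B]) for j in range(len(B[0]))]
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(cols[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [5, 1]]

# multiplying then applying agrees with composing the two maps ...
v = [1, -1]
assert matvec(matmul(A, B), v) == matvec(A, matvec(B, v))

# ... so (AB)C = A(BC) follows from associativity of composition
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
```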


Here is a proof (paraphrased) of the division algorithm for polynomials that I found in my linear algebra textbook and thought was really nice.

Proposition: Let $p,s\in P(F)$ with $s\ne 0$. Then there exist unique polynomials $q,r\in P(F)$ such that $p=sq+r$ and $\deg r<\deg s$.

Proof: Let $\deg p =n$ and $\deg s=m$. If $n<m$, take $q=0$ and $r=p$. So we can assume that $n \ge m$. Define a linear map $T : P_{n-m}(F)\times P_{m-1}(F)\to P_n(F)$ by $$T(q, r)=sq+r$$

To show uniqueness it suffices to show that $\operatorname{null} T = \{0\}$. Suppose $(q, r)\in \operatorname{null} T$, i.e. $sq+r=0$. If $q\ne 0$, then $\deg (sq) \ge m > \deg r$, so $sq \ne -r$, a contradiction. Hence $q=0$, and then $r=-sq=0$. This shows that $\operatorname{null} T=\{0\}$, so $T$ is injective and $q$ and $r$ are unique.

To show existence, the fundamental theorem of linear maps gives $$\dim \text{range } T=\dim\bigl(P_{n-m}(F)\times P_{m-1}(F)\bigr) -\dim \text{null }T $$ Because $\dim \text{null } T=0$, we have $$\dim \text{range }T = \dim\bigl(P_{n-m}(F)\times P_{m-1}(F)\bigr)=(n-m+1)+(m-1+1)=n+1,$$ which equals $\dim P_n(F)$. Thus $\text{range } T=P_n(F)$, so $T$ is surjective and such $q$ and $r$ always exist.
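The proof above shows existence without producing $q$ and $r$; for comparison, here is the usual long-division algorithm, with a polynomial stored as a coefficient list (`p[i]` = coefficient of $x^i$) and exact rational arithmetic. The example polynomials are illustrative.

```python
from fractions import Fraction

def poly_mul(p, s):
    out = [Fraction(0)] * (len(p) + len(s) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(s):
            out[i + j] += a * b
    return out

def poly_divmod(p, s):
    """Return (q, r) with p = s*q + r and deg r < deg s."""
    p = [Fraction(c) for c in p]
    s = [Fraction(c) for c in s]
    q = [Fraction(0)] * max(len(p) - len(s) + 1, 1)
    r = p[:]
    while len(r) >= len(s) and any(r):
        shift = len(r) - len(s)
        coef = r[-1] / s[-1]          # cancel the leading term of r
        q[shift] = coef
        for i, c in enumerate(s):
            r[i + shift] -= coef * c
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return q, r

# p = x^3 + 2x + 1 divided by s = x^2 + 1 gives q = x, r = x + 1
p, s = [1, 2, 0, 1], [1, 0, 1]
q, r = poly_divmod(p, s)
assert q == [0, 1] and r == [1, 1]
assert len(r) < len(s)                # deg r < deg s

# check the statement p = s*q + r
sq = poly_mul(s, q)
rec = [a + (r[i] if i < len(r) else 0) for i, a in enumerate(sq)]
assert rec == p
```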