Proof of the Laplace formula using alternating tensors.


I am reading "Differential Forms with Applications to the Physical Sciences" by Harley Flanders. I came across the proof of the Laplace formula; some details were left to the reader, and I would like to know if my proof is correct.

Let $A=(a_{i,\,j})_{i,\,j=1,\dots,n}$ be an $n\times n$ matrix. For an ordered $p$-tuple $H=(h_{1},\dots,\,h_{p})$, set $$b_{H}=\begin{vmatrix} a_{1,\,h_{1}}& \dots & a_{1,\,h_{p}}\\ \vdots & & \vdots \\ a_{p,h_{1}} & \dots & a_{p,h_{p}} \end{vmatrix}.$$ Set $p+q=n.$ For an ordered $q$-tuple $K=(k_{1},\,\dots,\, k_{q}),$ set $$c_{K}=\begin{vmatrix} a_{p+1,\,k_{1}}& \dots & a_{p+1,\,k_{q}}\\ \vdots & & \vdots \\ a_{n,k_{1}} & \dots & a_{n,k_{q}} \end{vmatrix}.$$ I will write $H^{c}$ for the complementary ordered $q$-tuple of $H$. Set $\alpha_{i}=\sum_{j=1}^{n}a_{i,\,j}\,\sigma^{j},$ where $(\sigma^{j})$ is a basis of $\mathbb{R}^{n}.$ First we prove by induction that $$ \alpha_{1}\wedge\,\dots\,\wedge \alpha_{p}=\sum_{H} b_{H}\,\sigma^{H}$$ and $$\alpha_{p+1}\wedge\,\dots\, \wedge \alpha_{n}=\sum_{K}c_{K}\,\sigma^{K}.$$ The case $p=1$ is trivial. If $p>1$ we have (I am not really sure about the third equality) $$ \begin{align*} \alpha_{1}\wedge\,\dots\,\wedge \alpha_{p} &=\left(\sum_{j=1}^{n} a_{1,j}\, \sigma^{j}\right)\wedge \left( \alpha_{2}\,\wedge \dots \, \wedge \alpha_{p} \right)\\ &=\sum_{j=1}^{n}\,a_{1,j}\,\sigma^{j} \wedge \sum_{\bar{H}}b_{\bar{H}}\,\sigma^{\bar{H}}=\sum_{i=1}^{n}a_{1,h_{i}}\,\sigma^{h_{i}}\wedge b_{\hat{H}_{i}}\, \sigma^{\hat{H}_{i}}\\ &=\sum_{i=1}^{n} a_{1,h_{i}}\,b_{\hat{H}_{i}}\,(-1)^{h_{i}-1}\,\sigma^{H}=\sum_{H}\, b_{H}\,\sigma^{H}, \end{align*}$$ where the last equality holds by the definition of the determinant. Remark: the notation $\hat{H}_{i}$ means the ordered $(p-1)$-tuple obtained from $H$ by removing $h_{i}.$
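As a sanity check on the first expansion, I also verified it numerically. The sketch below (the `wedge` helper and the dictionary-of-index-tuples encoding of forms are my own ad hoc choices, not from Flanders) builds $\alpha_1\wedge\cdots\wedge\alpha_p$ one factor at a time using the sign rule $\sigma^j\wedge\sigma^H=(-1)^{\#\{h\in H:\,h<j\}}\,\sigma^{\mathrm{sorted}(j,H)}$, and checks that every coefficient is the corresponding $p\times p$ minor $b_H$:

```python
import itertools
import numpy as np

def wedge(alpha, form):
    """Left-wedge a 1-form alpha = {j: a_j} with a k-form stored as
    {sorted index tuple H: coefficient}, using the sign rule
    sigma^j ^ sigma^H = (-1)^{#{h in H : h < j}} sigma^{sorted(j, H)}."""
    result = {}
    for j, a in alpha.items():
        for H, c in form.items():
            if j in H:
                continue  # a repeated index kills the term
            sign = (-1) ** sum(1 for h in H if h < j)
            K = tuple(sorted((j,) + H))
            result[K] = result.get(K, 0.0) + sign * a * c
    return result

rng = np.random.default_rng(0)
n, p = 5, 3
A = rng.standard_normal((n, n))
# alpha_i as a 1-form {j: a_ij}  (0-based indices here)
alphas = [{j: A[i, j] for j in range(n)} for i in range(n)]

# build alpha_1 ^ ... ^ alpha_p from the right, as in the induction
form = {(j,): a for j, a in alphas[p - 1].items()}
for i in range(p - 2, -1, -1):
    form = wedge(alphas[i], form)

# every coefficient equals the p x p minor b_H of the first p rows
for H in itertools.combinations(range(n), p):
    b_H = np.linalg.det(A[np.ix_(range(p), H)])
    assert abs(form.get(H, 0.0) - b_H) < 1e-9
print("all coefficients match the minors b_H")
```

This is exactly the right-to-left inductive construction in the proof above, just with explicit bookkeeping of the sign produced by sorting $\sigma^j$ into place.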

The other formula $$\alpha_{p+1}\wedge\,\dots\, \wedge \alpha_{n}=\sum_{K}c_{K}\,\sigma^{K}$$ is proved similarly. So the Laplace formula follows readily: $$ \begin{align*} \det(A)\,\sigma^{1}\wedge \dots \wedge \sigma^{n}=\left(\alpha_{1}\wedge \dots \wedge \alpha_{p}\right)\wedge \left( \alpha_{p+1}\wedge \dots \wedge \alpha_{n} \right)=\sum_{H,K} b_{H}\,c_{K}\,\sigma^{H}\wedge \sigma^{K}. \end{align*} $$ But $$ \sigma^{H}\wedge \sigma^{K}=\begin{cases} 0 & K\neq H^{c}\\ \operatorname{sgn}(H,H^{c})\,\sigma^{1}\wedge\dots\wedge\sigma^{n} & K=H^{c}. \end{cases} $$ In conclusion, $$ \begin{align*} \det(A)\,\sigma^{1}\wedge \dots \wedge \sigma^{n}=\sum_{H,K} b_{H}\,c_{K}\,\sigma^{H}\wedge \sigma^{K}=\left( \sum_{H} \operatorname{sgn}(H,H^{c})\, b_{H}\,c_{H^{c}} \right) \sigma^{1}\wedge \dots \wedge \sigma^{n}. \end{align*} $$ Remark: the notation $\operatorname{sgn}(H,H^{c})$ stands for the sign of the permutation $$ \begin{pmatrix} 1 &2 & \cdots &p& p+1 & \cdots & n\\ h_{1} & h_{2} & \cdots & h_{p}& k_{1} & \cdots &k_{q} \end{pmatrix}. $$
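The final identity can also be checked by brute force. The sketch below (the function and helper names are mine) computes $\sum_{H}\operatorname{sgn}(H,H^{c})\,b_{H}\,c_{H^{c}}$ directly over all $\binom{n}{p}$ choices of $H$ and compares it with $\det A$:

```python
import itertools
import numpy as np

def parity(seq):
    """Sign of a permutation given in one-line notation,
    computed by counting inversions."""
    sign = 1
    s = list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def laplace_first_p_rows(A, p):
    """det(A) via the expansion  sum_H sgn(H, H^c) b_H c_{H^c}
    along the first p rows (0-based column indices)."""
    n = A.shape[0]
    total = 0.0
    for H in itertools.combinations(range(n), p):
        Hc = tuple(j for j in range(n) if j not in H)
        b_H = np.linalg.det(A[np.ix_(range(p), H)])       # rows 1..p, cols H
        c_Hc = np.linalg.det(A[np.ix_(range(p, n), Hc)])  # rows p+1..n, cols H^c
        total += parity(H + Hc) * b_H * c_Hc
    return total

A = np.random.default_rng(1).standard_normal((6, 6))
for p in range(1, 6):
    assert abs(laplace_first_p_rows(A, p) - np.linalg.det(A)) < 1e-8
print("Laplace expansion agrees with det for every p")
```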

Best answer:

First, it appears you are trying to prove the classical Laplace expansion formula for an $n\times n$ real matrix $A=(a_{ij})$ along the first $p$ rows, which can be written more explicitly as $$\det A=\sum_{h_1<\cdots<h_p}\mathrm{sgn}(h_1,\ldots,h_p)A_{1\cdots p}^{h_1\cdots h_p}A_{p+1\cdots n}^{h_{p+1}\cdots h_n}\tag{1}$$ where $h_{p+1}<\cdots<h_n$ is the complementary ordered $(n-p)$-tuple of $h_1<\cdots<h_p$, $$\mathrm{sgn}(h_1,\ldots,h_p)=(-1)^{\sum_{i=1}^p(h_i-i)}\tag{2}$$ and $A_{i_1\cdots i_k}^{j_1\cdots j_k}$ denotes the $k\times k$ minor determinant of $A$ from rows $i_1<\cdots<i_k$ and columns $j_1<\cdots<j_k$. (Note that (1) is equivalent to your final displayed equation, just in slightly more explicit notation; in particular, (2) is the sign of the permutation you describe.)
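For what it's worth, the closed form (2) can be confirmed against the inversion-count definition of a permutation's sign by a quick exhaustive script (the `parity` helper is my own):

```python
import itertools

def parity(seq):
    """Sign of a permutation given in one-line notation,
    computed by counting inversions."""
    sign = 1
    s = list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

# check (2) for every p and every increasing p-tuple H in {1,...,n}
n = 7
for p in range(1, n):
    for H in itertools.combinations(range(1, n + 1), p):
        Hc = tuple(j for j in range(1, n + 1) if j not in H)
        closed_form = (-1) ** sum(h - i for i, h in enumerate(H, start=1))
        assert closed_form == parity(H + Hc)
print("formula (2) matches the permutation sign for every H up to n = 7")
```

The reason it works: since both blocks are increasing, the only inversions in $(h_1,\ldots,h_p,k_1,\ldots,k_q)$ are pairs $(h_i,k_j)$ with $h_i>k_j$, and for each $h_i$ there are exactly $h_i-i$ such $k_j$.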

As in your attempted proof we can let $\sigma^1,\ldots,\sigma^n$ be a basis of $\mathbb{R}^n$ and $\alpha_i=\sum_{j=1}^n a_{ij}\sigma^j$.

Your attempted inductive proof of the expansion for $\alpha_1\wedge\cdots\wedge\alpha_p$ is problematic, even ignoring the typos. The right-hand side of your third equality doesn't make sense since $H$ (and hence $h_i$ and $\hat{H}_i$) is not defined there. However, it is true that $$\alpha_1\wedge\cdots\wedge\alpha_p=\sum_{h_1<\cdots<h_p}A_{1\cdots p}^{h_1\cdots h_p}\sigma^{h_1\cdots h_p}\tag{3}$$ where $\sigma^{h_1\cdots h_p}=\sigma^{h_1}\wedge\cdots\wedge\sigma^{h_p}$, and this can be proved inductively using the Laplace cofactor expansion along a single row of a matrix (a simpler result which can be proved independently and which I assume you're familiar with).
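That single-row cofactor expansion is easy to state in code; here is a minimal recursive sketch (the function name is mine), checked against NumPy's determinant:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via Laplace cofactor expansion along the first row:
    det A = sum_j (-1)^(1+j) a_{1j} det(M_{1j}),
    where M_{1j} deletes row 1 and column j."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.random.default_rng(2).standard_normal((5, 5))
assert abs(det_cofactor(A) - np.linalg.det(A)) < 1e-9
print("cofactor expansion agrees with np.linalg.det")
```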

As an illustrative example, if $n=p=3$, then $$\alpha_2\wedge\alpha_3=\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}\sigma^{12}+\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}\sigma^{13}+\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}\sigma^{23}$$ so it follows that $$\begin{align*} \alpha_1\wedge\alpha_2\wedge\alpha_3&=\Biggl(a_{11}\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}-a_{12}\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}+a_{13}\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}\Biggr)\sigma^{123}\\ &=\begin{vmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{vmatrix}\sigma^{123} \end{align*}$$ I trust you can generalize the reasoning here.

The rest of your proof looks right apart from typos.