Confusing moment in Theorem 10.27 from PMA Rudin


Theorem 10.27 If $\sigma$ is an oriented rectilinear $k$-simplex in an open set $E\subset \mathbb{R}^n$ then $$\int \limits_{\overline{\sigma}}\omega=\varepsilon\int \limits_{\sigma}\omega \qquad (81)$$ for every $k$-form $\omega$ in $E$.

Proof: If $k=0$, then a $0$-form is just a continuous function, say $f$. The RHS of $(81)$ is $\varepsilon\int \limits_{\sigma}f=\varepsilon^2f(\mathbf{p}_0)=f(\mathbf{p}_0)$, while the LHS of $(81)$ is $\int \limits_{\overline{\sigma}}f=f(\mathbf{p}_0)$, since $\overline\sigma=\varepsilon \sigma=\varepsilon^2\mathbf{p}_0=\mathbf{p}_0$. (Here I just used the remark before Theorem 10.27.)

Now assume that $k\geqslant 1$ and that $\sigma$ is given by $(75)$. Suppose $0<i<j\leqslant k$ and that $\overline\sigma$ is obtained from $\sigma$ by interchanging $\mathbf{p}_i$ and $\mathbf{p}_j$; in this case $\varepsilon=-1$. WLOG we may assume that $i=1$ and $j=2$.

Also by $(78)$ we have: $\sigma(\mathbf{u})=\mathbf{p}_0+A\mathbf{u}$ where $A\in L(\mathbb{R}^k,\mathbb{R}^n)$ and $A\mathbf{e}_i=\mathbf{p}_i-\mathbf{p}_0$ for $1\leqslant i\leqslant k$.

Also $\overline\sigma(\mathbf{u})=\mathbf{p}_0+C\mathbf{u}$, where $C$ has the same columns as $A$, except that the first and second columns have been interchanged.
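To make the linear-algebra bookkeeping concrete, here is a quick numerical sketch (my own illustration, not from the book), with hypothetical vertices of a $2$-simplex in $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical vertices of a 2-simplex in R^2 (an illustrative choice, not from the book).
p0, p1, p2 = np.array([0., 0.]), np.array([2., 0.]), np.array([1., 3.])

# Rudin's (78): A has columns p_i - p_0; C is A with its first two columns interchanged.
A = np.column_stack([p1 - p0, p2 - p0])
C = np.column_stack([p2 - p0, p1 - p0])

sigma    = lambda u: p0 + A @ u      # sigma(u)    = p_0 + A u
sigmabar = lambda u: p0 + C @ u      # sigmabar(u) = p_0 + C u

# Interchanging two columns flips the sign of the determinant.
assert np.isclose(np.linalg.det(C), -np.linalg.det(A))

# sigmabar(u_1, u_2) = sigma(u_2, u_1), i.e. sigma composed with the flip B.
u = np.array([0.3, 0.5])
assert np.allclose(sigmabar(u), sigma(u[::-1]))
```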

Note that Rudin omitted a rigorous proof that $\int \limits_{\overline \sigma}\omega=\varepsilon \int \limits_{\sigma}\omega.$

Suppose that $\omega=f(\mathbf{x})dx_{i_1}\land \dots\land dx_{i_k}$ (we may consider this special case, since the general case follows by linearity).

$$\int \limits_{\overline \sigma}\omega=\int \limits_{Q^k}f(\overline\sigma(\mathbf{u}))\dfrac{\partial(\overline\sigma_{i_1},\dots,\overline\sigma_{i_k})}{\partial(u_1,\dots,u_k)}d\mathbf{u}.$$ It is easy to see that $\overline\sigma(\mathbf{u})=\sigma(B\mathbf{u})$, where $B$ is the flip operator which interchanges the first and second coordinates (Definition 10.6). Note that $$\dfrac{\partial(\overline\sigma_{i_1},\dots,\overline\sigma_{i_k})}{\partial(u_1,\dots,u_k)}=-\dfrac{\partial(\sigma_{i_1},\dots,\sigma_{i_k})}{\partial(u_1,\dots,u_k)}.$$ Hence $$\int \limits_{\overline \sigma}\omega=-\int \limits_{Q^k}f(\sigma(B\mathbf{u}))\dfrac{\partial(\sigma_{i_1},\dots,\sigma_{i_k})}{\partial(u_1,\dots,u_k)}d\mathbf{u}=-\int \limits_{Q^k}f(\sigma(u_2,u_1,\dots,u_k))\dfrac{\partial(\sigma_{i_1},\dots,\sigma_{i_k})}{\partial(u_1,\dots,u_k)}d\mathbf{u}=$$$$=\int \limits_{Q^k}f(\sigma(u_2,u_1,\dots,u_k))\dfrac{\partial(\sigma_{i_1},\dots,\sigma_{i_k})}{\partial(u_2,u_1,\dots,u_k)}d\mathbf{u}.$$ But the last integral is $\int \limits_{\sigma}\omega$ (here I implicitly used Example 10.4 from PMA Rudin). Hence we get $\int \limits_{\overline\sigma}\omega=\int \limits_{\sigma}\omega$, but, as noted above, the correct sign must be $\varepsilon=-1$.
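Before hunting for the error symbolically, it may help to check numerically which sign is actually correct. The following sketch (my own, with an arbitrarily chosen simplex and integrand, approximating the simplex integrals by a midpoint grid over $Q^2$) confirms $\int_{\overline\sigma}\omega=-\int_\sigma\omega$, so the chain above must contain a sign slip:

```python
import numpy as np

# Concrete 2-simplex (an illustrative choice): sigma = [p0, p1, p2], sigmabar = [p0, p2, p1].
p0, p1, p2 = np.array([0., 0.]), np.array([2., 0.]), np.array([1., 3.])
A = np.column_stack([p1 - p0, p2 - p0])   # columns of sigma
C = np.column_stack([p2 - p0, p1 - p0])   # first two columns interchanged

f = lambda x: x[0]                        # omega = x_1 dx_1 ∧ dx_2

def simplex_integral(M):
    """Approximate ∫_{Q^2} f(p0 + M u) du by a midpoint grid over the standard simplex."""
    N = 800
    c = (np.arange(N) + 0.5) / N
    U1, U2 = np.meshgrid(c, c)
    mask = U1 + U2 <= 1.0                 # keep midpoints inside Q^2
    pts = p0[:, None] + M @ np.vstack([U1[mask], U2[mask]])
    return np.sum(f(pts)) / N**2

# Constant Jacobians: det(A) and det(C); here det(A) = 6, det(C) = -6.
I     = np.linalg.det(A) * simplex_integral(A)   # ∫_sigma omega    (≈ 3)
I_bar = np.linalg.det(C) * simplex_integral(C)   # ∫_sigmabar omega (≈ -3)
assert np.isclose(I_bar, -I, rtol=1e-2)
```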

Where did I make a mistake? Can you please help me with this? I have spent more than a day on it, but I still cannot find the error.

Remark: Note that there is also the case where we interchange the vertices $\mathbf{p}_0$ and $\mathbf{p}_j$ for some $0<j\leqslant k$.

EDIT: Let's consider the second case. Suppose that $0<i\leqslant k$ and $\overline \sigma$ is obtained from $\sigma=[\mathbf{p}_0,\mathbf{p}_1,\mathbf{p}_2,\dots,\mathbf{p}_k]$ by interchanging $\mathbf{p}_i$ and $\mathbf{p}_0$. WLOG we assume that $i=1$, and hence $\overline \sigma=[\mathbf{p}_1,\mathbf{p}_0,\mathbf{p}_2,\dots,\mathbf{p}_k]$.

It is easy to check that $\overline \sigma(\mathbf{u})=\overline \sigma(u_1,u_2,\dots,u_k)=\sigma(1-\sum \limits_{l=1}^{k}u_l,u_2,\dots, u_k)=\sigma(G\mathbf{u})$, where $G$ is a primitive mapping (see Definition 10.5) with $m=1$ and $g(\mathbf{u})=1-\sum \limits_{l=1}^{k}u_l$.
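This identity can be sanity-checked numerically; a sketch with hypothetical random vertices ($k=3$, $n=4$, values chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 4
# Hypothetical vertices p_0, ..., p_k in R^n (an illustrative assumption).
p = rng.standard_normal((k + 1, n))

def affine_simplex(verts):
    """Rudin's (78): sigma(u) = p_0 + A u with A e_i = p_i - p_0."""
    A = np.column_stack([v - verts[0] for v in verts[1:]])
    return lambda u: verts[0] + A @ u

sigma    = affine_simplex(p)
sigmabar = affine_simplex(p[[1, 0, 2, 3]])   # interchange p_0 and p_1

# Primitive mapping G: only the first coordinate changes, g(u) = 1 - sum(u_l).
G = lambda u: np.concatenate(([1 - u.sum()], u[1:]))

u = np.array([0.2, 0.3, 0.1])                # a point of Q^3
assert np.allclose(sigmabar(u), sigma(G(u)))
```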

We have to show that $\int \limits_{Q^k}f(\sigma(\mathbf{u}))d\mathbf{u}=\int \limits_{Q^k}f(\overline\sigma(\mathbf{u}))d\mathbf{u}$.

Let $f_{k-1}(u_2,\dots,u_k)=\int \limits_{0}^{1}f(\sigma(\mathbf{u}))du_1$ and $\overline f_{k-1}(u_2,\dots,u_k)=\int \limits_{0}^{1}f(\overline\sigma(\mathbf{u}))du_1$. If we prove that $f_{k-1}(u_2,\dots,u_k)=\overline f_{k-1}(u_2,\dots,u_k)$, then the general claim follows immediately (together with Example 10.4 from Rudin).

If $u_2+\dots+u_k>1$, then $\mathbf{u}\notin Q^k$ and $f_{k-1}=\overline f_{k-1}$, since both integrands are zero.

If $S=u_2+\dots+u_k\leqslant 1$, then for $u_1>1-S$ the integrand is zero, hence $\int \limits_{0}^{1}=\int \limits_{0}^{1-S}$.

Since $f_{k-1}(u_2,\dots,u_k)=\int \limits_{0}^{1}f(\sigma(\mathbf{u}))du_1=\int \limits_{0}^{1-S}f(\sigma(u_1,u_2,\dots,u_k))du_1$, if we regard the integrand as a function of the variable $u_1$ and apply the substitution $g(u_1)=1-u_1-\sum \limits_{l=2}^{k}u_l$ (note that $g(0)=1-S$ and $g(1-S)=0$), we get $\int \limits_{0}^{1-S}f(\sigma(\mathbf{u}))du_1=\int \limits_{0}^{1-S}f(\overline\sigma(\mathbf{u}))du_1$, so $\int \limits_{0}^{1}f(\sigma(\mathbf{u}))du_1=\int \limits_{0}^{1}f(\overline\sigma(\mathbf{u}))du_1$, and integrating over the remaining variables we get $\int \limits_{Q^k}f(\sigma(\mathbf{u}))d\mathbf{u}=\int \limits_{Q^k}f(\overline\sigma(\mathbf{u}))d\mathbf{u}$.
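The substitution step can be checked numerically for a concrete $2$-simplex (my own illustrative choice of vertices and integrand): the midpoint sums over $[0,1-S]$ agree for $\sigma$ and $\overline\sigma$ with the remaining coordinate fixed.

```python
import numpy as np

# Concrete 2-simplex in R^2 (an illustrative choice, not from the book).
p0, p1, p2 = np.array([0., 0.]), np.array([2., 0.]), np.array([1., 3.])
A = np.column_stack([p1 - p0, p2 - p0])
sigma    = lambda u1, u2: p0 + A @ np.array([u1, u2])
sigmabar = lambda u1, u2: sigma(1 - u1 - u2, u2)     # the p_0 <-> p_1 swap, via G

f = lambda x: np.sin(x[0]) + x[1] ** 2               # any continuous test integrand

u2 = 0.4                                             # fix the remaining coordinate; S = u2
S = u2
N = 1000
t = (np.arange(N) + 0.5) / N * (1 - S)               # midpoint nodes on [0, 1 - S]
h = (1 - S) / N
I     = h * sum(f(sigma(u1, u2))    for u1 in t)     # ∫_0^{1-S} f(sigma(u1, u2)) du1
I_bar = h * sum(f(sigmabar(u1, u2)) for u1 in t)     # ∫_0^{1-S} f(sigmabar(u1, u2)) du1
assert abs(I - I_bar) < 1e-6
```

The agreement is exact up to rounding here, because the substitution $u_1\mapsto 1-u_1-S$ maps the midpoint nodes on $[0,1-S]$ onto themselves.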

Since the Jacobians do not depend on $\mathbf{u}$, and the reasoning in the book shows that $$\det\begin{bmatrix} D_1\sigma_{i_1} & D_2\sigma_{i_1} & \cdots & D_k\sigma_{i_1} \\ D_1\sigma_{i_2} & D_2\sigma_{i_2} & \cdots & D_k\sigma_{i_2} \\ \vdots & \vdots & \ddots & \vdots \\ D_1\sigma_{i_k} & D_2\sigma_{i_k} & \cdots & D_k\sigma_{i_k} \end{bmatrix}=-\det\begin{bmatrix} D_1\overline\sigma_{i_1} & D_2\overline\sigma_{i_1} & \cdots & D_k\overline\sigma_{i_1} \\ D_1\overline\sigma_{i_2} & D_2\overline\sigma_{i_2} & \cdots & D_k\overline\sigma_{i_2} \\ \vdots & \vdots & \ddots & \vdots \\ D_1\overline\sigma_{i_k} & D_2\overline\sigma_{i_k} & \cdots & D_k\overline\sigma_{i_k} \end{bmatrix},$$ we obtain the main result: $$\int \limits_{\overline\sigma}\omega=-\int \limits_{\sigma}\omega.$$
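A quick numerical check of this determinant identity for the $\mathbf{p}_0\leftrightarrow\mathbf{p}_1$ case (hypothetical random vertices, my own sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n = k = 3
p = rng.standard_normal((k + 1, n))   # hypothetical vertices p_0, ..., p_3 in R^3

# sigma = [p0, p1, p2, p3]: columns p_i - p_0.
A = np.column_stack([p[i] - p[0] for i in range(1, k + 1)])
# sigmabar = [p1, p0, p2, p3]: columns p_0 - p_1, p_2 - p_1, p_3 - p_1.
C = np.column_stack([p[0] - p[1], p[2] - p[1], p[3] - p[1]])

# Swapping p_0 with p_1 also flips the sign of the (constant) Jacobian.
assert np.isclose(np.linalg.det(C), -np.linalg.det(A))
```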

On BEST ANSWER

The mistake is subtle. While - for sufficiently regular $g$ - we have

$$\int_{Q^k} g(u_1,u_2,u_3,\dotsc, u_k)\,d\mathbf{u} = \int_{Q^k} g(u_2, u_1,u_3,\dotsc,u_k)\,d\mathbf{u},\tag{$\ast$}$$

the $u_j$ in the "denominator" of $\dfrac{\partial (\sigma_{i_1},\dotsc, \sigma_{i_k})}{\partial (u_1,\dotsc, u_k)}$ are not arguments of the integrand; they just denote the order of the columns. Maybe a different notation would have prevented the mistake. If we denote the partial derivatives by $\frac{\partial}{\partial x_j}$, and still call the argument $\mathbf{u}$, we get

\begin{align} \int_{\sigma} \omega &= \int_{Q^k} f(\sigma(u_1,\dotsc,u_k))\cdot \frac{\partial (\sigma_{i_1},\dotsc,\sigma_{i_k})}{\partial (x_1,\dotsc,x_k)}(u_1,\dotsc,u_k)\,d\mathbf{u}\\ &= \int_{Q^k} f(\sigma(u_2,u_1,u_3,\dotsc,u_k))\cdot \frac{\partial (\sigma_{i_1},\dotsc,\sigma_{i_k})}{\partial (x_1,\dotsc,x_k)}(u_2,u_1,u_3,\dotsc,u_k)\,d\mathbf{u}\\ &= -\int_{Q^k} f(\sigma(u_2,u_1,u_3,\dotsc,u_k))\cdot \frac{\partial (\sigma_{i_1},\dotsc,\sigma_{i_k})}{\partial (x_2, x_1,x_3,\dotsc,x_k)}(u_2,u_1,u_3,\dotsc,u_k)\,d\mathbf{u}, \end{align}

and since the determinant is constant, the latter is

$$-\int_{Q^k} f(\sigma(u_2,u_1,u_3,\dotsc,u_k))\cdot \frac{\partial (\sigma_{i_1},\dotsc,\sigma_{i_k})}{\partial (x_2, x_1,x_3,\dotsc,x_k)}(u_1,\dotsc,u_k)\,d\mathbf{u} = -\int_{\overline{\sigma}} \omega,$$

as it should be.
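A small numerical illustration of $(\ast)$ (my own sketch; since the midpoint grid over $Q^2$ is symmetric under swapping the two coordinates, the two approximations agree exactly up to rounding):

```python
import numpy as np

# Midpoint-grid check of (*) over the standard simplex Q^2 = {u1, u2 >= 0, u1 + u2 <= 1}.
g = lambda u1, u2: np.exp(u1) * u2            # any integrable test function (an arbitrary choice)
N = 600
c = (np.arange(N) + 0.5) / N
U1, U2 = np.meshgrid(c, c)
inside = U1 + U2 <= 1.0                        # symmetric under swapping U1 and U2
I1 = np.sum(g(U1[inside], U2[inside])) / N**2  # ≈ ∫_{Q^2} g(u1, u2) du
I2 = np.sum(g(U2[inside], U1[inside])) / N**2  # ≈ ∫_{Q^2} g(u2, u1) du
assert abs(I1 - I2) < 1e-6
```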

If we use the constancy of the determinant and write $A_I$ for the matrix consisting of the rows $i_1,\dotsc, i_k$ of $A$, and analogously for $C$, then we have

\begin{align} \int_{\sigma} \omega &= \det A_I \int_{Q^k} f(\sigma(u_1,\dotsc,u_k))\,d\mathbf{u}\\ \int_{\overline{\sigma}} \omega &= \det C_I \int_{Q^k} f(\sigma(u_2,u_1,u_3,\dotsc,u_k))\,d\mathbf{u} \end{align}

and the assertion follows from $(\ast)$ and $\det C_I = -\det A_I$.
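The last identity, $\det C_I = -\det A_I$, is easy to verify numerically for a hypothetical $A$ (an arbitrary random $n\times k$ matrix standing in for the columns $\mathbf{p}_i-\mathbf{p}_0$; note the 0-based row indices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2
A = rng.standard_normal((n, k))   # hypothetical columns p_i - p_0 in R^n
C = A[:, [1, 0]]                  # first two columns interchanged

I = [0, 2]                        # rows i_1, i_2 picked by dx_{i_1} ∧ dx_{i_2} (0-based)
A_I, C_I = A[I, :], C[I, :]

# Swapping columns of A swaps the same columns of every row submatrix,
# so each k x k minor changes sign.
assert np.isclose(np.linalg.det(C_I), -np.linalg.det(A_I))
```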