Transformation behavior of a connection on a vector bundle


Using the notation from Jost's various books on geometry, let $$ D=d+A $$ be a connection on a vector bundle $\pi:E\rightarrow M$ with structure group $GL(n,\mathbb{R})$. Also let $\{U_\alpha\}$ be an open covering of $M$ that yields local trivialisations with transition maps $$ \varphi_{\alpha\beta}:U_\alpha\cap U_\beta\rightarrow GL(n,\mathbb{R}). $$ Then $D$ defines a $T^*M$-valued matrix $A_\alpha$ on $U_\alpha$. Let a section $s$ be given locally on $U_\alpha$ by $s_\alpha=s^i_\alpha\mu_i$, where $\{\mu_1,\dots,\mu_n\}$ is a frame for $E|_{U_\alpha}=\pi^{-1}(U_\alpha)$.

Question I: Why does it hold that $$ s_\beta=\varphi_{\beta\alpha}s_\alpha\qquad\text{on $U_\alpha\cap U_\beta$}? $$

Question II: Why does it follow that $$ \varphi_{\beta\alpha}(d+A_\alpha)s_\alpha=(d+A_\beta)s_\beta\qquad\text{on $U_\alpha\cap U_\beta$}? $$ (He does give an "indication" of how this holds, but I don't see what he means.)

Question III: How do we then conclude that $$ A_\alpha=\varphi_{\beta\alpha}^{-1}d\varphi_{\beta\alpha}+\varphi_{\beta\alpha}^{-1}A_\beta\varphi_{\beta\alpha}? $$

Remark: Jost states this in each of his books on geometry, but I have never been able to find an elaboration. I would also be grateful for some other references that explain in more detail what is going on here.

Accepted answer:

The following is taken more or less directly from section 4.1 of Jost's Riemannian Geometry and Geometric Analysis. (Where he uses $\mu$, I will use $s$ to match the OP's notation.)

I think you may be getting confused when you try to introduce the frame $\{ \mu_i \}$. I don't think there's any need to mention frames here.

Consider a section $s$ of $E$. On each $U_\alpha \subset M$ over which $E$ is trivial, we can represent $s$ by a local section $s_\alpha$. This means that $s_\alpha$ is a map $U_\alpha \to \mathbb{R}^n$, i.e., a vector-valued function on $U_\alpha$. $\varphi_{\beta \alpha}$ is a map $U_\alpha \cap U_\beta \to GL(n, \mathbb{R})$, i.e., for each point $p \in U_\alpha \cap U_\beta$, $\varphi_{ \beta \alpha}(p)$ is an invertible linear map $\mathbb{R}^n \to \mathbb{R}^n$. $s_\alpha$ and $s_\beta$ are related, for $p \in U_\alpha \cap U_\beta$, by $s_\beta (p) = [\varphi_{\beta \alpha}(p)] ( s_\alpha(p))$ (to answer your Question I, this is essentially just the definition of the transition map $\varphi_{\beta \alpha}$). (I will drop the reference to the point $p$ from now on.)

Now, on to the connection $D$. As Jost discusses and as you mention, $D$ defines locally on $U_{\alpha}$ a matrix $A_\alpha$ with one-form entries. Again, we use the local trivialization over $U_\alpha$ to view our section $s$ locally as a map $s_\alpha: U_\alpha \to \mathbb{R}^n$. $D s$ is a section of $E \otimes T^\ast M$, meaning locally $Ds$ is a sum of terms of the form $\sigma \otimes \omega$, where $\sigma$ is a section of $E$ and $\omega$ is a one-form. If we write the "$E$-piece" of $Ds$ locally in terms of the local trivialization of $E$, we can view $Ds$ locally as a vector with one-form entries that I'll call $(Ds)_\alpha$ (my notation, not Jost's). As Jost discusses, $(Ds)_\alpha = ds_\alpha + A_\alpha s_\alpha$, where $d$ is the usual exterior derivative acting componentwise on the entries of the vector-valued function $s_\alpha$, and $A_\alpha$ acts on $s_\alpha$ by matrix multiplication to give a vector with one-form entries.
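As a concrete illustration (my own, not Jost's, using hypothetical data over a single coordinate $x$, so that a one-form $f(x)\,dx$ is represented by its coefficient $f(x)$ and $d$ acts as $d/dx$), the local formula $(Ds)_\alpha = ds_\alpha + A_\alpha s_\alpha$ can be computed symbolically:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical connection matrix A_alpha on U_alpha: a 2x2 matrix whose
# entries are coefficients of one-forms f(x) dx.
A_alpha = sp.Matrix([[0, x],
                     [-x, 0]])

# Hypothetical local section s_alpha : U_alpha -> R^2.
s_alpha = sp.Matrix([sp.sin(x), sp.cos(x)])

# (Ds)_alpha = d s_alpha + A_alpha s_alpha: the exterior derivative acts
# componentwise on s_alpha, and A_alpha acts by matrix multiplication,
# giving a vector of one-form coefficients.
Ds_alpha = s_alpha.diff(x) + A_alpha * s_alpha
print(sp.simplify(Ds_alpha))
```

Each entry of `Ds_alpha` is the coefficient of $dx$ in the corresponding component of $(Ds)_\alpha$.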

Now to your Question II. On $U_\alpha \cap U_\beta$, we can write $Ds$ as a vector with one-form entries in two different ways corresponding to the two local trivializations: $(Ds)_\alpha$ and $(Ds)_\beta$. These two ways should be compatible in the sense that applying $\varphi_{\beta \alpha}$ to $(Ds)_\alpha$ should give us $(Ds)_\beta$, and this is where the equation you wrote comes from: \begin{align*} \varphi_{\beta \alpha} ((Ds)_\alpha) &= (Ds)_\beta, \text{ i.e.,} \\ \varphi_{\beta \alpha} ((d+ A_\alpha) s_\alpha) &= (d+ A_\beta) s_\beta \end{align*}

Finally, we substitute the fact that $s_\beta = \varphi_{\beta \alpha} s_\alpha$ into the above equation to solve for $A_\alpha$ in terms of $A_\beta$. This answers your Question III: \begin{align*} \varphi_{\beta \alpha} ((d+ A_\alpha) s_\alpha) &= (d+ A_\beta) (\varphi_{\beta \alpha} s_\alpha) \\ \varphi_{\beta \alpha} (d s_\alpha) + \varphi_{\beta \alpha} (A_\alpha s_\alpha) &= (d\varphi_{\beta \alpha})s_\alpha + \varphi_{\beta \alpha} (ds_\alpha) + A_\beta \varphi_{\beta \alpha} s_\alpha \text{ (using Leibniz rule for $d$)}\\ \varphi_{\beta \alpha} (A_\alpha s_\alpha) &= (d\varphi_{\beta \alpha})s_\alpha + A_\beta \varphi_{\beta \alpha} s_\alpha \\ A_\alpha s_\alpha &= \varphi_{\beta \alpha}^{-1} (d\varphi_{\beta \alpha} + A_\beta \varphi_{\beta \alpha} ) s_\alpha \\ \end{align*} This holds for every $s_\alpha$ (the section $s$ was arbitrary), so we have the following equality of matrices with one-form entries on $U_\alpha \cap U_\beta$: $$ A_\alpha = \varphi_{\beta \alpha}^{-1} (d\varphi_{\beta \alpha} + A_\beta \varphi_{\beta \alpha} ) $$
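The whole derivation can also be checked symbolically. The sketch below (my own, with a hypothetical transition map `phi` and connection matrix `A_beta`, again over a single coordinate $x$ so that $d = d/dx$ on one-form coefficients) defines $A_\alpha$ by the transformation law just derived and verifies that $\varphi_{\beta\alpha}(d+A_\alpha)s_\alpha = (d+A_\beta)s_\beta$ holds identically:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical transition map U_alpha ∩ U_beta -> GL(2, R);
# det = exp(x) != 0, so it is invertible at every point.
phi = sp.Matrix([[sp.exp(x), x],
                 [0, 1]])

# Hypothetical connection matrix A_beta (one-form coefficients).
A_beta = sp.Matrix([[sp.sin(x), x**2],
                    [1, sp.cos(x)]])

# The transformation law derived above:
# A_alpha = phi^{-1} d(phi) + phi^{-1} A_beta phi.
A_alpha = phi.inv() * phi.diff(x) + phi.inv() * A_beta * phi

# An arbitrary local section s_alpha, and its other representation
# s_beta = phi s_alpha (Question I).
s_alpha = sp.Matrix([sp.cos(x), x**3])
s_beta = phi * s_alpha

lhs = phi * (s_alpha.diff(x) + A_alpha * s_alpha)  # phi (d + A_alpha) s_alpha
rhs = s_beta.diff(x) + A_beta * s_beta             # (d + A_beta) s_beta

# The compatibility condition of Question II holds identically in x.
assert sp.simplify(lhs - rhs) == sp.zeros(2, 1)
print("transformation law verified")
```

The check succeeds for any choice of `s_alpha`, mirroring the "this holds for every $s_\alpha$" step above.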