If $S,T : V \to V$ are linear maps such that $\text{Im}\, S \subseteq \text{Im}\,T$, find $R : V \to V$ such that $S = T \circ R$.


I've been doing some problems in a Linear Algebra book. As in the title:

If $S,T : V \to V$ are linear maps such that $\text{Im}\, S \subseteq \text{Im}\, T$, find a linear map $R : V \to V$ such that $S = T \circ R$.

I was unable to prove this. I've thought about it for a while. I think $R$ should take $v \in V$ to some $R(v) \in T^{-1}(S(v))$; this preimage is nonempty because $\text{Im}\, S \subseteq \text{Im}\, T$. But the preimage may contain more than one element. I could define $R(v)$ to be just one of the elements of $T^{-1}(S(v))$, but such a pointwise choice does not immediately make $R$ a linear map.

I started to think that the problem might be missing a hypothesis about the injectivity of $T$. Does anyone know how to construct $R$? Any help would be appreciated. Thank you.

Four answers are given below.

Accepted answer:

Decompose $V=\ker(T) \oplus W$. The restriction $T|_W$ is injective and still has image $\mathrm{im}(T)$, so $T|_W : W \to \mathrm{im}(T)$ is an isomorphism; its inverse $Q:\mathrm{im}(T) \to W$ satisfies $T\circ Q=\mathrm{Id}_{\mathrm{im}(T)}$. Since $\mathrm{im}(S)\subseteq\mathrm{im}(T)$, the composite $R=Q\circ S : V \to V$ is defined, and $T\circ R = (T\circ Q)\circ S = S$.
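As a numerical sanity check (not part of the original answer), the Moore-Penrose pseudoinverse can play the role of $Q$: `np.linalg.pinv(T)` inverts $T$ on the orthogonal complement of $\ker(T)$, which is one particular choice of $W$. The matrices below are made-up illustrations, chosen so that $\mathrm{im}(S)\subseteq\mathrm{im}(T)$ holds.

```python
import numpy as np

# Hypothetical example in V = R^3 with im(S) ⊆ im(T).
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])   # im(T) = span(e1, e2), ker(T) = span(e3)
S = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [0., 0., 0.]])   # third row zero, so im(S) ⊆ im(T)

# pinv(T) plays the role of Q: it sends each element of im(T) back to a
# preimage in the orthogonal complement of ker(T), i.e. a choice of W.
R = np.linalg.pinv(T) @ S

assert np.allclose(T @ R, S)   # T ∘ R = S
```

Any other complement $W$ of $\ker(T)$ would give a different but equally valid $R$; the pseudoinverse is just a computationally convenient choice.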

Answer:

I will prove a more general result. Let $U$, $V$, and $W$ be vector spaces. If linear maps $S:U\to W$ and $T: V\to W$ satisfy $\operatorname{im}S\subseteq \operatorname{im}T$, then there exists a linear map $R:U\to V$ such that $S=T\circ R$.

Fix a basis $\big\{z_\alpha:\alpha\in I\big\}$ of $Z=\operatorname{im}S$. Since $Z\subseteq \operatorname{im} T$, for each $\alpha\in I$ there exists $y_\alpha \in V$ such that $T(y_\alpha)=z_\alpha$. Note that the elements $y_\alpha$ of $V$ are linearly independent, since their images $z_\alpha$ under $T$ are.

Now, for each $\alpha\in I$, let $X_\alpha\subseteq U$ be the inverse image of $z_\alpha$ under $S$. Let $N$ be a basis of the kernel of $S$. Observe that $N\cup X$, where $X= \bigcup_{\alpha \in I}X_\alpha$, spans $U$. Therefore, we can pick a set $B\subseteq X$ such that $N\cup B$ is a basis of $U$. Let $B_\alpha$ denote $X_\alpha \cap B$. Define $R:U\to V$ by linearly extending $$R(b)=y_\alpha$$ for all $\alpha \in I$ and $b\in B_\alpha$, and $$R(s)=0$$ for all $s\in N$. (This is well defined: each $b\in B$ lies in exactly one $X_\alpha$, since $S(b)$ determines $\alpha$.)

Why does $N\cup X$ span $U$? Write $Z_\alpha\subseteq Z$ for the span of $z_\alpha$, so that $Z=\bigoplus_{\alpha \in I}Z_\alpha$, and let $U_\alpha=S^{-1}(Z_\alpha)$. We claim that each $u\in U$ lies in $\sum_{\alpha\in I}U_\alpha$. Because $S(u)\in Z$, $S(u)$ lies in the direct sum of finitely many $Z_\alpha$, say $Z_{\alpha_1}\oplus Z_{\alpha_2}\oplus\ldots \oplus Z_{\alpha_n}$. Then, $$S(u)=S(u_{\alpha_1})+S(u_{\alpha_2})+\ldots+S(u_{\alpha_n})$$ for some $u_{\alpha_j}\in U_{\alpha_j}$. That is, $$S\left(u-\sum_{j=1}^nu_{\alpha_j}\right)=0\,,$$ so $u-\sum_{j=1}^nu_{\alpha_j}\in \ker S$, and the claim follows since $\ker S\subseteq U_\alpha$ for every $\alpha$. Finally, every element of $U_\alpha$ with nonzero image under $S$ is a scalar multiple of an element of $X_\alpha$, so each $U_\alpha$ lies in the span of $N\cup X_\alpha$; hence $N\cup X$ spans $U$.
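The more general statement, with three different spaces, can also be illustrated numerically (hypothetical matrices, not from the answer): here $U=\mathbb{R}^2$, $V=\mathbb{R}^4$, $W=\mathbb{R}^3$, and $T$ is chosen surjective so that $\operatorname{im}S\subseteq\operatorname{im}T$ holds automatically.

```python
import numpy as np

# S : R^2 -> R^3 and T : R^4 -> R^3, with T surjective,
# so im(S) ⊆ im(T) = R^3 automatically.
S = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
T = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])

# One choice of R : R^2 -> R^4; pinv(T) sends each z ∈ im(T) to a
# preimage, playing the role of the assignment z_α ↦ y_α above.
R = np.linalg.pinv(T) @ S

assert np.allclose(T @ R, S)   # S = T ∘ R
assert R.shape == (4, 2)       # R maps U = R^2 into V = R^4
```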

Answer:

Per request by the OP, I shall state and prove the dual of the problem stated in Zvi's answer. I also restate the primal problem here. As noted in the comments after Zvi's answer, the two problems are equivalent to stating that, for a given field $K$, all objects in the category of $K$-vector spaces are both projective and injective.

Primal Problem. Let $U$, $V$, and $W$ be vector spaces over a field $K$. If $K$-linear maps $S:U\to W$ and $T: V\to W$ satisfy $\operatorname{im}(S)\subseteq \operatorname{im}(T)$, then there exists a $K$-linear map $R:U\to V$ such that $S=T\circ R$.


Dual Problem. Let $U$, $V$, and $W$ be vector spaces over a field $K$. If $K$-linear maps $S:U\to V$ and $T: U\to W$ satisfy $\operatorname{ker}(S)\supseteq \operatorname{ker}(T)$, then there exists a $K$-linear map $R:W\to V$ such that $S=R\circ T$.

In the spirit of Federico's answer, let $N:=\ker(T)$, and decompose $U$ as $N\oplus X$ for some subspace $X$ of $U$. Then $T|_X:X\to W$ is an injective linear map. Let $Y:=\operatorname{im}(T)$, and write $W=Y\oplus Z$ for some subspace $Z$ of $W$.

Because $\tilde{T}:=T|_X$ is a bijective map from $X$ to $Y$, $\tilde{T}$ is invertible; write $\tilde{T}^{-1}:Y\to X$ for its inverse. Now, define $R:W\to V$ as follows: $$R(y+z):=S\big(\tilde{T}^{-1}(y)\big)\text{ for all }y\in Y\text{ and }z\in Z\,.$$ This is well defined because the decomposition $w=y+z$ is unique.

We shall prove that $S=R\circ T$. Let $n\in N$ and $x\in X$. Then, $$S(n+x)=S(x)\text{ since }N=\ker(T)\subseteq \ker(S)\,.$$ Now, $$(R\circ T)(n+x)=R\big(T(n+x)\big)=R\big(T(x)\big)\text{ as }n\in \ker(T)\,.$$ Since $T(x)=\tilde{T}(x)$, we get $$(R\circ T)(n+x)=R\big(\tilde{T}(x)\big)=S\Big(\tilde{T}^{-1}\big(\tilde{T}(x)\big)\Big)=S(x)\,,$$ by the definition of $R$. Therefore, $S=R\circ T$, as required.
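The dual construction can likewise be checked numerically (illustrative matrices, not from the answer): when $\ker T\subseteq\ker S$, the map $R = S\,T^{+}$ built from the pseudoinverse satisfies $R\circ T = S$, because $T^{+}T$ is the orthogonal projection onto the complement of $\ker T$, and $S$ vanishes on $\ker T$.

```python
import numpy as np

# S : R^3 -> R^2 and T : R^3 -> R^2 with ker(T) = span(e3) ⊆ ker(S).
T = np.array([[1., 0., 0.],
              [0., 1., 0.]])
S = np.array([[1., 2., 0.],
              [3., 4., 0.]])   # last column zero, so S kills ker(T)

# R := S ∘ pinv(T); pinv(T) @ T projects onto the orthogonal complement
# of ker(T), and S is unchanged by that projection since S kills ker(T).
R = S @ np.linalg.pinv(T)

assert np.allclose(R @ T, S)   # S = R ∘ T
```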

Answer:

Fix a basis $v_1, \ldots, v_n$ of $V$; for each basis element, pick some $w_i \in T^{-1} (S(v_i))$, which is nonempty because $\operatorname{Im} S \subseteq \operatorname{Im} T$. Then there is a unique linear transformation $R : V \to V$ such that $R(v_i) = w_i$ for each $i$. But then we must have $T \circ R = S$, since both sides are linear and agree at each basis vector $v_i$. (This argument also generalizes readily to the case where $V$ is not finite-dimensional.)
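This basis-by-basis recipe translates directly into code (a sketch with made-up matrices): take the $v_i$ to be the standard basis, pick each $w_i$ by solving $Tw_i = S(v_i)$ with a least-squares solver, which is exact here because each $S(v_i)$ lies in $\operatorname{im} T$, and assemble the $w_i$ as the columns of $R$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank ≤ 2
C = rng.standard_normal((4, 2))
S = T @ C   # guarantees im(S) ⊆ im(T)

# With v_i the standard basis vectors, S(v_i) is the i-th column of S.
# lstsq returns some w_i with T w_i = S(v_i) (exactly, up to rounding,
# since the column lies in im(T)); the w_i become the columns of R.
R = np.column_stack([np.linalg.lstsq(T, S[:, i], rcond=None)[0]
                     for i in range(S.shape[1])])

assert np.allclose(T @ R, S)   # T ∘ R = S on every basis vector
```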