Direct Sum (Linear Algebra) explanation


I just started a Matrix Theory course and I am having a hard time understanding some basic concepts. I am trying to get my head in the game!

Can someone explain, in more elementary terms, what direct sum actually means? The technical answer is not making sense to me. Examples and non-examples? Please break it down as much as possible. Thank you so much!


There are 3 best solutions below


First off, let $V$ be a vector space and $U, W$ subspaces of $V$. Then we define the sum of the subspaces to be $U+W=\{u+w: u \in U,\ w \in W\}$. We say the sum is direct if $U \cap W = \{0\}$.

Per your request, let's look at an example. Consider $\mathbb{R}^2$, which we can decompose as $\mathbb{R}^2 = \{x\text{-axis}\} \oplus \{y\text{-axis}\} \simeq \mathbb{R} \oplus \mathbb{R}$. It is clear that $\{x\text{-axis}\} + \{y\text{-axis}\} = \mathbb{R}^2$, and moreover the two subspaces intersect only at the origin, making the sum direct. Furthermore, we can identify each axis with an isomorphic copy of $\mathbb{R}$, and we have the result.
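A quick sketch of this example in code (the variable names here are illustrative, not part of the answer): every vector in $\mathbb{R}^2$ splits uniquely into an x-axis part and a y-axis part.

```python
import numpy as np

# Decompose a vector in R^2 into its x-axis and y-axis components.
v = np.array([3.0, -2.0])

u = np.array([v[0], 0.0])   # component in U = x-axis
w = np.array([0.0, v[1]])   # component in W = y-axis

# U + W reproduces v, and the decomposition is unique because the only
# vector lying on both axes is the origin, i.e. U ∩ W = {0}.
assert np.allclose(u + w, v)
```

Uniqueness is what makes the sum direct: any vector in both subspaces has the form $(a, 0) = (0, b)$, which forces $a = b = 0$.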


Here is one example where a direct sum has some physical meaning.

Consider two matrix-exponential distributions with vector-matrix representations:

$$<\bf{p}_1,\ \bf{B}_1,\ \bf{e}_1>$$

and

$$<\bf{p}_2,\ \bf{B}_2,\ \bf{e}_2>.$$

Then the direct sum $\bf{B}_k$, with vector-matrix representation $<\bf{p}_k,\ \bf{B}_k,\ \bf{e}_k>$, where

$$\bf{B}_k = \bf{B}_1 \otimes \bf{I}_2 + \bf{I}_1 \otimes \bf{B}_2 $$

$$\bf{p}_k = \bf{p}_1 \otimes \bf{p}_2$$

describes the random variable that is the minimum of the two. That is, $<\bf{p}_k,\ \bf{B}_k,\ \bf{e}_k>$ is a representation of the minimum order statistic.
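The construction $\bf{B}_k = \bf{B}_1 \otimes \bf{I}_2 + \bf{I}_1 \otimes \bf{B}_2$ is a Kronecker sum, which can be sketched with `np.kron` (the matrices below are made-up examples, not from the answer). A well-known check: the eigenvalues of a Kronecker sum are all pairwise sums of the eigenvalues of the two factors.

```python
import numpy as np

# Two small (hypothetical) generator-style matrices.
B1 = np.array([[-2.0, 1.0],
               [ 0.0, -3.0]])
B2 = np.array([[-1.0, 0.5],
               [ 0.0, -4.0]])

I1 = np.eye(B1.shape[0])
I2 = np.eye(B2.shape[0])

# B_k = B_1 ⊗ I_2 + I_1 ⊗ B_2  (the Kronecker sum from the answer)
Bk = np.kron(B1, I2) + np.kron(I1, B2)

# p_k = p_1 ⊗ p_2 for (hypothetical) initial vectors p_1, p_2.
p1 = np.array([0.7, 0.3])
p2 = np.array([0.6, 0.4])
pk = np.kron(p1, p2)

# Eigenvalues of the Kronecker sum are all pairwise sums of eigenvalues.
ev = sorted(np.linalg.eigvals(Bk).real)
expected = sorted((a + b).real
                  for a in np.linalg.eigvals(B1)
                  for b in np.linalg.eigvals(B2))
assert np.allclose(ev, expected)
```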


By definition, a vector space $V$ is the direct sum of two subspaces $U$ and $W$, denoted by $V = U \oplus W$, if every element $v$ of $V$ can be written as a sum of elements of $U$ and $W$, i.e. $v = u + w$ for some $u \in U,\ w \in W$. In addition to being able to reproduce all of $V$ in terms of elements of $U$ and $W$, we require $U \cap W = \{0\}$.

It can be shown that this definition is equivalent to being able to write $v = u + w$ such that the choice of $u$ and $w$ is unique. By uniqueness, we can represent each $v$ as the pair $(u, w)$ without loss of generality.

Entry-wise operations follow. Let $v_1 = u_1 + w_1$ and $v_2 = u_2 + w_2$; then $v_1 + v_2 = (u_1 + u_2) + (w_1 + w_2)$. By closure in their respective subspaces, $u_1 + u_2 \in U$ and $w_1 + w_2 \in W$, so $v_1 + v_2$, which can be written as $(u_1, w_1) + (u_2, w_2)$, equals $(u_1 + u_2, w_1 + w_2)$. Similarly, we can show $\lambda (u, w) = (\lambda u, \lambda w)$.
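The entry-wise operations above can be sketched as follows, using $\mathbb{R}^2$ with the axes as subspaces (the helper `decompose` is an illustrative name, not from the answer):

```python
# Unique (u, w) pair for v = u + w, with u on the x-axis and w on the y-axis.
def decompose(v):
    return ((v[0], 0.0), (0.0, v[1]))

v1, v2 = (1.0, 2.0), (3.0, 4.0)
(u1, w1), (u2, w2) = decompose(v1), decompose(v2)

# Adding pairs entry-wise agrees with decomposing the sum of the vectors.
pair_sum = ((u1[0] + u2[0], 0.0), (0.0, w1[1] + w2[1]))
assert pair_sum == decompose((v1[0] + v2[0], v1[1] + v2[1]))
```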

Note that in this representation, elements of $U$ and $W$ are written as $(u, 0)$ and $(0, w)$, respectively.

The canonical example is $\mathbb{R}^{2}$ with the $x$ and $y$ axes as subspaces: $\mathbb{R}^{2} = (\mathbb{R} \times \{0\}) \oplus (\{0\} \times \mathbb{R})$.