Why do block matrices behave so similarly to regular matrices?


I've been using block matrices a bit in my numerical analysis course. They have many identities that mimic the identities for regular matrices. I understand the proofs, which involve boiling things down into summations. Is there some algebraic structure behind block matrices that is causing these nice theorems? Are block matrices linear maps on the vector space of matrices or something?

There are 2 best solutions below


One intuition as to why block matrices behave nicely (but not too nicely) is that the set of $n \times n$ matrices forms a ring, so a block matrix acts as a linear operator on a module over that ring. This explains, at least in part, why some of the algebraic properties are retained.
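A small NumPy sketch of this viewpoint (the partition sizes here are my own choice for illustration): a $2 \times 2$ block matrix acting on a column of blocks behaves exactly like a matrix acting on a vector, with the inner matrices playing the role of scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner 2x2 matrices play the role of "scalars" from the ring of 2x2 matrices.
A11, A12, A21, A22 = (rng.standard_normal((2, 2)) for _ in range(4))
X1, X2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# The block matrix acting on a "vector" of blocks, like a linear map on a module:
Y1 = A11 @ X1 + A12 @ X2
Y2 = A21 @ X1 + A22 @ X2

# The same computation with the assembled 4x4 matrix and stacked blocks:
A = np.block([[A11, A12], [A21, A22]])
X = np.vstack([X1, X2])
Y = A @ X

assert np.allclose(Y, np.vstack([Y1, Y2]))
```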


Conventional linear algebra defines matrices as having entries from a field, which means the entries can be added and multiplied with the properties you'd expect from the reals - the relevant ones here being that multiplicative inverses are guaranteed to exist (except for $0^{-1}$), and that both operations are commutative.

Square matrices do not multiply commutatively, and not every non-zero matrix has an inverse. However, they satisfy all the other properties required of a field (they form a ring), which effectively means that so long as you don't "touch" these two exceptions, you'll find the same theorems as you would for matrices over an actual field.

Handily, adding and multiplying matrices relies only on the fact that addition and multiplication of the entries are possible at all, and doesn't involve dividing by scalars. Addition is easy - it works componentwise and has no strange edge cases. When multiplying, you have a choice to make about the order of the products inside each entry of the result, but since you weren't expecting the "outer" matrix product to be commutative anyway, you can naturally put them in the same order as the factors they came from.
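The order convention can be checked numerically. In this sketch (block sizes chosen arbitrarily), each entry of the blockwise product keeps the block from the left factor on the left; reversing that inner order gives a different, wrong answer, precisely because the inner "scalars" don't commute.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 4x4 matrices, each partitioned into four 2x2 blocks.
A11, A12, A21, A22 = (rng.standard_normal((2, 2)) for _ in range(4))
B11, B12, B21, B22 = (rng.standard_normal((2, 2)) for _ in range(4))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# Blockwise product: each term keeps the A-block on the left and the
# B-block on the right, matching the order of the factors in A @ B.
C = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(C, A @ B)

# Reversing the inner order (B-block first) gives a different result,
# since the inner matrices do not commute:
C_wrong = np.block([
    [B11 @ A11 + B21 @ A12, B12 @ A11 + B22 @ A12],
    [B11 @ A21 + B21 @ A22, B12 @ A21 + B22 @ A22],
])
assert not np.allclose(C_wrong, A @ B)
```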

The problems start appearing when you try relating these matrices-over-matrices to the underlying scalars, though. For starters, unlike with matrices over a field, $\lambda A \neq A\lambda$ in general for a "scalar" - that is, an inner matrix - $\lambda$. We do still have the analogous $kA=Ak$ for a $k$ chosen from the field that the inner matrices range over, because that multiplication can be defined componentwise on the outer matrix. And even when $\lambda A$ and $A \lambda$ are distinct, both exist.
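Both kinds of "scalar" multiplication are easy to demonstrate (the blockwise definitions below are one natural way to set this up): multiplying every block by an inner matrix $\lambda$ on the left or on the right gives two valid but generally different results, while a field scalar $k$ commutes as usual.

```python
import numpy as np

rng = np.random.default_rng(2)

lam = rng.standard_normal((2, 2))  # an inner-matrix "scalar"
blocks = [[rng.standard_normal((2, 2)) for _ in range(2)] for _ in range(2)]

# "Scalar" multiplication by the inner matrix lam, applied blockwise:
left = np.block([[lam @ b for b in row] for row in blocks])   # lam * A
right = np.block([[b @ lam for b in row] for row in blocks])  # A * lam

# Both exist, but in general they differ:
assert not np.allclose(left, right)

# A scalar k from the underlying field still commutes, since it acts
# componentwise on every entry:
k = 3.0
A = np.block(blocks)
assert np.allclose(k * A, A * k)
```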

A lack of inverses among the entries also means that the matrices themselves can lack inverses: for example, the factor $\frac{1}{ad-bc}$ in inverting a 2x2 matrix can fail to be defined even when $ad-bc\neq0$, since a non-zero matrix need not be invertible. That said, it's not much of a change, given that we weren't expecting every matrix to be invertible anyway.
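A minimal illustration, with blocks chosen by hand to make the failure visible: here $ad-bc$ is a non-zero matrix, yet it has no inverse, and the assembled block matrix is singular as well.

```python
import numpy as np

# Blocks chosen so that a @ d - b @ c is non-zero yet singular.
a = np.eye(2)
b = np.zeros((2, 2))
c = np.zeros((2, 2))
d = np.diag([1.0, 0.0])

det_like = a @ d - b @ c                    # = diag(1, 0)
assert np.any(det_like != 0)                # non-zero as a matrix...
assert np.linalg.matrix_rank(det_like) < 2  # ...but not invertible

# The assembled 4x4 block matrix is singular too:
M = np.block([[a, b], [c, d]])
assert np.linalg.matrix_rank(M) < 4
```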