Can someone help me understand the purpose of finding the null space? I understand how to calculate it, but I'm failing to see the big picture. To be honest, I feel like I understand certain aspects of linear algebra, but I don't see how the pieces relate: null space, column space, etc. I searched but couldn't find a good explanation of how it all fits together as a whole. Any help is appreciated, and I'm new, so if I did something wrong please correct me.
Purpose of the NULL Space and big picture
497 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 4 answers below
The null space is the set of vectors that a linear transformation sends to zero.
When you take a picture, you lose a dimension: the third dimension disappears.
Mathematically speaking, $T(x,y,z) = (x,y,0)$.
What happened to $z$? It was sent to $0$.
$T(0,0,5) = (0,0,0)$, so $(0,0,5)$ is in the null space of $T$.
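A quick numerical check of this projection, as a sketch using NumPy (the matrix `T` below is simply the matrix of the map $T(x,y,z)=(x,y,0)$):

```python
import numpy as np

# Matrix of the projection T(x, y, z) = (x, y, 0):
# it keeps x and y but sends the z-coordinate to 0.
T = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

v = np.array([0, 0, 5])
print(T @ v)   # [0 0 0] -- so (0, 0, 5) is in the null space of T

w = np.array([1, 2, 3])
print(T @ w)   # [1 2 0] -- the third dimension is "lost" in the picture
```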
The null space (or kernel) of a transformation is the collection of all vectors that map to the zero vector under the transformation.
One main use of the null space is to check whether the transformation is one-one: the transformation is injective (one-one) exactly when its null space is trivial, i.e. contains only the zero vector.
Moreover, in the case of matrices, we define the linear transformation
$T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ by
$T(x) = Ax$, where $A$ is the corresponding $m \times n$ matrix.
The kernel here is defined as
$\ker(T) = \{x \in \mathbb{R}^n \mid T(x) = Ax = 0\}$.
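The injectivity test above can be done numerically via rank: the null space is trivial exactly when $\operatorname{rank}(A) = n$. A sketch using NumPy (the matrices `A1` and `A2` are made-up examples):

```python
import numpy as np

def is_injective(A):
    # T(x) = Ax is one-one exactly when the null space is trivial,
    # i.e. when rank(A) equals the number of columns n.
    return np.linalg.matrix_rank(A) == A.shape[1]

A1 = np.array([[1, 0],
               [0, 1],
               [1, 1]])   # rank 2 = n: trivial null space, injective

A2 = np.array([[1, 2],
               [2, 4]])   # rank 1 < 2: nontrivial null space, not injective

print(is_injective(A1))  # True
print(is_injective(A2))  # False
```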
Many systems can be modeled using a state vector $\mathbf{x}$ whose evolution in time satisfies an equation of the form $$\mathbf{x}_{t+1}-\mathbf{x}_t=\mathsf{A}\mathbf{x}_t$$ or $$\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{x}(t)=\mathsf{A}\mathbf{x}(t)$$ for some constant matrix $\mathsf{A}$. For instance,
- in mechanics, $\mathbf{x}$ could be the positions and momenta of particles connected by springs relative some equilibrium, and $\mathsf{A}$ could be related to the masses, spring constants, and damping coefficients;
- in electrical engineering, $\mathbf{x}$ could be a vector of voltages and currents in various circuit elements, and $\mathsf{A}$ could be related to the resistance, capacitance, and inductance of the various elements; and
- in chemistry, $\mathbf{x}$ could be the probability distribution of finding particles in certain states, and $\mathsf{A}$ could be related to reaction rates.
Sometimes we are interested in finding the fixed points of the evolution—these correspond to nontrivial mechanical, thermal, or chemical equilibria. Fixed points are the vectors $\mathbf{x}$ such that $$\mathbf{x}_{t+1}-\mathbf{x}_t=\mathbf{0}$$ or $$\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{x}(t)=\mathbf{0}\text{.}$$ But these are exactly the vectors $\mathbf{x}$ such that $$\mathsf{A}\mathbf{x}=\mathbf{0}\text{,}$$ that is, the null space of $\mathsf{A}$.
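For a concrete fixed-point computation, here is a sketch using SciPy's `null_space` on a made-up $2 \times 2$ rate matrix (two states exchanging at rates 1 and 2):

```python
import numpy as np
from scipy.linalg import null_space

# Toy rate matrix (a made-up example) for d/dt x = A x.
A = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])

ns = null_space(A)    # orthonormal basis for the null space of A
x_eq = ns[:, 0]       # an equilibrium: A @ x_eq is the zero vector
print(A @ x_eq)       # ~[0, 0]: x_eq is a fixed point of the evolution
```

Here the null space is the line through $(2, 1)$, so every equilibrium distributes the two states in a $2{:}1$ ratio.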
I think the big picture is best seen from a historical perspective. The roots of linear algebra lie in the attempt to solve systems of linear equations. I trust you've seen those as examples in your linear algebra class, so you can write them compactly as $$ Ax = b $$ where the matrix $A$ contains the coefficients of the unknowns that make up the vector $x$, and $b$ is the vector of values you want to achieve by choosing values for the components of $x$.
It turns out that solving these equations when $b$ is the $0$ vector is a good way to start. Those solutions are the null space or kernel.
The size of the null space (its dimension) tells you how many free parameters the general solution has (for vectors $b \ne 0$), whenever there are any solutions at all.
Then the column space can tell you for which vectors $b$ there are solutions.
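The way the null space parametrizes the solution set of $Ax = b$ can be illustrated with a small singular example (the matrices and vectors below are made up for this sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: column space is the line through (1, 2)

b = np.array([3.0, 6.0])     # b lies in the column space, so Ax = b is solvable
x_p = np.array([3.0, 0.0])   # one particular solution: A @ x_p == b
n = np.array([-2.0, 1.0])    # spans the null space: A @ n == 0

# Every x_p + t*n is also a solution: once you have one solution,
# the null space gives you all of them.
for t in (0.0, 1.0, -4.5):
    print(A @ (x_p + t * n))  # [3. 6.] each time

# By contrast, b' = (1, 0) is not in the column space, so Ax = b' has
# no solution at all.
```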