What is the difference between a linear mapping and a bilinear mapping?


In particular, what's the difference between their matrices?

I think the main difference is that in a linear mapping you cannot multiply two vectors, but in a bilinear mapping you can. But I'm not sure if I understand it correctly. Thank you for your explanations :)


There are 2 best solutions below


The difference is that while

a linear map $L:V\to W$ is a function that takes a vector and gives a vector: $L(\vec v)=\vec w$

(and is linear: $L(a\vec x+b\vec y)=aL(\vec x) +b L(\vec y)$)

a bilinear map $B:V_1\times V_2 \to W$ takes two vectors (a pair in the Cartesian product) and gives a vector: $B(\vec v_1,\vec v_2)=\vec w$

(and it is linear in each of its two arguments separately)

I'm not sure I understand what you mean by ''in linear you cannot multiply two vectors but in bilinear mapping you can'': even in a bilinear map we cannot ''multiply the arguments'', in the sense that we cannot have something like $B(xy,z)=B(x,z)B(y,z)$.

As an example of a bilinear map that I suppose you know, you can think of the cross product of two vectors in $\mathbb{R}^3$, which takes two vectors $\vec v, \vec u$ and gives the vector $\vec v \times \vec u$.
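If it helps to see this numerically, here is a minimal NumPy sketch checking that the cross product is linear in each argument separately, but not linear as a map of the pair (the vectors and scalars below are arbitrary choices, not anything from the question):

```python
import numpy as np

# Check that B(u, v) = u x v is linear in each argument separately.
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
a, b = 2.0, -3.0

# Linearity in the first argument: B(a*u + b*w, v) = a*B(u, v) + b*B(w, v)
assert np.allclose(np.cross(a * u + b * w, v),
                   a * np.cross(u, v) + b * np.cross(w, v))

# Linearity in the second argument: B(u, a*v + b*w) = a*B(u, v) + b*B(u, w)
assert np.allclose(np.cross(u, a * v + b * w),
                   a * np.cross(u, v) + b * np.cross(u, w))

# But NOT linear as a map of the pair: scaling both inputs by a
# scales the output by a^2, not by a.
assert np.allclose(np.cross(a * u, a * v), a**2 * np.cross(u, v))
```

The last check is the key distinction: a bilinear map is linear in each slot with the other held fixed, which is weaker than linearity on the product space $V_1\times V_2$.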


I find the simple example of multiplying a matrix $A$ with a column vector $x$ useful in thinking about linear and bilinear forms.

Viewing $A$ as a linear transformation, $Ax$ is the weighted sum of columns of $A$ with column $i$ weighted by $x_i$. This is the linear column space perspective.

A basic bilinear transformation is the dot product, which, as the OP says, takes two vectors to a number.

From a dot product perspective, $A$ is simply a collection of rows, each of which we dot with $x$. The resulting list of numbers is $Ax$. If $Ax=0$, we have found a vector perpendicular to the row space of $A$. The focus is thus on the row space.
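The two perspectives compute the same product. A small NumPy sketch (with an arbitrary example matrix and vector) makes both explicit:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
x = np.array([1., 0., -1.])

# Column-space view: Ax is a weighted sum of the columns of A,
# with column i weighted by x_i.
col_view = sum(x[i] * A[:, i] for i in range(A.shape[1]))

# Row-space view: each entry of Ax is the dot product of a row of A with x.
row_view = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Both agree with the built-in matrix-vector product.
assert np.allclose(col_view, A @ x)
assert np.allclose(row_view, A @ x)
```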

Strang calls the two perspectives outer vs. inner products.

The difference can also be seen in how you undo the effects of $A$ on $x$. In the linear case, if you know $Ax-b=0$, your best bet (not always exactly correct) is $x-A^+b = 0$, where $A^+$ is the pseudoinverse.

In the dot product bilinear case $(Ax,b) = 0$ becomes $(x,A^Tb)=0$ where $A^T$ is the transpose.
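Both "undo" moves can be checked numerically. A sketch, assuming a random full-column-rank $A$ and a $b$ constructed to lie in its column space (so the pseudoinverse recovery is exact here):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # full column rank with probability 1
x = rng.standard_normal(3)
b = A @ x  # constructed so that Ax - b = 0 exactly

# Linear undo: x = A^+ b via the pseudoinverse (exact here because
# b is in the column space of A and A has full column rank).
x_rec = np.linalg.pinv(A) @ b
assert np.allclose(x_rec, x)

# Bilinear move: (Ax, b2) = (x, A^T b2) -- the transpose shifts A
# across the dot product instead of inverting it.
b2 = rng.standard_normal(4)
assert np.isclose((A @ x) @ b2, x @ (A.T @ b2))
```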

In a plain vector space without a dot product, you can't ask for the angle between two vectors; you can only ask for a matrix that moves one vector to the other. Multiplying and inverting these matrices corresponds to composing and inverting the transformations.

But once you have a dot product, you can immediately get the angle.

You can see all of this in action together in the OLS regression of $y$ on $X$. The first step minimizes the dot product of $y-Xb$ with itself; setting the derivative to zero gives the normal equations $X'y = X'Xb$. So far, only transposes ($'$). Next we take the linear perspective and pseudoinvert to get $b = (X'X)^+X'y$.
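The two steps can be sketched in NumPy (on made-up data, with `lstsq` as an independent check of the same minimization):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(20)

# Bilinear/transpose step: form the normal equations X'y = X'Xb.
XtX = X.T @ X
Xty = X.T @ y

# Linear/pseudoinverse step: b = (X'X)^+ X'y.
b = np.linalg.pinv(XtX) @ Xty

# Sanity check: lstsq solves the same least-squares problem directly.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(b, b_lstsq)
```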