In $\mathbb{R}^n$, is the dot product the only inner product?


Just what the title says. I'm reading, from various resources, that the inner product is a generalization of the dot product. However, the only example of inner product I can find, is still the dot product.

Does another "inner product" exist on the $\mathbb{R}^n$ space?

Three answers follow.

Best answer

Yes. In fact, every inner product on $\mathbb R^n$ has the form $$ \langle x, y\rangle=x^T\cdot A\cdot y $$ where $x^T$ is the transpose of $x$ and $A$ is an $n\times n$ symmetric positive definite matrix.

To see this, let $\langle x, y\rangle$ be an arbitrary inner product on $\mathbb R^n$. For every $y$ the function $$ f_y:x\mapsto \langle x, y\rangle $$ is linear from $\mathbb R^n$ to $\mathbb R$, so there exists a vector $\alpha(y)\in\mathbb R^n$ such that $$ \langle x, y\rangle = \alpha(y)^T\cdot x $$

Observe that $$ \langle x, ay+by'\rangle = a\langle x, y\rangle + b\langle x, y'\rangle\Rightarrow \alpha(ay+by')=a\alpha(y)+b\alpha(y'), $$ so $\alpha$ is a linear operator from $\mathbb R^n$ to itself, and hence there exists an $n\times n$ matrix $A$ such that $$ \alpha(y)=A\cdot y, $$ which gives $$ \alpha(y)^T\cdot x=y^T\cdot A^T\cdot x $$

Now recall that $\langle x, y\rangle=\langle y, x\rangle$; from this you can easily prove that $A^T=A$, i.e. $A$ must be symmetric.

Why must $A$ be positive definite? Because $\langle x, x\rangle\geq 0$, with equality if and only if $x=0$. Applying this to the formula above yields exactly the definition of a positive definite matrix.
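As a quick numerical sanity check of this characterization, here is a minimal sketch in plain Python. The matrix $A=\begin{bmatrix}2&1\\1&3\end{bmatrix}$ is a hypothetical example chosen only for illustration (it is symmetric with trace $5$ and determinant $5$, hence positive definite):

```python
# Inner product induced by a symmetric positive definite matrix A:
#   <x, y> = x^T A y
# A is symmetric with positive trace and determinant, hence SPD.
A = [[2.0, 1.0],
     [1.0, 3.0]]

def inner(x, y):
    """Compute x^T A y for 2-vectors x, y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

x, y = [1.0, -1.0], [2.0, 0.5]

print(inner(x, y))                  # bilinear form value: 1.0
print(inner(x, y) == inner(y, x))   # symmetry, since A^T = A: True
print(inner(x, x) > 0)              # positivity for x != 0: True
```

Replacing $A$ by the identity matrix recovers the ordinary dot product, so the dot product is just the special case $A=I$.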

Second answer

The dot product on $\mathbb{R}^n$ is defined as follows:

$$(a,b) = a^i b^j (e_i,e_j) = a^i b^j \delta_{ij} = a^i b^i ,$$

where $a,b \in \mathbb{R}^n$ and $e_i,e_j$ are the standard basis vectors. I used the Einstein summation convention here.

In general we can express $a,b$ in a different basis, i.e. $a=\tilde{a}^i \tilde{e}_i$ and $b = \tilde{b}^i \tilde{e}_i$, where $\{\tilde{e}_i\}$ is now an arbitrary basis of $\mathbb{R}^n$, still assuming $(\cdot,\cdot)$ is positive definite. This then gives:

$$(a,b) = \tilde{a}^i \tilde{b}^j (\tilde{e}_i,\tilde{e}_j) = \tilde{a}^i \tilde{b}^j A_{ij} \equiv \tilde{a}^T A \, \tilde{b}.$$

Note that $A$ is now not the identity matrix, as it is for the standard inner product in the standard basis.
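This can be verified directly: the Gram matrix $A_{ij}=(\tilde e_i,\tilde e_j)$ of a non-orthonormal basis reproduces the standard dot product when the vectors are expressed in that basis. A small sketch in plain Python, using the hypothetical basis $\{[1,1],[0,1]\}$ chosen for illustration:

```python
def dot(u, v):
    """Standard dot product of two vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

basis = [[1.0, 1.0], [0.0, 1.0]]   # tilde-e_1, tilde-e_2 (non-orthonormal)
A = [[dot(bi, bj) for bj in basis] for bi in basis]  # A_ij = (e_i, e_j)

# Components of a = [2, 3] and b = [1, 0] in the new basis, found by
# solving a = a~^i e~_i by hand for this simple basis.
a_t, b_t = [2.0, 1.0], [1.0, -1.0]

# a~^i b~^j A_ij
val = sum(a_t[i] * A[i][j] * b_t[j] for i in range(2) for j in range(2))
print(A, val)   # A = [[2, 1], [1, 1]]; val equals [2,3].[1,0] = 2
```

The same bilinear form $(a,b)$ is computed either way; only the matrix representing it changes with the basis.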

Third answer

The metric tensor keeps the norm of a vector invariant under a change of basis vectors, and it is an example of an inner product (page 15). In the simple setting of basis vectors constant in direction and magnitude from point to point in $\mathbb R^2,$ here is an example:

Changing basis vectors from Euclidean orthonormal basis $\small\left\{\vec e_1=\begin{bmatrix}1\\0 \end{bmatrix},\vec e_2=\begin{bmatrix}0\\1 \end{bmatrix}\right\}$ to

$$ \left\{\vec u_1=\color{red}2\vec e_1 + \color{red}1\vec e_2,\quad \vec u_2=\color{blue}{-\frac 1 2 }\vec e_1+\color{blue}{\frac 1 4} \vec e_2\right\}\tag 1$$ would change the components of the vector $\vec v=\begin{bmatrix} 1\\1\end{bmatrix}_\text{e basis},$ whose squared norm is $\Vert \vec v \Vert^2=v_1^2 + v_2^2 = 2,$ so naively summing squared components with respect to the new basis vectors would give a different (wrong) norm.

In this regard, since vector components transform contravariantly, the change to the new coordinate system would be given by the backward transformation matrix:

$$B=\begin{bmatrix} \frac 1 4 & \frac 1 2\\-1&2\end{bmatrix}$$

i.e. the inverse of the forward transformation for the basis vectors as defined in $(1),$ which in matrix form corresponds to

$$F=\begin{bmatrix} \color{red} 2 & \color{blue}{-\frac 1 2}\\\color{red}1& \color{blue}{\frac 1 4}\end{bmatrix}.$$

Hence, the same vector $\vec v$ expressed in the new basis vectors is

$$\vec v_{\text{u basis}}=\begin{bmatrix} \frac 1 4 & \frac 1 2\\-1&2\end{bmatrix}\begin{bmatrix} 1\\1\end{bmatrix}=\begin{bmatrix} \frac 3 4\\1\end{bmatrix}.$$

And the norm entails an inner product with the new metric tensor; in the orthonormal Euclidean basis this is simply the identity matrix. Now the metric tensor is a $(0,2)-\text{tensor},$ and transforms covariantly:

$$g_{\text{ in u basis}}=F^\top I\, F= \begin{bmatrix} 2 & 1\\-\frac 1 2& \frac 1 4\end{bmatrix}\begin{bmatrix} 1 & 0\\0& 1\end{bmatrix}\begin{bmatrix} 2 & -\frac 1 2\\1& \frac 1 4\end{bmatrix}=\begin{bmatrix} 5 & -\frac 3 4\\- \frac 3 4& \frac{5}{16}\end{bmatrix}$$

The actual multiplication of basis vectors to obtain the metric tensor is explained in this presentation by @eigenchris. This metric tensor indeed yields the right norm of $\vec v:$

$$\begin{bmatrix}\frac 3 4 & 1\end{bmatrix}\begin{bmatrix} 5 & -\frac 3 4\\- \frac 3 4& \frac{5}{16}\end{bmatrix}\begin{bmatrix}\frac 3 4 \\ 1\end{bmatrix}=2$$

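The whole computation above can be replayed in plain Python as a sketch: build $B=F^{-1}$ and $g=F^\top F$, convert $\vec v$ to the u basis, and check that the metric recovers $\Vert\vec v\Vert^2=2$:

```python
# Change of basis from the answer: columns of F are the u-basis vectors
# expressed in the e basis; B = F^{-1} converts e-components to
# u-components (det F = 2*(1/4) - (-1/2)*1 = 1, so B needs no division).
F = [[2.0, -0.5],
     [1.0, 0.25]]
B = [[0.25, 0.5],
     [-1.0, 2.0]]

def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    """2x2 matrix times 2-vector."""
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

Ft = [[F[j][i] for j in range(2)] for i in range(2)]  # F transpose
g = matmul(Ft, F)                 # metric tensor in the u basis

v = [1.0, 1.0]                    # components in the e basis
v_u = matvec(B, v)                # components in the u basis: [3/4, 1]

norm_sq = sum(v_u[i] * matvec(g, v_u)[i] for i in range(2))
print(g, v_u, norm_sq)            # norm_sq recovers 2.0
```

All the entries involved are exact binary fractions, so the equality with $2$ holds exactly in floating point here.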