What are the dual basis vectors?


What exactly are dual basis vectors, such as those which arise in non-orthogonal coordinate systems? What is their physical interpretation?

Please note, I don't know much tensor calculus yet. I am in fact interested in them because of their connection with the reciprocal lattice in solid state physics.

However I could not get a satisfactory answer in physics SE hence I am hoping to get a better insight from mathematicians.

Thank you.


BEST ANSWER

The simplest explanation is the following: Given a basis $({\bf e}_i)_{1\leq i\leq n}$ of some vector space $V$ over a field $F$ each vector ${\bf x}\in V$ gets coordinates $x_i\in F$ $\>(1\leq i\leq n)$ with respect to that basis: $${\bf x}=\sum_{i=1}^n x_i{\bf e}_i\ .$$ In fact, for each $i$, the $i^{\rm th}$ coordinate of ${\bf x}$ depends linearly on ${\bf x}$. This means that we have $n$ linear functionals $$\phi_i:\quad V\to F,\qquad{\bf x}\mapsto x_i\qquad(1\leq i\leq n)\tag{1}$$ which compute the $n$ coordinates of any input vector ${\bf x}$. These $\phi_i$ together constitute the dual basis of $({\bf e}_i)_{1\leq i\leq n}$, and are denoted by ${\bf e}_i^*$ (or similar). We therefore may replace $(1)$ by $${\bf e}_i^*:\quad V\to F,\qquad{\bf x}\mapsto x_i\qquad(1\leq i\leq n)\ .$$ It is then obvious that ${\bf e}_i^*\bigl({\bf e}_k\bigr)=\delta_{ik}$ (Kronecker-Delta), since ${\bf e}_k$ has its $k^{\rm th}$ coordinate $=1$, and all other coordinates $=0$.
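To make this concrete, here is a small numerical sketch (the particular basis is my own illustrative choice, not from the answer): in $\Bbb R^3$, writing the basis vectors as the columns of a matrix $B$, the dual basis functionals are represented by the rows of $B^{-1}$.

```python
import numpy as np

# A non-orthogonal basis of R^3: e_1, e_2, e_3 as the columns of B.
e1, e2, e3 = np.array([1., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 1., 1.])
B = np.column_stack([e1, e2, e3])

# The coordinates of a vector x satisfy B @ coords = x, so coords = B^{-1} @ x.
# The i-th row of B^{-1} therefore represents the functional e_i^*.
B_inv = np.linalg.inv(B)
dual = [B_inv[i] for i in range(3)]   # e_1^*, e_2^*, e_3^* as row vectors

# Biorthogonality: e_i^*(e_k) = delta_ik.
delta = np.array([[d @ e for e in (e1, e2, e3)] for d in dual])
print(np.allclose(delta, np.eye(3)))  # True

# e_i^* extracts the i-th coordinate of any vector:
x = 2*e1 - 3*e2 + 5*e3
print([d @ x for d in dual])          # the coordinates 2, -3, 5
```

This is exactly the reciprocal-lattice construction in solid state physics: up to a factor of $2\pi$, the reciprocal basis vectors are the rows of the inverse of the matrix of direct lattice vectors.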

ANSWER

This is not a mathematical explanation, but I find this interpretation more intuitive. Consider a surface in $\Bbb{R}^3$ parameterised by a set of 3 equations (indexed with Latin letters taking values in $\{1,2,3\}$). As the surface is 2-dimensional, we naturally have 3 equations of 2 variables (indexed with Greek letters in $\{1,2\}$). Aptly termed, the surface is 'embedded' in the 'ambient' space. It is 2-dimensional, living in a 3-dimensional world... like all of us! Compactly, we can write the equations of the surface in index notation as:

$$x^i = x^i(S^{\alpha})$$

which represent the set of points on the surface, where $x^i$ are the components of the position vector on the surface. The covariant basis vectors, say $\vec{g}_\alpha$, are computed as (I'll use $\vec{e}_i$ to denote the basis of the ambient space $\Bbb{R}^3$):

$$\vec{g}_\alpha = \frac{\partial x^i}{\partial S^{\alpha}}\vec{e}_i$$

which is just the partial derivative of the position vector on the surface. These basis vectors are in fact tangent to the coordinate lines and span the tangent space of the surface. Their 'dual' basis vectors $\vec{g}^\alpha$ are obtained as the vectors normal to the lines of constant $S^\alpha$. The easiest way I find to compute them is by 'raising the index' using the metric on the surface. Let's denote the metric by $g_{\alpha \beta} = \vec{g}_\alpha \bullet \vec{g}_\beta$ (in this case a $2\times 2$ matrix) and its inverse by $g^{\alpha \beta}$:

$$\vec{g}^\alpha = g^{\alpha \beta}\vec{g}_{\beta}$$

There is a nice picture showing this here: https://upload.wikimedia.org/wikipedia/commons/b/b2/Basis.svg

You can check that indeed $\vec{g}^\alpha \bullet \vec{g}_\beta = g^{\alpha \rho}g_{\rho \beta} = \delta^{\alpha}_{\beta}$ by virtue of the relationship between the two metrics. Hence the two sets of basis vectors are 'kinda reciprocal', as I like to imagine them.
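As a concrete numerical sketch of this construction (using the unit sphere as an example surface, which is my own choice, not one from the answer), one can compute the covariant basis, the surface metric, and the dual basis, and then verify the biorthogonality check above:

```python
import numpy as np

# Unit sphere parameterised by S^1 = theta, S^2 = phi:
#   x(theta, phi) = (sin t cos p, sin t sin p, cos t)
theta, phi = 1.0, 0.7   # an arbitrary point away from the poles

# Covariant (tangent) basis vectors g_alpha = d x / d S^alpha:
g_th = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
g_ph = np.array([-np.sin(theta)*np.sin(phi), np.sin(theta)*np.cos(phi), 0.0])
G_cov = np.array([g_th, g_ph])           # rows are g_1, g_2

# Surface metric g_{ab} = g_a . g_b and its inverse g^{ab}:
g_lo = G_cov @ G_cov.T
g_hi = np.linalg.inv(g_lo)

# Raise the index: g^a = g^{ab} g_b
G_con = g_hi @ G_cov                     # rows are g^1, g^2

# Biorthogonality check: g^a . g_b = delta^a_b
print(np.allclose(G_con @ G_cov.T, np.eye(2)))  # True
```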

If you want to learn more about tensor calculus in a non-intimidating context, I find Pavel Grinfeld's book on the tensor calculus of moving surfaces to be a brilliant introduction to the subject. What's even better is that he has YouTube videos that I found extremely helpful. It is a beautiful topic, worth taking the time to appreciate fully.

Added: Note that although I used a surface to explain the idea, this carries over to the ambient space; in fact any vector space (with an inner product) has 2 sets of bases, a 'covariant' (lower index) set and a 'contravariant' (upper index) set, which are dual. Note that for any vector, say:

$$\vec{a} = a_i\vec{e}^i = a^i\vec{e}_i$$

I can get both 'flavoured' components by dotting the vector with the appropriate basis vector, eg:

$$a^i = \vec{a} \bullet \vec{e}^i, \qquad a_i = \vec{a} \bullet \vec{e}_i$$

The two sets of basis vectors also transform in a certain way. Briefly, if I were to perform a change of basis in which I double the 'covariant' basis vectors, the 'contravariant' basis vectors would halve. Hope this helps.
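This scaling behaviour is easy to check numerically; here is a minimal sketch with an arbitrary 2D basis of my own choosing:

```python
import numpy as np

# An arbitrary non-orthogonal basis, written as the columns of E.
E = np.array([[1.0, 0.5],
              [0.0, 1.0]])           # columns are e_1, e_2
dual = np.linalg.inv(E)              # rows are the dual vectors e^1, e^2

# Doubling the covariant basis halves the contravariant (dual) basis:
dual2 = np.linalg.inv(2 * E)
print(np.allclose(dual2, dual / 2))  # True
```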

ANSWER

Maybe you thought of this: let $S$ be a subspace of a Hilbert space $V$, let $g_1,\dots,g_n$ be a basis of $S$ (not orthogonal, but of unit norm), and let's say you want to find the projection of $f\in V$ onto $S$.

If the $g_i$, $1\le i \le n$, were orthonormal, you would have $P_S f = \sum_{i=1}^n \langle f,g_i\rangle g_i$.

But since they are not, find the dual basis of the $g_i$, $1\le i \le n$: the dual basis is $h_1,\dots,h_n$ in $S$ such that $\langle g_i,h_j \rangle=\delta_{i,j}$ ($1$ when $i=j$, $0$ otherwise).

Now you can get the projection as $P_S f = \sum_{i=1}^n \langle f,g_i\rangle h_i$ or $P_S f = \sum_{i=1}^n \langle f,h_i\rangle g_i$.

This can sometimes be more useful than running the Gram-Schmidt algorithm on the $g_i$ to find an orthogonal basis, especially if you want the projection expressed in terms of the $g_i$.
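Here is a numerical sketch of this recipe (with random vectors of my own choosing; the dual basis is computed from the Gram matrix $M_{ij}=\langle g_i,g_j\rangle$ as $h_i=\sum_j (M^{-1})_{ij}\,g_j$, which gives $\langle g_i,h_j\rangle=\delta_{ij}$):

```python
import numpy as np

rng = np.random.default_rng(0)

# A non-orthogonal basis g_1, g_2, g_3 of a subspace S of R^5,
# normalised to unit length as in the answer:
G = rng.standard_normal((3, 5))
G /= np.linalg.norm(G, axis=1, keepdims=True)    # rows are g_i

# Dual basis inside S, via the inverse Gram matrix:
M = G @ G.T                                      # M_ij = <g_i, g_j>
H = np.linalg.inv(M) @ G                         # rows are h_i

print(np.allclose(G @ H.T, np.eye(3)))           # <g_i, h_j> = delta_ij: True

# The two expansions of the projection of f onto S agree:
f = rng.standard_normal(5)
P1 = (f @ G.T) @ H                               # sum_i <f, g_i> h_i
P2 = (f @ H.T) @ G                               # sum_i <f, h_i> g_i
print(np.allclose(P1, P2))                       # True

# Sanity check: the residual f - P_S f is orthogonal to every g_i.
print(np.allclose(G @ (f - P1), 0))              # True
```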