What does it mean exactly that every finite dimensional vector space is isomorphic to its dual space?


For context, I'm trying to really understand bra-ket notation in QM. I tried a few years ago and IIRC, something like $\langle a|b\rangle$ is just the dot product of vectors $a$ and $b$. Technically, $|b\rangle$ is a member of a vector space $V$ and $\langle a|$ is a member of its dual space. Where I'm struggling though is in really understanding the relationship between the dual space and the dot product (or inner products more generally).

I get that the dual space is the vector space of all linear functions from $V$ to its field of scalars, call it $F$. And if I have something like, say, $\langle 1,2,3\rangle \cdot \langle 4,5,6\rangle=4+10+18=32$, that's clearly a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}$. And either one of those vectors could be treated as the linear function acting on the other one, just like multiplying two matrices together can be interpreted as applying a linear transformation. But that's the key part that doesn't quite make sense to me -- while we can choose to interpret the inner product of two vectors as one vector acting as a linear transformation on the other, it doesn't seem like such an interpretation necessarily follows from the definition of a vector space or that of the inner product.
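To make the two readings of that dot product concrete, here is a small sketch (purely illustrative, using the vectors from the question): the same number $32$ arises either as a symmetric pairing of two vectors, or as a $1\times 3$ row vector -- a linear functional -- applied to a $3\times 1$ column vector.

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Symmetric view: the dot product <a, b>.
dot = a @ b

# Functional view: reshape a into a row vector, i.e. a linear map R^3 -> R,
# and apply it to b viewed as a column vector.
functional = a.reshape(1, 3)
applied = (functional @ b.reshape(3, 1)).item()

print(dot, applied)  # both give 32
```

The two computations are numerically identical; the question is exactly about what extra choice licenses reading `a` as the map rather than as a vector.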

Further, if I simply have the vector space of all row vectors of the form $[a,b]$ where $a$ and $b$ are real numbers, it seems even less clear why they should be thought of as linear transformations from $\mathbb{R}^2$ to $\mathbb{R}$, rather than merely as pairs of real numbers with the additional structure imposed by the vector space axioms.

In other words, it seems like, in order for the dual space, $V^*$, to really be the same as $V$ in a meaningful way, we have to make additional assumptions about how to interpret what the elements of $V^*$ mean. And yet, an isomorphism, by definition, is supposed to be "structure preserving"; at least that's how Wikipedia defines it and it feels like that's not happening here, not without imposing additional assumptions. So what's going on here? What kind of structure does mapping $V$ to $V^*$ actually preserve?


There are 3 best solutions below

On BEST ANSWER

Let $V$ be a vector space over $\mathbb{R}$. If we have a vector $v \in V$ and a functional $f \in V^*$, then $f$ is a map $V \rightarrow \mathbb{R}$, so we have $f(v) \in \mathbb{R}$. So, the bra-ket notation just means $\langle f \mid v \rangle = f(v)$. But then, what does it mean to write $\langle u \mid v \rangle$ if $u,v$ are both vectors in $V$?

If $V$ is equipped with an inner product $\langle \cdot, \cdot \rangle$, then for any $u \in V$, there is a linear functional $f_u : V \rightarrow \mathbb{R}$ defined by $f_u(v) = \langle u, v \rangle$. That is, $f_u \in V^*$. So, the bra-ket notation means $\langle u \mid v \rangle = \langle f_u \mid v \rangle = f_u(v) = \langle u , v \rangle$. Essentially, we're considering $u$ to be part of $V^*$ by identifying it with $f_u$.

In fact, this defines a linear map $i : V \rightarrow V^* : u \mapsto f_u$. I'll let you check that it's injective, and if $V$ is finite-dimensional, then $i$ is actually an isomorphism. This means that any $f \in V^*$ can be written as $f = f_u$ for some $u \in V$. So, when we write the bra-ket notation $\langle u \mid v \rangle$ for finite-dimensional spaces, it's not so important whether we think of $u$ as belonging to $V$ or $V^*$, because we have this handy way of converting between them.
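The map $i : u \mapsto f_u$ from this answer can be sketched numerically for $V = \mathbb{R}^3$ with the standard inner product (the names `f`, `u`, `v`, `w` below are illustrative, not canonical):

```python
import numpy as np

def f(u):
    """Return the linear functional f_u induced by u via the standard inner product."""
    return lambda v: float(np.dot(u, v))

u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 1.0, 4.0])
w = np.array([0.0, 2.0, -1.0])

# f_u is linear: f_u(2v + 3w) = 2 f_u(v) + 3 f_u(w).
lhs = f(u)(2 * v + 3 * w)
rhs = 2 * f(u)(v) + 3 * f(u)(w)

# Injectivity in finite dimensions: u can be recovered from f_u
# by evaluating it on the standard basis vectors.
recovered = np.array([f(u)(e) for e in np.eye(3)])

print(lhs, rhs, recovered)
```

Recovering `u` from the values of `f(u)` on a basis is the finite-dimensional heart of why $i$ is injective, hence an isomorphism.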

Note: in infinite-dimensional spaces, or if $V$ is over $\mathbb{C}$ instead, things are more tricky. I'd encourage you to check out the Riesz Representation Theorem for more info.

(It's been a while since I've done linear algebra, so please edit if you notice any mistakes!)


If $V$ is a finite dimensional vector space over $\mathbb{R}$, then $V^{*}$ is defined to be $\mathrm{Hom}_{\mathbb{R}}(V,\mathbb{R})$, i.e. the set of all linear maps $V\rightarrow \mathbb{R}$, with the additional structure of pointwise addition and scalar multiplication (which turns it into a vector space). A basic fact of linear algebra is that two finite dimensional vector spaces of the same dimension are isomorphic, where an isomorphism of vector spaces over $\mathbb{R}$ is a bijective linear map between them. Linear algebra tells us $V^{*}$ has the same dimension as $V$, and therefore the two must be isomorphic. So yes, in that sense $V$ and $V^{*}$ are the same, but they are not naturally so. The term "natural" is slightly vague, but in this case it means that the isomorphism between $V$ and $V^{*}$ depends on additional choices, such as a basis. However, once we add an inner product on $V$, we can take the isomorphism to be independent of the basis, and so in this sense, once inner products are involved, there exists a more natural isomorphism.
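The basis dependence this answer mentions can be sketched concretely. Assuming a basis of $\mathbb{R}^2$ given by the columns of a matrix `B`, the corresponding dual basis functionals are the rows of `inv(B)` (row $i$ applied to basis vector $j$ gives the Kronecker delta). A different basis produces different functionals, i.e. a different identification of $V$ with $V^{*}$:

```python
import numpy as np

# A basis of R^2, stored as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
dual = np.linalg.inv(B)   # rows are the dual basis functionals

# Dual basis property: row i of `dual` applied to column j of B is delta_ij,
# so dual @ B is the identity matrix.
delta = dual @ B

# A different choice of basis gives different dual functionals,
# hence a different basis-dependent isomorphism V -> V*.
B2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])
dual2 = np.linalg.inv(B2)

print(delta)
print(np.allclose(dual, dual2))  # False: the identification changed with the basis
```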


This is easier than it seems. Say I have two column vectors, call them $c$ and $v$, with $c$ held constant. I can now turn $c$ into a linear functional by considering $c^T v$. I think physicists call one side of this the bra and the other the ket (maybe $\langle c \vert$ and $\vert v \rangle$?). So the space of linear functionals can be represented by row vectors of the same size. I think you'll find it no surprise that the map $x \mapsto x^T$ is an isomorphism, which is why, in the finite dimensional case, the dual space is isomorphic to the original space.
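This answer's transpose map is easy to sketch in numpy (the bra/ket labels in the comments follow the answer's guess and are illustrative):

```python
import numpy as np

c = np.array([[1.0], [2.0], [3.0]])  # column vector, the "ket" |c>
v = np.array([[4.0], [5.0], [6.0]])  # column vector, the "ket" |v>

bra_c = c.T                 # row vector, the "bra" <c|: a linear map R^3 -> R
value = (bra_c @ v).item()  # <c|v> = c^T v = 1*4 + 2*5 + 3*6

print(value)  # 32.0
```

Transposing is linear and invertible (its own inverse), which is the isomorphism between column vectors and the row vectors representing functionals.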