Need help in understanding an example illustrating a theorem in Linear Algebra


Recently, I came across a theorem in Linear Algebra while studying dual spaces. The theorem goes like this:

Let $V$ and $W$ be finite-dimensional vector spaces over $F$ with ordered bases $β$ and $γ,$ respectively. For any linear transformation $T: V → W,$ the mapping $T^t : W^∗ → V^∗$ defined by $T^t(g) = gT$ for all $g ∈ W^∗$ is a linear transformation with the property that $[T^t]^{β^∗}_{γ^∗} = ([T]^γ_β)^t.$
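The defining relation $T^t(g) = gT$ can be checked numerically in coordinates. The sketch below (my own illustration, not part of the original theorem; the matrix $M$ and the vectors are made-up sample values) represents a functional $g$ by its coordinate row vector in $γ^∗$ and $T$ by its matrix, and verifies that evaluating $g$ after $T$ agrees with applying the row vector $g M$, which is where the transpose $M^t$ enters:

```python
import numpy as np

# Sample data (hypothetical, for illustration only):
M = np.array([[1.0, 0.0],
              [1.0, 2.0]])        # a matrix [T] of some linear map T
g_row = np.array([3.0, -1.0])     # coordinates of a functional g in the dual basis
v = np.array([0.5, 2.0])          # coordinates of a vector v in beta

# (T^t(g))(v) = g(T(v)): apply T first, then the functional g ...
lhs = g_row @ (M @ v)

# ... which equals acting on v by the row vector g_row @ M,
# i.e. by the column vector M.T @ g_row -- the transpose of [T] acting on g.
rhs = (M.T @ g_row) @ v

print(np.isclose(lhs, rhs))
```

So in coordinates, $T^t$ is exactly "multiply the coordinate vector of $g$ by $([T]^γ_β)^t$," which is the content of the theorem.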

I came across an example illustrating the theorem as follows:

Define $T: P_1(R) → R^2$ by $T(p(x)) = (p(0), p(2)).$ Let $β$ and $γ$ be the standard ordered bases for $P_1(R)$ and $R^2,$ respectively. Clearly, $[T]^γ_β =\begin{pmatrix}1 & 0\\1 & 2\end{pmatrix}.$ We compute $[T^t]^{β^∗}_{γ^∗}$ directly from the definition. Let $β^∗ = \{f_1, f_2\}$ and $γ^∗ =\{g_1, g_2\}.$ Suppose that $[T^t]^{β^∗}_{γ^∗} =\begin{pmatrix}a & b\\c & d\end{pmatrix}.$ Then $T^t(g_1) = af_1 + cf_2.$ So $T^t(g_1)(1) = (af_1 + cf_2)(1) = af_1(1) + cf_2(1) = a(1) + c(0) = a.$ But also $(T^t(g_1))(1) = g_1(T(1)) = g_1(1, 1) = 1.$ So $a = 1.$ Using similar computations, we obtain that $c = 0, b = 1,$ and $d = 2.$ Hence a direct computation yields $[T^t]^{β^∗}_{γ^∗}=\begin{pmatrix}1 & 1\\0 & 2\end{pmatrix}=([T]^γ_β)^t,$ as predicted by the Theorem.
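The computation in this example can be replayed numerically. The sketch below (my own check, not part of the textbook example) represents $p(x) = a + bx$ by the coordinate vector $[a, b]$ in $β = \{1, x\}$; column $j$ of $[T^t]^{β^∗}_{γ^∗}$ is then filled with the values $(g_j \circ T)(1)$ and $(g_j \circ T)(x)$, since a functional $f$ has dual-basis coordinates $(f(1), f(x))$:

```python
import numpy as np

# T : P_1(R) -> R^2, T(p) = (p(0), p(2)), with p given by
# its coordinate vector [a, b] for p(x) = a + b*x in beta = {1, x}.
def T(coords):
    a, b = coords
    p = lambda x: a + b * x
    return np.array([p(0), p(2)])

# [T]^gamma_beta: columns are T(1) and T(x).
M = np.column_stack([T([1, 0]), T([0, 1])])     # [[1, 0], [1, 2]]

# Dual basis of gamma: g1, g2 pick out the coordinates of R^2.
g = [lambda v: v[0], lambda v: v[1]]

# Column j of [T^t]: coordinates of g_j . T in beta*, namely
# the values (g_j . T)(1) and (g_j . T)(x).
Tt = np.column_stack([
    [g_j(T([1, 0])), g_j(T([0, 1]))] for g_j in g
])

print(M)                          # [[1 0], [1 2]]
print(Tt)                         # [[1 1], [0 2]]
print(np.array_equal(Tt, M.T))    # True
```

Note in particular that the functionals $T^t(g_j)$ are evaluated on the *polynomials* $1$ and $x$ (via their coordinate vectors), not on elements of $R^2$.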


However, the example seems quite strange to me, because I don't understand why they write $T^t(g_1)(1) = (af_1 + cf_2)(1) = af_1(1) + cf_2(1) = a(1) + c(0) = a.$ As far as I can tell, the domain of the function $T^t(g_1)$ is $R^2,$ and $1$ is clearly not an element of $R^2,$ so this makes no sense to me at all. Am I missing something?