What is the connection between the two ways in which functions get represented as vectors when discussing inner product?


When I interpret the inner product and orthogonality of two functions, I think of it the following way. Say you have a basis $\{\sin(x), \cos(x)\}$ for a function space, and two vectors $u = (2,2)$ and $v = (-1,1)$ representing the functions $2\sin(x)+2\cos(x)$ and $-\sin(x)+\cos(x)$ respectively. These coordinate vectors $u$ and $v$ are orthogonal. Hence the functions they represent are orthogonal on some interval, in this case $[-\pi,\pi]$. Why? I don't really know.
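Assuming the inner product meant here is $\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)g(x)\,dx$ (that assumption is mine, not stated in the question), a quick numerical sketch checks that both the coordinate vectors and the functions themselves come out orthogonal:

```python
import numpy as np

u = np.array([2.0, 2.0])    # coordinates of 2*sin(x) + 2*cos(x)
v = np.array([-1.0, 1.0])   # coordinates of  -sin(x) +   cos(x)
print(np.dot(u, v))         # 0.0: the coordinate vectors are orthogonal

# Numerical check of the assumed L^2 inner product ∫ f g dx on [-pi, pi]
# via a simple Riemann sum.
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
f = 2*np.sin(x) + 2*np.cos(x)
g = -np.sin(x) + np.cos(x)
print(np.sum(f * g) * dx)   # ≈ 0: the functions are orthogonal too
```

The reason the two computations agree is that $\sin$ and $\cos$ are orthogonal on $[-\pi,\pi]$ and have equal norm there ($\int_{-\pi}^{\pi}\sin^2 = \int_{-\pi}^{\pi}\cos^2 = \pi$), so $\langle f,g\rangle = \pi\,(u\cdot v)$.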

But that was just a side question; my main question is that in other texts I have seen the inner product defined by sampling the functions at different points. A function $f(x)$ is sampled at points $t_0, t_1, \dots, t_n$ and represented by the vector $(f(t_0), f(t_1), \dots, f(t_n))$. Another function can then be represented as the vector $(g(t_0), g(t_1), \dots, g(t_n))$, and the inner product of $f$ and $g$ is defined as the dot product of these vectors:

$f(t_0)\cdot g(t_0) + f(t_1)\cdot g(t_1) + \dots + f(t_n)\cdot g(t_n)$.
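For instance, with $f = \sin$ and $g = \cos$ and a few sample points (my own illustrative choice, not from any text), that definition in code reads:

```python
import numpy as np

# Sample points t_0, ..., t_n (an arbitrary symmetric choice)
t = np.array([-np.pi, -np.pi/2, 0.0, np.pi/2, np.pi])

f_vec = np.sin(t)   # (f(t_0), ..., f(t_n)) ≈ [ 0, -1, 0, 1,  0]
g_vec = np.cos(t)   # (g(t_0), ..., g(t_n)) ≈ [-1,  0, 1, 0, -1]

# The sampled inner product: f(t_0)g(t_0) + ... + f(t_n)g(t_n)
print(np.dot(f_vec, g_vec))   # ≈ 0 for this choice of points
```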

My question is: what is the connection between the vectors $u, v$ and the sampled vectors? Both pairs of vectors represent the same functions, and if the functions are orthogonal then the dot product within each pair is 0. But I still can't see the link between them.

I'll give an example of how the text I have read defines the inner product by sampling.

Consider $p(t) = 2t^2 - t + 1$ and $q(t) = 2t-1$. These functions live in the function space with basis $\{1, t, t^2\}$. This is how my text defines the inner product between them; I quote:

"Step 1: You should first sample the polynomials at the values -1, 0, and 1. The samples of each polynomial will give you a vector in $R^3$."

"Step 2: You take the dot product of the two vectors created by sampling. (As a variation, this step could be a weighted inner product)"
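Carrying out the two quoted steps for $p$ and $q$ (a direct sketch of the recipe, with the plain unweighted dot product):

```python
import numpy as np

def p(t):
    return 2*t**2 - t + 1

def q(t):
    return 2*t - 1

# Step 1: sample each polynomial at -1, 0, 1 to get a vector in R^3.
t = np.array([-1.0, 0.0, 1.0])
p_vec = p(t)   # [ 4.,  1., 2.]
q_vec = q(t)   # [-3., -1., 1.]

# Step 2: the inner product is the dot product of the sample vectors.
print(np.dot(p_vec, q_vec))   # 4*(-3) + 1*(-1) + 2*1 = -11.0
```

Note that three samples at distinct points determine a polynomial of degree at most 2 uniquely, so this really is an inner product on the span of $\{1, t, t^2\}$: the only such polynomial whose sample vector is zero is the zero polynomial.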


There are 3 best solutions below

Answer 1:

I think you are getting bogged down in switching back and forth between different spaces. Orthogonality is always defined within a single space. Looking at an example like yours: if you have a space with two orthonormal vectors $v_1, v_2$, and $(a_1,a_2)$ and $(b_1,b_2)$ are two orthogonal vectors in $\mathbb{R}^2$, then $a_1v_1+a_2v_2$ is orthogonal to $b_1v_1+b_2v_2$. That comes about by expanding the inner product using bilinearity: the cross terms involving both $v_1$ and $v_2$ vanish by their orthogonality, and the remaining terms pick up $\|v_1\|^2 = \|v_2\|^2 = 1$. Thus $(a_1v_1+a_2v_2,\, b_1v_1+b_2v_2) = a_1 b_1 + a_2 b_2 = 0$ by the orthogonality in the plane. But this only works if $v_1, v_2$ are orthogonal and have the same norm.
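A small numerical check of this expansion, with an arbitrarily chosen orthonormal pair in $\mathbb{R}^3$:

```python
import numpy as np

# An orthonormal pair in R^3 (any orthonormal pair would do).
v1 = np.array([1.0,  1.0, 0.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

a1, a2 = 2.0, 2.0    # coordinates of a1*v1 + a2*v2
b1, b2 = -1.0, 1.0   # coordinates of b1*v1 + b2*v2

# Inner product computed in the big space vs. dot product of coordinates:
lhs = np.dot(a1*v1 + a2*v2, b1*v1 + b2*v2)
rhs = a1*b1 + a2*b2
print(lhs, rhs)   # equal; both 0 here, so orthogonality transfers
```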

I don't think I understand your question about sampled functions.

Answer 2:

Your $u$ and $v$ are finite linear combinations of some basis functions.

The sample vector with components $f_i = f(t_i)$ is in general only an approximation of $f$, and likewise the one with components $g_i = g(t_i)$ is an approximation of $g$.

Whether the original functions can be reconstructed exactly depends on the properties of $f$ and $g$ and on the number of samples used; see, e.g., the Whittaker–Shannon interpolation formula.
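For a band-limited function, the Whittaker–Shannon formula $f(t) = \sum_n f(nT)\,\mathrm{sinc}\!\big((t-nT)/T\big)$ recovers $f$ from its samples. A truncated numerical sketch (finitely many samples instead of the infinite sum, so the reconstruction is only approximate):

```python
import numpy as np

T = 0.1                        # sample spacing: 10 Hz, well above Nyquist for a 1 Hz sine
n = np.arange(-200, 201)       # truncation to finitely many samples
samples = np.sin(2*np.pi * n * T)

def reconstruct(t):
    # np.sinc(x) = sin(pi*x)/(pi*x), which matches the formula's sinc
    return np.sum(samples * np.sinc((t - n*T) / T))

t0 = 0.123                     # a point strictly between sample instants
print(reconstruct(t0), np.sin(2*np.pi * t0))   # nearly equal
```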

So it could be the case (or not) that some function $f$ has an exact representation as $u$, and thus the two coordinates you mention, as well as a set of samples, which can be seen as $N$ coordinates.
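One concrete link between the two kinds of coordinates, assuming the integral ($L^2$) inner product on $[-\pi,\pi]$ (my assumption for this sketch): the sampled dot product, scaled by the sample spacing, is a Riemann sum for the integral inner product.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 1001)   # 1001 equally spaced samples
dt = t[1] - t[0]

# The raw dot product of sample vectors grows with the number of samples,
# but scaled by the spacing it becomes a Riemann sum for ∫ f g dt:
print(np.dot(np.sin(t), np.sin(t)) * dt)   # ≈ pi  (= ∫ sin² over [-pi, pi])
print(np.dot(np.sin(t), np.cos(t)) * dt)   # ≈ 0   (sin ⊥ cos on [-pi, pi])
```

So with enough samples, orthogonality in the sampled sense and in the integral sense agree up to discretization error.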

Answer 3:

You can define the inner product between vectors (functions, in this case) in many ways, as long as the axioms are satisfied (see inner product space). One way is to take the integral of the product over a finite range; others multiply a third, fixed weight function into the integrand, and so on. These are all different inner product spaces (since the inner product differs), even if the underlying vector space (functions, in your example) is the same; it makes little sense to use the inner product of one in the other.
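For instance, $f(t) = 1$ and $g(t) = t$ are orthogonal on $[-1,1]$ under the plain integral inner product, but not under a weighted one. The weight $w(t) = t + 2$ below is a hypothetical choice of mine, picked only because it stays positive on $[-1,1]$:

```python
import numpy as np

t = np.linspace(-1, 1, 200001)
dt = t[1] - t[0]
f = np.ones_like(t)   # f(t) = 1
g = t                 # g(t) = t

# Unweighted inner product ∫ f g dt on [-1, 1], as a Riemann sum:
print(np.sum(f * g) * dt)        # ≈ 0: orthogonal in this space

# Weighted inner product ∫ f g w dt with w(t) = t + 2 (hypothetical weight):
w = t + 2
print(np.sum(f * g * w) * dt)    # ≈ 2/3: NOT orthogonal in this space
```

Same two functions, same vector space, different inner products, different answers about orthogonality.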