In a rectangular Cartesian coordinate system I can find the components of a vector along the axes by orthogonal projection, since it coincides with parallel projection there. But in a non-rectangular Cartesian coordinate system the components of a vector can be found only by parallel projection onto the respective axes. So what's the point of orthogonal projection in general?
What is the significance of orthogonal projection in a non-rectangular Cartesian coordinate system?
297 views. Asked by Bumbble Comm. There are 2 best solutions below.
Calling a coordinate system “non-rectangular” involves a hidden assumption: that the vector space is already equipped with some inner product that defines angles and lengths, effectively giving the space (if finite-dimensional) a Euclidean geometry. It looks like that’s what you have in mind in asking that question: you’re starting with the Euclidean space $\mathbb R^n$, which comes pre-equipped with structure beyond that of a bare vector space. That structure includes a “natural” inner product unsurprisingly called the Euclidean scalar product. Expressed in coordinates relative to the standard basis, it’s the familiar dot product $\langle\mathbf x,\mathbf y\rangle = \sum_ix_iy_i$, but in other coordinate systems it might be expressed by more complex-looking bilinear forms. When working with $\mathbb R^n$, then, it’s often tacitly understood that the Euclidean scalar product is meant when talking about orthogonality.
Other scalar products can be defined on this space, though. Using the example in your question, the coordinates of a vector are its scalar projections onto the basis vectors. If this basis isn’t orthonormal (with respect to the “standard” inner product), then those projections aren’t going to be orthogonal—again relative to the “standard” inner product. However, one can define an inner product relative to which these projections are orthogonal. (Interestingly, but not coincidentally, directions that are orthogonal under that inner product correspond to pairs of conjugate diameters of a certain ellipse.)
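The claim above can be checked numerically. The following is a minimal sketch with a made-up skew basis in the plane (the matrix $B$ and its columns are my choice, not part of the answer): with $M = (BB^T)^{-1}$, the bilinear form $\langle x, y\rangle = x^T M y$ is an inner product under which the skew basis is orthonormal.

```python
import numpy as np

# Sketch: a skew (non-rectangular) basis in the plane, as the columns of B.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # b1 = (1, 0), b2 = (1, 1)
b1, b2 = B[:, 0], B[:, 1]

# Under the standard dot product the basis vectors are not orthogonal:
print(b1 @ b2)                      # 1.0, not 0

# But <x, y> = x^T M y with M = (B B^T)^{-1} is an inner product
# under which this basis is orthonormal, since B^T M B = I.
M = np.linalg.inv(B @ B.T)
print(B.T @ M @ B)                  # identity matrix (up to rounding)
```

The same construction works for any invertible $B$: the Gram matrix of the basis with respect to $\langle\cdot,\cdot\rangle_M$ is exactly $B^T M B$.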
There are numerous other applications of orthogonal projection relative to an arbitrary inner product. Without going into too much detail, I’ll give you an example from the theory of electrical networks. For a network of resistors and current and voltage sources, voltage drop and current distributions are functions on the set of $n$ resistors, which are conveniently represented as elements of $\mathbb R^n$. Kirchhoff’s laws determine which distributions are legal and the topology of the network together with Ohm’s law induces a “natural” inner product on this space that is almost never the standard Euclidean scalar product (which would correspond to all of the resistances being 1 ohm). It turns out that for any specific network the legal current and voltage drop distributions are unique, so a basic problem is to find these distributions. One method (due to Weyl) is to take an arbitrary non-zero current distribution and then project it orthogonally, relative to the network’s “natural” inner product, on a particular subspace.
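Setting up Weyl's method for a real network is beyond a short sketch, but the key operation — orthogonal projection relative to a weighted, non-Euclidean inner product — can be illustrated with made-up numbers. Everything below (the resistances $R$, the subspace $S$, the starting distribution $x$) is hypothetical and chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical resistances R_k induce the inner product
# <x, y> = sum_k R_k x_k y_k on distributions x, y in R^3.
R = np.array([1.0, 2.0, 5.0])     # made-up resistances (ohms)
W = np.diag(R)

# An arbitrary 2-dimensional subspace, spanned by the columns of S.
S = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

# Orthogonal projection onto col(S) relative to <.,.>_W:
#   p = S (S^T W S)^{-1} S^T W x
x = np.array([1.0, 0.0, 0.0])     # an arbitrary starting distribution
p = S @ np.linalg.solve(S.T @ W @ S, S.T @ W @ x)

# The residual x - p is W-orthogonal to every column of S:
print(S.T @ W @ (x - p))          # ~ [0, 0]
```

Note that for $R \ne (1,1,\dots,1)$ this projection differs from the Euclidean one, which is the point of the network example.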
There are also spaces for which there’s no obvious choice for a “natural” inner product. Take, for instance, the space of polynomials with real coefficients of degree $\le n$ for some fixed $n$. A common inner product is $(p,q) = \int_0^1 p(t)q(t)\,dt$, and another is $(p,q) = \int_{-1}^1 p(t)q(t)\,dt$. There’s no particular reason per se to prefer one over the other—it depends more on what you want to do with these polynomials. (There are also vector spaces for which it’s not possible to define an inner product at all, but those have other issues as well.)
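To see concretely that the two polynomial inner products disagree, one can compare their Gram matrices on the monomials $1, t, t^2$ (a small sketch; the entries are just $\int t^{i+j}\,dt$ over the respective interval):

```python
import numpy as np

# Gram matrices of the monomials 1, t, t^2 under the two inner products;
# entry (i, j) is the integral of t^(i+j) over [a, b].
def gram(a, b, n=3):
    return np.array([[(b**(i + j + 1) - a**(i + j + 1)) / (i + j + 1)
                      for j in range(n)] for i in range(n)])

G01 = gram(0.0, 1.0)    # (p, q) = integral of p*q over [0, 1]
G11 = gram(-1.0, 1.0)   # (p, q) = integral of p*q over [-1, 1]

# 1 and t are orthogonal on [-1, 1] (odd integrand) but not on [0, 1]:
print(G01[0, 1], G11[0, 1])   # 0.5  0.0
```

So which polynomials count as "orthogonal" genuinely depends on the chosen inner product, as the answer says.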
All of that aside, speaking in very broad terms, the salient feature of orthogonality (and thus also of orthogonal projection) is that the contributions of orthogonal components to a whole are independent. Going back to your original example of Cartesian coordinate systems: in a rectangular coordinate system, changing the value of one coordinate doesn't affect any of the others, since the orthogonal projections onto the other basis vectors don't change.
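This independence is easy to demonstrate numerically. In the sketch below (the specific vectors are my choice for illustration), changing one coordinate relative to an orthonormal basis leaves the projections onto the other basis vectors untouched, while the same change relative to a skew basis alters them:

```python
import numpy as np

# Orthonormal basis: changing one coordinate leaves the other projections alone.
u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v = 2.0 * u1 + 3.0 * u2
v_changed = 5.0 * u1 + 3.0 * u2      # change only the first coordinate
print(v @ u2, v_changed @ u2)        # 3.0 3.0  (unchanged)

# Skew basis: the same kind of change alters the projection onto b2.
b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0]) / np.sqrt(2)
w = 2.0 * b1 + 3.0 * b2
w_changed = 5.0 * b1 + 3.0 * b2      # change only the first coordinate
print(w @ b2, w_changed @ b2)        # the two values differ
```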
There are many reasons to want to find an orthogonal projection of a vector $\vec{v}$, some of which have to do with the fact that, with respect to a given inner product, the projection is the closest element of the subspace being projected onto to $\vec{v}$. You can think of an inner product as simply a function taking two elements (call them $A$ and $B$) of a vector space $\mathcal{V}$ to a scalar such that:
Positive Definiteness: $A \cdot A = ||A||^2 \geq 0$, with equality if and only if $A$ is the zero element of $\mathcal{V}$
Linearity: $(A + B) \cdot C = A\cdot C+ B \cdot C$
Symmetry: $A \cdot B = B \cdot A$
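The "closest element" property mentioned above can be checked directly. Here is a small sketch with the standard dot product on $\mathbb{R}^4$ and a randomly chosen 2-dimensional subspace (all specifics are mine, picked for the demonstration): no element of the subspace gets closer to $\vec{v}$ than its orthogonal projection.

```python
import numpy as np

# Sketch: the orthogonal projection of v onto a subspace W is the
# element of W closest to v (standard dot product on R^4).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))   # orthonormal basis of W
v = rng.standard_normal(4)

w = U @ (U.T @ v)     # orthogonal projection of v onto W

# Any other element of W is at least as far from v:
for _ in range(1000):
    other = U @ rng.standard_normal(2)             # random element of W
    assert np.linalg.norm(v - w) <= np.linalg.norm(v - other) + 1e-12
print("the projection minimizes the distance to v")
```

The minimizing property follows from the residual $v - w$ being orthogonal to $W$, which the projection formula guarantees by construction.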
I'll give you an example that does not involve the "geometric interpretation" of vectors. Consider a smooth function $f(x)$ defined on $-1 \leq x \leq 1$. In fact, it can be shown that any smooth function on $-1 \leq x \leq 1$ can be written as a (possibly infinite) linear combination of $f_0(x) = 1/\sqrt{2}$, $f_k(x) = \sin(k\pi x)$, and $g_k(x) = \cos (k \pi x)$ with $k \in \mathbb{N}^+$. In other words, $\{f_0(x), f_k(x), g_k(x) \text{ } | \text{ } k \in \mathbb{N}^+\}$ forms a basis for the smooth functions on this interval. You can define many inner product structures on this space; I arbitrarily choose this one (you can verify it satisfies the above properties): $$f(x) \cdot g(x) = \displaystyle \int_{-1}^1 f(x)g(x) \, dx$$
Let's consider a finite-dimensional subspace of this space (with the same inner product). In particular, let's look at the $(2n + 1)$-dimensional subspace $\mathcal{W} = \text{span}(\{f_0(x), f_k(x), g_k(x) \text{ } | \text{ } 1 \leq k \leq n \})$. My question becomes: how can we best approximate a function $f(x)$ using only these functions?
You might have guessed it: we look for the orthogonal projection of our function $f(x)$ onto $\mathcal{W}$! If you've studied orthogonal projections, you will know that the projection of a vector $\vec{v}$ onto a subspace with orthonormal basis $\{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_n \}$ is $\vec{w} = \sum (\vec{u}_i \cdot \vec{v}) \vec{u}_i$. Applying this idea to our vector space of smooth functions on $-1 \leq x \leq 1$, we get that the best approximation is $$f(x) \approx \displaystyle \frac{a_0}{\sqrt{2}} + \sum_{i=1}^n a_i \cos (i \pi x) + \sum_{i=1}^n b_i \sin (i \pi x),$$ with $$a_0 = f_0 \cdot f = \int_{-1}^1 \frac{f(x)}{\sqrt{2}} \, dx, \quad a_i = g_i \cdot f = \int_{-1}^1 f(x) \cos(i \pi x) \, dx, \quad b_i = f_i \cdot f = \int_{-1}^1 f(x) \sin(i \pi x) \, dx$$ for $1 \leq i \leq n$ (this $i$ is an index, not to be confused with the imaginary unit). If this looks familiar to you, you're not seeing things! This is the well-known Fourier approximation of a smooth function.
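As a numerical sanity check, here is a sketch that carries out this projection for the example $f(x) = x^2$ (my choice; any smooth function with $f(-1) = f(1)$ works similarly), computing the coefficients by a simple trapezoid-rule quadrature:

```python
import numpy as np

# Sketch: Fourier approximation of f(x) = x^2 on [-1, 1] by orthogonal
# projection onto span{1/sqrt(2), cos(i*pi*x), sin(i*pi*x), 1 <= i <= n}.
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]

def inner(g, h):
    """(g, h) = integral of g*h over [-1, 1], via the trapezoid rule."""
    y = g * h
    return np.sum((y[:-1] + y[1:]) / 2) * dx

fx = x**2
n = 10
f0 = np.full_like(x, 1.0 / np.sqrt(2.0))
approx = inner(f0, fx) * f0                      # a0 / sqrt(2) term
for i in range(1, n + 1):
    ci, si = np.cos(i * np.pi * x), np.sin(i * np.pi * x)
    approx += inner(ci, fx) * ci + inner(si, fx) * si

err = np.max(np.abs(fx - approx))
print(err)   # roughly 0.04 for n = 10; it shrinks as n grows
```

The sine coefficients come out essentially zero here because $x^2$ is even; only the constant and cosine terms contribute.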
So we've seen how orthogonal projections have more than geometric significance!