According to the definition of orthogonality (on finite-dimensional vector spaces):
Given an inner product space, two vectors are orthogonal if their inner product is zero.
As an example, assuming the inner product is the standard Euclidean inner product, the two vectors $(1,0)$ and $(0,1)$ in $\mathbb{R}^2$ are obviously orthogonal in the standard basis. However, if we change to another basis of two linearly independent vectors which are not orthogonal in the standard $\mathbb{R}^2$ basis, then these new basis vectors would also be represented as $(1,0)$ and $(0,1)$ in the new basis. So, keeping the same Euclidean inner product, they would be orthogonal in the new basis, although they are not orthogonal in the standard $\mathbb{R}^2$ basis. This would mean that orthogonality depends on the basis chosen, which seems "strange" to me.
Can you please explain what is wrong in the above reasoning (assuming it is wrong)?
Thanks.
Note: I have read the thread with the very similar title, "change of basis and inner product in non orthogonal basis", but that doesn't answer my question.
The key point to understand here is that you are really dealing with two copies of $\mathbb R^2$ here, although that's not obvious when using the standard basis.
The first $\mathbb R^2$ is your vector space. Let's write this vector space and everything in it in blue. This first $\color{blue}{\mathbb R^2}$ is equipped with a vector space structure and additionally with the dot product $\color{blue}{\mathbf x\cdot\mathbf y = x_1y_1+x_2y_2}$.
Now, as soon as you choose a basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$ that spans $\color{blue}{\mathbb R^2}$, you can write every vector $\color{blue}{\mathbf x}\in\color{blue}{\mathbb R^2}$ in a unique way as $\color{blue}{\mathbf x}=\color{red}{\xi_1}\color{blue}{\mathbf b_1}+\color{red}{\xi_2}\color{blue}{\mathbf b_2}$. Note that $\color{red}{\xi_1}$ and $\color{red}{\xi_2}$ are not the components of the vector in $\color{blue}{\mathbb R^2}$, but are basis-dependent coordinates.
But you always need two of them, and when doing vector addition and scalar multiplication, you'll find they behave exactly like the components of a vector should. Therefore, it does make sense to consider them as part of a $\color{red}{\mathbb R^2}$, which, however, is a different $\mathbb R^2$ than the original $\color{blue}{\mathbb R^2}$ we started with. In particular, the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product.
The basis then defines a linear map $\beta$ from the coordinate $\color{red}{\mathbb R^2}$ to the original $\color{blue}{\mathbb R^2}$ given by $$\beta(\color{red}{\boldsymbol\xi})=\color{red}{\xi_1}\color{blue}{\mathbf b_1} + \color{red}{\xi_2}\color{blue}{\mathbf b_2}.$$
Now, remember that I said that the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product. That doesn't mean we cannot give it one. That said, if we do give it an inner product, then we want to do it in a way that the product is preserved by the map $\beta$, that is, we want to have $$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \beta(\color{red}{\boldsymbol\xi})\color{blue}{\cdot}\beta(\color{red}{\boldsymbol\eta})$$ where $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}$ denotes the inner product in the coordinate $\color{red}{\mathbb R^2}$. By inserting the explicit formula of $\beta$, one easily sees that $$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \sum_{j,k=1}^2 (\color{blue}{\mathbf b_j\cdot\mathbf b_k})\color{red}{\xi_j\eta_k}.$$
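To make this concrete, here is a quick numerical sketch (not part of the argument above, and the particular basis vectors are arbitrary example choices) that checks the formula with NumPy: the sum $\sum_{j,k}(\mathbf b_j\cdot\mathbf b_k)\,\xi_j\eta_k$ is just $\boldsymbol\xi^\top G\,\boldsymbol\eta$ with $G$ the Gram matrix of the basis, and it agrees with the dot product of the mapped vectors.

```python
import numpy as np

b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])          # example basis, not orthogonal to b1
B = np.column_stack([b1, b2])      # the map beta as a matrix: beta(xi) = B @ xi

xi  = np.array([2.0, -1.0])        # coordinates in the "red" R^2
eta = np.array([0.5,  3.0])

# Left-hand side: transport through beta, then take the blue dot product
lhs = np.dot(B @ xi, B @ eta)

# Right-hand side: Gram-matrix formula, G[j,k] = b_j . b_k
G = B.T @ B
rhs = xi @ G @ eta

assert np.isclose(lhs, rhs)        # both equal 0.5 for these choices
```

The Gram matrix $G = B^\top B$ encodes all the pairwise products $\mathbf b_j\cdot\mathbf b_k$, so the induced inner product is completely determined by it.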
Now, note that if $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$ is not an orthogonal basis, then $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}\ne\color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$; and, we confirm that the inner product on the coordinate $\color{red}{\mathbb R^2}$ explicitly depends on the chosen basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$. This is not surprising, because the vector in $\color{blue}{\mathbb R^2}$ that those coordinates (i.e., $\color{red}{\xi_1,\xi_2,\eta_1,\eta_2}$) describe depends on the basis chosen. Thus, in general, different pairs of vectors have different inner products.
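This is exactly what resolves the question: in a non-orthogonal basis, the coordinate vectors $(1,0)$ and $(0,1)$ are not orthogonal under the induced inner product. A small sketch (the basis below is an arbitrary example choice):

```python
import numpy as np

b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])          # b1 . b2 = 1 != 0, so the basis is not orthogonal
G = np.array([[b1 @ b1, b1 @ b2],
              [b2 @ b1, b2 @ b2]]) # Gram matrix of the basis

e1 = np.array([1.0, 0.0])          # coordinates of b1 in the new basis
e2 = np.array([0.0, 1.0])          # coordinates of b2 in the new basis

# Induced inner product <xi, eta> = xi^T G eta:
print(e1 @ G @ e2)                 # 1.0, not zero
```

So $(1,0)$ and $(0,1)$ in coordinates still represent $\mathbf b_1$ and $\mathbf b_2$, and the induced product correctly reports that they are not orthogonal; the apparent paradox only arises if one keeps using the formula $\xi_1\eta_1+\xi_2\eta_2$, which is not the inner product transported by $\beta$.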
Note that by definition of the inner product, with $\beta(\color{red}{\boldsymbol\xi})=\color{blue}{\mathbf x}$ and $\beta(\color{red}{\boldsymbol\eta})=\color{blue}{\mathbf y}$ it is of course still true that $$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{\textbf x\cdot\textbf y} = \color{blue}{x_1y_1} + \color{blue}{x_2y_2}.$$ But in general, $\color{blue}{x_1y_1} + \color{blue}{x_2y_2} \ne \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$.
However, if you choose the standard basis $\color{blue}{\mathbf b_k}=\color{blue}{\mathbf e_k}$, then you obviously have $\color{red}{\xi_k}=\color{blue}{x_k}$ and $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{x_1y_1+x_2y_2} = \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}.$ This is precisely why it is easy to miss the fact that you are working with two different $\mathbb R^2$ when using the standard basis.
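As a final sketch of this special case: with the standard basis the Gram matrix is the identity, so the induced inner product reduces to the usual dot product on the coordinates.

```python
import numpy as np

B = np.eye(2)                      # standard basis e1, e2 as columns
G = B.T @ B                        # Gram matrix = identity

xi  = np.array([3.0, 4.0])
eta = np.array([-1.0, 2.0])

# Induced product coincides with the plain dot product of the coordinates
assert np.isclose(xi @ G @ eta, xi @ eta)
```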