Consider an $n$-dimensional vector space $V$ with an inner product $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$ (see Definition 3.3) and an ordered basis $B = (b_1,\ldots, b_n)$ of $V$. Recall from Section 2.6.1 that any vectors $x, y \in V$ can be written as linear combinations of the basis vectors so that $x = \sum_{i=1}^n \psi_i b_i \in V$ and $y = \sum_{j=1}^n \lambda_j b_j \in V$ for suitable $\psi_i, \lambda_j \in \mathbb{R}$. Due to the bilinearity of the inner product, it holds for all $x, y \in V$ that
$$ \langle x, y\rangle = \langle\sum_{i=1}^n \psi_i b_i , \sum_{j=1}^n \lambda_j b_j \rangle = \sum_{i=1}^n\sum_{j=1}^n \psi_i \langle b_i, b_j\rangle \lambda_j = \hat{x}^T A \hat{y}$$
where $A_{ij} := \langle b_i,b_j\rangle$ and $\hat{x}, \hat{y}$ are the coordinates of $x$ and $y$ with respect to the basis $B$. This implies that the inner product $\langle\cdot,\cdot\rangle$ is uniquely determined through $A$.
Source: Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong (2020)
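Before the concrete example, here is a quick numerical sanity check of the quoted identity $\langle x,y\rangle = \hat{x}^T A \hat{y}$ (a sketch with numpy; the inner product is the ordinary dot product on $\mathbb{R}^3$ and the basis is an arbitrary invertible one I chose for illustration):

```python
import numpy as np

# An arbitrary (invertible) basis of R^3, basis vectors as columns
B = np.array([[1.0, 0.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

# Gram matrix A_ij = <b_i, b_j> under the standard dot product
A = B.T @ B

x = np.array([3.0, -1.0, 2.0])
y = np.array([0.5, 4.0, 1.0])

# Coordinates of x and y w.r.t. the basis B: solve B @ x_hat = x
x_hat = np.linalg.solve(B, x)
y_hat = np.linalg.solve(B, y)

# Bilinearity: <x, y> computed directly equals x_hat^T A y_hat
assert np.isclose(x @ y, x_hat @ A @ y_hat)
```

The same check works for any basis and any inner product; only the Gram matrix $A$ changes.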
Let us consider a basis $\mathcal{B}:=\left\{b_1=\begin{pmatrix}1\\1\end{pmatrix}_\mathcal{S},\,b_2=\begin{pmatrix}0\\2\end{pmatrix}_\mathcal{S}\right\}$ of $\mathbb{R}^2$ to make it a bit more interesting, and let $A=\begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B}$ be the matrix that defines the inner product relative to the basis $\mathcal{B}$. Here $\mathcal{S}$ denotes the standard basis $\mathcal{S} := \left\{ \begin{pmatrix}1\\0 \end{pmatrix}_\mathcal{S},\begin{pmatrix}0\\1 \end{pmatrix}_\mathcal{S} \right\}$. Then \begin{align*} \langle b_1,b_1 \rangle &= b_1^T A b_1=\begin{pmatrix}1,0\end{pmatrix}_\mathcal{B}\begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B}\begin{pmatrix}1\\0\end{pmatrix}_\mathcal{B}=(1,0)_\mathcal{B}\cdot \begin{pmatrix}1\\0\end{pmatrix}_\mathcal{B}=1\\ \langle b_1,b_2 \rangle &= b_1^T A b_2=\begin{pmatrix}1,0\end{pmatrix}_\mathcal{B}\begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B}\begin{pmatrix}0\\1\end{pmatrix}_\mathcal{B}=(1,0)_\mathcal{B}\cdot \begin{pmatrix}0\\1\end{pmatrix}_\mathcal{B}=0\\ \langle b_2,b_1 \rangle &= b_2^T A b_1=\begin{pmatrix}0,1\end{pmatrix}_\mathcal{B}\begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B}\begin{pmatrix}1\\0\end{pmatrix}_\mathcal{B}=(0,3)_\mathcal{B}\cdot \begin{pmatrix}1\\0\end{pmatrix}_\mathcal{B}=0\\ \langle b_2,b_2 \rangle &= b_2^T A b_2=\begin{pmatrix}0,1\end{pmatrix}_\mathcal{B}\begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B}\begin{pmatrix}0\\1\end{pmatrix}_\mathcal{B}=(0,3)_\mathcal{B}\cdot \begin{pmatrix}0\\1\end{pmatrix}_\mathcal{B}=3 \end{align*} Note that I originally defined the vectors $b_1$ and $b_2$ in standard coordinates, but defined the inner product relative to the ordered basis $\left\{b_1,b_2\right\}$. This means \begin{align*} b_1&=1\cdot b_1+0\cdot b_2 =\begin{pmatrix}1,0\end{pmatrix}_\mathcal{B}\\ b_2&=0\cdot b_1+1\cdot b_2 =\begin{pmatrix}0,1\end{pmatrix}_\mathcal{B} \end{align*} relative to this basis, which is why I multiplied $A$ by the unit vectors $\begin{pmatrix}1,0\end{pmatrix}_\mathcal{B}$ and $\begin{pmatrix}0,1\end{pmatrix}_\mathcal{B}$ in that basis.
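These four products can be checked mechanically: in $\mathcal{B}$-coordinates the basis vectors are the unit vectors, so $\langle b_i,b_j\rangle = e_i^T A e_j$ just reads off the entries of $A$ (a small numpy sketch):

```python
import numpy as np

# Inner-product matrix relative to the basis B
A = np.array([[1.0, 0.0],
              [0.0, 3.0]])

e1 = np.array([1.0, 0.0])  # b1 in B-coordinates
e2 = np.array([0.0, 1.0])  # b2 in B-coordinates

# <b_i, b_j> = e_i^T A e_j reproduces the entries of A
print(e1 @ A @ e1, e1 @ A @ e2, e2 @ A @ e1, e2 @ A @ e2)  # 1.0 0.0 0.0 3.0
```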
Let us now consider the standard basis vectors as we know them: \begin{align*} x&=\begin{pmatrix}1,0\end{pmatrix}_\mathcal{S}=b_1-\frac{1}{2}b_2\\ y&=\begin{pmatrix}0,1\end{pmatrix}_\mathcal{S}=\frac{1}{2}b_2 \end{align*} which means that $x=\begin{pmatrix}1\\-\frac{1}{2}\end{pmatrix}_\mathcal{B}$ and $y=\begin{pmatrix}0\\\frac{1}{2}\end{pmatrix}_\mathcal{B}$ relative to the basis $\mathcal{B}=\left\{b_1,b_2\right\}.$
The coordinates of $(x,y)$ relative to the standard basis of $\mathbb{R}^2$ are $\left(\begin{pmatrix}1\\0\end{pmatrix}_\mathcal{S},\begin{pmatrix}0\\1\end{pmatrix}_\mathcal{S}\right)$ and the coordinates of $(x,y)$ relative to the basis $\left\{b_1,b_2\right\}$ of $\mathbb{R}^2$ are $\left(\begin{pmatrix}1\\-\frac{1}{2}\end{pmatrix}_\mathcal{B},\begin{pmatrix}0\\\frac{1}{2}\end{pmatrix}_\mathcal{B}\right).$ The entries of these vectors are called the components with respect to the corresponding basis vectors. E.g. $\frac{1}{2}$ is the component of the vector $y$ along the second basis vector $b_2.$ Its first component (along the first basis vector $b_1$) is zero, thus $y=0\cdot b_1+\frac{1}{2}b_2.$
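Finding these $\mathcal{B}$-coordinates is just solving a small linear system: stack $b_1, b_2$ (in standard coordinates) as the columns of a matrix $B$ and solve $B\hat{v} = v$. A numpy sketch:

```python
import numpy as np

# b1 = (1,1), b2 = (0,2) in standard coordinates, stacked as columns
B = np.array([[1.0, 0.0],
              [1.0, 2.0]])

x = np.array([1.0, 0.0])  # first standard basis vector
y = np.array([0.0, 1.0])  # second standard basis vector

x_hat = np.linalg.solve(B, x)  # coordinates (1, -1/2) relative to B
y_hat = np.linalg.solve(B, y)  # coordinates (0,  1/2) relative to B
```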
Is it confusing? Yes, absolutely. That's why it is so important to always note the basis relative to which a vector is expressed. Different bases mean different components, which is another word for the coordinates in that basis. Let us finally compute \begin{align*} \langle x,y \rangle&=\left\langle \sum_{i=1}^2\psi_ib_i,\sum_{j=1}^2\lambda_jb_j \right\rangle=\sum_{i=1}^2\sum_{j=1}^2\psi_i \lambda_j\langle b_i,b_j \rangle\\ &=\psi_1\lambda_1\langle b_1,b_1 \rangle+\psi_1\lambda_2\langle b_1,b_2 \rangle+\psi_2\lambda_1\langle b_2,b_1 \rangle+\psi_2\lambda_2\langle b_2,b_2 \rangle\\ &=1\cdot 0\cdot 1+1\cdot \dfrac{1}{2}\cdot 0 + \left(-\dfrac{1}{2}\right)\cdot 0 \cdot 0+ \left(-\dfrac{1}{2}\right)\cdot \dfrac{1}{2} \cdot 3\\ &=-\dfrac{3}{4}\\[6pt] \hat{x}^T A\hat{y}&=\left(1,-\dfrac{1}{2}\right)_\mathcal{B}\cdot \begin{pmatrix}1&0\\0&3\end{pmatrix}_\mathcal{B} \cdot \begin{pmatrix}0\\\dfrac{1}{2}\end{pmatrix}_\mathcal{B}=\left(1,-\dfrac{3}{2}\right)_\mathcal{B}\cdot \begin{pmatrix}0\\\dfrac{1}{2}\end{pmatrix}_\mathcal{B}=-\dfrac{3}{4} \end{align*} ... relative to the basis $\left\{b_1,b_2\right\}.$ The inner product is defined by $A=(A_{ij})=\left(\langle b_i,b_j \rangle\right).$
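The final matrix computation is a one-liner to verify (numpy, using the $\mathcal{B}$-coordinates derived above):

```python
import numpy as np

# Inner-product matrix relative to the basis B
A = np.array([[1.0, 0.0],
              [0.0, 3.0]])

x_hat = np.array([1.0, -0.5])  # x = e1 in B-coordinates
y_hat = np.array([0.0, 0.5])   # y = e2 in B-coordinates

print(x_hat @ A @ y_hat)  # -0.75, i.e. <x, y> = -3/4
```

Note that under this inner product the standard basis vectors are not orthogonal, even though $A$ is diagonal relative to $\mathcal{B}$.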
You may be used to the standard basis $\left\{\begin{pmatrix}1\\0\\ \vdots \end{pmatrix},\begin{pmatrix}0\\1\\ \vdots \end{pmatrix},\ldots\right\}$ and an inner product given by $\begin{pmatrix}1&0&\ldots \\ 0&1&\ldots \\ \vdots& \vdots&\ddots \end{pmatrix}$, which would lead to $\langle v,w \rangle= v^T\cdot I \cdot w=\sum_{i,j}v_i \delta_{ij}w_j=\sum_{i=1}^nv_i w_i.$ But there is no property in the definition that would require that this is the only way to do it. We can have different bases and different (symmetric, positive definite) matrices $A$.
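In other words, the familiar dot product is just the special case $A = I$ of the general formula (a trivial numpy check):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# With A = I the general formula v^T A w collapses to the usual dot product
assert np.isclose(v @ np.eye(3) @ w, v @ w)  # both equal 32
```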
The notation $x=\begin{pmatrix}x_1\\x_2\\ \vdots\\x_n \end{pmatrix}_{\left\{b_1,b_2,\ldots,b_n\right\}}=\sum_{i=1}^nx_ib_i$ is usually shortened to $x=(x_1,x_2,\ldots,x_n)$ for better readability, under the assumption that the choice of basis is clear. I have noted the basis explicitly above, and you certainly noticed that it is not pleasant to read all the time. However, a strict notation would always mention the basis, since the coordinates (components) are always relative to a given basis.