This snapshot is from the Mathematics for Machine Learning book, from the section on Symmetric, Positive Definite Matrices:
I have no problem with that paragraph at all except for the equation in the red rectangle, specifically, this part over the blue arrow:
where $A_{ij} := \langle b_i , b_j \rangle$ and $\hat{x}, \hat{y}$ are the coordinates of $x$ and $y$ with respect to the basis $B$.
I can't understand what has been done to reach this equality, can anyone simplify this for me?


If $\alpha=[\alpha_{ij}]_{ij}$, $\beta=[\beta_{ij}]_{ij}$ and $\gamma=[\gamma_{ij}]_{ij}$ are matrices of appropriate sizes, we can compute the triple product $\alpha\beta\gamma$ as the matrix $\alpha\beta\gamma=[\delta_{pq}]_{pq}$ where $$\delta_{pq}=\sum_i\sum_j\alpha_{pi}\beta_{ij}\gamma_{jq}\tag{$*$}$$ (I'm using $p$ and $q$ for the entry indices just to keep $i$ and $j$ free for the summations).
Use this with $\alpha=\hat{x}^\top$, $\beta=A$ and $\gamma=\hat{y}$. The resulting matrix $\hat{x}^\top A\hat{y}$ will be $1\times 1$, so it will be a number, i.e., you don't really need to care about $p$ and $q$ in $(*)$. This will give you precisely your problematic equality (in reverse order, but who cares).
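If it helps, here is a quick numeric sanity check of $(*)$ in the $1\times 1$ case (my own example, not from the book), comparing $\hat{x}^\top A\hat{y}$ against the explicit double sum:

```python
import numpy as np

# Random small example: x, y play the roles of x-hat, y-hat.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
A = rng.standard_normal((3, 3))
y = rng.standard_normal(3)

# Triple product x^T A y computed by matrix multiplication: a single number.
triple_product = x @ A @ y

# The same number via the double sum  sum_i sum_j x_i A_ij y_j  from (*).
double_sum = sum(x[i] * A[i, j] * y[j]
                 for i in range(3) for j in range(3))

assert np.isclose(triple_product, double_sum)
```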
As for the first equality, here's the definition of bilinearity: a map $\rho\colon V\times W\to\mathbb{R}$ between vector spaces is bilinear if it is linear in each argument separately, that is, $\rho(\lambda u+u',w)=\lambda\rho(u,w)+\rho(u',w)$ and $\rho(v,\lambda w+w')=\lambda\rho(v,w)+\rho(v,w')$ for all scalars $\lambda$ and all vectors $u,u',v\in V$ and $w,w'\in W$.
Now we should know that the inner product map $\langle\cdot,\cdot\rangle$ is bilinear, in the sense that if we define $\rho\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ as $\rho(x,y)=\langle x,y\rangle$, then $\rho$ will be bilinear in the sense above. The inner product map has some extra properties, such as being positive and non-degenerate (not every bilinear map satisfies these), but these properties do not matter for the first equality.
In any case, you can show that if $\rho$ is any bilinear map, between any vector spaces, we have $$\rho\left(\sum_i\lambda_i x_i,\sum_j\mu_j y_j\right)=\sum_i\sum_j\lambda_i\mu_j\rho(x_i,y_j)$$ Indeed, to prove this, first fix $y=\sum_j\mu_j y_j$ in the second entry, and use linearity of the first entry to obtain $\rho\left(\sum_i\lambda_i x_i,y\right)=\sum_i\lambda_i\rho(x_i,y)$. Now, for each $i$, use linearity of the second entry and the equation $y=\sum_j\mu_jy_j$ to obtain $\rho(x_i,y)=\sum_j\mu_j\rho(x_i,y_j)$, for each $i$.
The equation above is precisely what you need: use the inner product as $\rho$, $b_i$ as $x_i$, $b_j$ as $y_j$, $\psi_i$ as $\lambda_i$ and $\lambda_j$ as $\mu_j$.
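You can also check the whole chain numerically. The sketch below (my own example, not from the book) takes $\rho$ to be the standard dot product on $\mathbb{R}^3$, picks a random basis $b_1,b_2,b_3$ and coordinate vectors, and verifies that $\langle x,y\rangle$ equals $\hat{x}^\top A\hat{y}$ with $A_{ij}=\langle b_i,b_j\rangle$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))   # columns b_1, b_2, b_3 form a basis
psi = rng.standard_normal(3)      # coordinates of x in that basis
lam = rng.standard_normal(3)      # coordinates of y in that basis

x = B @ psi                       # x = sum_i psi_i b_i
y = B @ lam                       # y = sum_j lam_j b_j

# Left side: <x, y> computed directly.
lhs = x @ y

# Right side: sum_i sum_j psi_i lam_j <b_i, b_j> = psi^T A lam,
# where A is the Gram matrix A_ij = <b_i, b_j>.
A = B.T @ B
rhs = psi @ A @ lam

assert np.isclose(lhs, rhs)
```

Here the bilinear expansion is exactly what turns the left side into the right side: linearity in each argument pulls the coefficients $\psi_i$ and $\lambda_j$ out of the inner product, leaving the Gram matrix entries $\langle b_i,b_j\rangle$.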