So I have this example for solving a PDE in one of my lecture courses, and I can't figure out whether it's wrong or I'm missing something. The question is as follows:
PDE: $\quad u_t=\alpha^2 u_{xx} \quad \quad 0<x<1, \quad 0<t<\infty$
BCs: $\quad u(0,t)=0$
$ \quad \quad \quad u_x (1,t)+hu(1,t)=0$
IC: $\quad u(x,0)=x \quad \quad \quad 0 \le x\le 1 $
Step 1: They separate the PDE into ODEs using $u(x,t)=X(x)T(t)$. This yields $$\left\{ \begin{array}{l} T'-\mu\alpha^2T=0\\ X''-\mu X=0 \end{array} \right.$$
Step 2: They find the separation constant. They determine that $\mu<0$ and label it $\mu=-\lambda^2$. Rewriting the ODEs, they get a general solution of the form $u(x,t)=e^{-(\lambda \alpha)^2 t} (A \sin{\lambda x} +B \cos{\lambda x})$ and sub this general solution into the BCs to get $B=0$ and $-\tan{\lambda} = \lambda /h$ (the $A$ somehow doesn't matter and just disappears, I guess???)
The solutions $\lambda_n$ of this equation are what they introduce as the eigenvalues of the boundary-value problem (a term I've not heard since matrices), with corresponding eigenfunctions $X_n (x)=\sin{\lambda_n x}$.
Step 3: They find the fundamental solutions. They sum the infinitely many fundamental solutions, each with its own coefficient (as you normally would when doing the separation of variables technique), and evaluate the sum at $t=0$ so that it matches the IC, i.e. $u(x,0)=\displaystyle\sum_{n=1} ^{\infty} a_n \sin{(\lambda_n x)} = x$
Now some of this is new to me (mainly naming things as eigen-stuff), but I'm happy up to here; the next section is where I'm dubious. I don't see how they obtain the coefficients, because the eigenfunctions don't seem to be orthogonal. Is this right, and if not, is there a way to get the value of the coefficients?
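To make the eigenvalue condition concrete, here is a quick numerical sketch that finds the first few $\lambda_n$ from $-\tan\lambda = \lambda/h$, taking $h=1$ purely for illustration (the problem leaves $h$ general) and using SciPy's `brentq` root finder:

```python
import numpy as np
from scipy.optimize import brentq

h = 1.0  # illustrative value; the problem leaves h unspecified

# tan(lam) = -lam/h has the same positive roots as
# f(lam) = lam*cos(lam) + h*sin(lam), which has no tan singularities.
def f(lam):
    return lam * np.cos(lam) + h * np.sin(lam)

# Exactly one root per interval ((n - 1/2)*pi, (n + 1/2)*pi), n >= 1,
# since f = +h or -h alternately at the endpoints.
lams = [brentq(f, (n - 0.5) * np.pi + 1e-9, (n + 0.5) * np.pi - 1e-9)
        for n in range(1, 6)]
print(lams[:3])  # ≈ [2.0288, 4.9132, 7.9787] for h = 1
```

Note the $\lambda_n$ are not multiples of a common fundamental frequency, which is exactly why the usual Fourier orthogonality formulas don't obviously apply here.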

Eigenfunctions corresponding to different eigenvalues are orthogonal. Here is the standard proof: let $$ I_{mn}:=\int_0^1X_m(x)X_n(x)\,dx, \tag{1} $$ where $X_k(x):=\sin(\lambda_kx)$.

Now multiply both sides of $(1)$ by $\lambda_m^2$ and use the fact that $X_m''(x)=-\lambda_m^2X_m(x)$: $$ \lambda_m^2I_{mn}=\lambda_m^2\int_0^1X_m(x)X_n(x)\,dx =-\int_0^1X_m''(x)X_n(x)\,dx. \tag{2} $$

Integrating by parts twice, we obtain $$ \lambda_m^2I_{mn}=F_{mn}(1)-F_{mn}(0)-\int_0^1X_m(x)X_n''(x)\,dx, \tag{3} $$ where $$ F_{mn}(x):=-X_m'(x)X_n(x)+X_m(x)X_n'(x). \tag{4} $$

Because of the boundary conditions, the first two terms on the RHS of $(3)$ vanish. Indeed, the second BC gives $X_k'(1)=-hX_k(1)$, so $$ F_{mn}(1)=-X_m'(1)X_n(1)+X_m(1)X_n'(1) =hX_m(1)X_n(1)-X_m(1)hX_n(1)=0, \tag{5} $$ and $F_{mn}(0)=0$ because $X_m(0)=X_n(0)=0$.
Returning to $(3)$, we end up with $$ \lambda_m^2I_{mn}=-\int_0^1X_m(x)X_n''(x)\,dx =\int_0^1X_m(x)\lambda_n^2X_n(x)\,dx=\lambda_n^2I_{mn}. \tag{6} $$ It follows from $(6)$ that $I_{mn}=0$ if $\lambda_m\neq\lambda_n$, that is, eigenfunctions corresponding to different eigenvalues are orthogonal.
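Once orthogonality is established, the coefficients follow the usual way: multiply $\sum_n a_n X_n(x) = x$ by $X_m(x)$, integrate over $[0,1]$, and all cross terms drop out, leaving $a_n = \int_0^1 x\,X_n(x)\,dx \big/ \int_0^1 X_n(x)^2\,dx$. As a sanity check, here is a numerical sketch (again taking $h=1$ purely for illustration) verifying both the orthogonality and that the resulting series reproduces the IC:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

h = 1.0  # illustrative value

# Eigenvalues: positive roots of tan(lam) = -lam/h, rewritten
# without tan so brentq can bracket each root safely.
def f(lam):
    return lam * np.cos(lam) + h * np.sin(lam)

lams = [brentq(f, (n - 0.5) * np.pi + 1e-9, (n + 0.5) * np.pi - 1e-9)
        for n in range(1, 51)]

def X(lam, x):
    return np.sin(lam * x)

# Orthogonality: I_{mn} vanishes for m != n
I_12, _ = quad(lambda x: X(lams[0], x) * X(lams[1], x), 0, 1)

# Generalized Fourier coefficients a_n = <x, X_n> / <X_n, X_n>
def coeff(lam):
    num, _ = quad(lambda x: x * X(lam, x), 0, 1, limit=200)
    den, _ = quad(lambda x: X(lam, x) ** 2, 0, 1, limit=200)
    return num / den

a = [coeff(lam) for lam in lams]

# The partial sum reproduces the IC u(x,0) = x at interior points
x0 = 0.5
s = sum(an * X(lam, x0) for an, lam in zip(a, lams))
print(I_12, s)  # I_12 ≈ 0, s ≈ 0.5
```

Note the denominator is needed because the $X_n$ are orthogonal but not orthonormal: $\int_0^1 \sin^2(\lambda_n x)\,dx = \tfrac12 - \tfrac{\sin 2\lambda_n}{4\lambda_n}$, which is not $\tfrac12$ here, unlike the classical Fourier sine series.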