$$Y_t=\beta_1+\beta_2 X_{t2}+\dots +\beta_k X_{tk}+\epsilon_t \qquad (t=1,\dots,T)$$
$$\epsilon_t=\rho \epsilon_{t-1}+v_t, \qquad v_t \sim \mathrm{i.i.d.}(0,\sigma^2_v)$$
GLS estimation under AR(1) errors:
$Y=X\beta +\epsilon$, $\epsilon \sim (0,\Phi)$
$$\Phi=\frac{\sigma^2_v}{1-\rho^2}\begin{pmatrix} 1 &\rho&\rho^2&\cdots&\rho^{T-1}\\ \rho&1&\rho&\cdots&\rho^{T-2}\\ \vdots& &\ddots& &\vdots \\ \rho^{T-1}&\rho^{T-2}&\cdots&\rho&1 \end{pmatrix} =\sigma^2_\epsilon \Psi$$
$$\hat\beta_{GLS}=(X' \Psi^{-1}X)^{-1}X'\Psi^{-1}Y$$
The book says "transform and apply OLS",
$$P'P=\Psi^{-1}=\begin{pmatrix} 1 &-\rho&0&\cdots&0\\ -\rho&1+\rho^2&-\rho&\cdots&0\\ \vdots& &\ddots&\ddots&\vdots \\ 0&\cdots&-\rho&1+\rho^2&-\rho\\ 0&\cdots&0&-\rho&1 \end{pmatrix}$$
and $$P=\begin{pmatrix} \sqrt{1-\rho^2} &0&0&\cdots&0\\ -\rho&1&0&\cdots&0\\ 0&-\rho&1&\cdots&0\\ \vdots& &\ddots&\ddots&\vdots \\ 0&\cdots&0&-\rho&1 \end{pmatrix}$$
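(The claimed factorization does check out numerically; here is a quick sketch, with the $1/(1-\rho^2)$ factor folded into $\Psi$ — a scalar factor does not change the GLS estimator:)

```python
import numpy as np

T, rho = 6, 0.6

# Psi normalized so that Phi = sigma_v^2 * Psi, i.e. the 1/(1-rho^2)
# factor is folded into Psi (a scalar multiple does not affect GLS)
idx = np.arange(T)
Psi = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)

# P from the book: 1 on the diagonal, -rho on the subdiagonal,
# and sqrt(1 - rho^2) in the top-left corner
P = np.eye(T) - rho * np.eye(T, k=-1)
P[0, 0] = np.sqrt(1 - rho**2)

print(np.allclose(P.T @ P, np.linalg.inv(Psi)))  # True
```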
However, I can't follow that process.
How can I derive $P'P$ and $P$?
If I read you correctly, you're asking two questions:
1. How do you get $P$?
2. Once you have $P$, what do you do with it?
Basically you're asking how GLS reduces to OLS.
Answer:
Diagonalize the positive definite matrix $\Psi$, say $\Psi = Q\Lambda Q'$. The same $Q$ diagonalizes $f(\Psi) = Q f(\Lambda) Q'$ for any "reasonable" $f$; in particular, taking $f(x) = x^{-\frac{1}{2}}$ gives you $P = \Psi^{-\frac{1}{2}}$, which satisfies $P'P = \Psi^{-1}$. (Any $P$ with $P'P = \Psi^{-1}$ works equally well — the triangular $P$ in your book is one such factor, as you can verify by direct multiplication.)
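As a minimal numerical sketch of this step (using the AR(1) pattern for $\Psi$, though any symmetric positive definite matrix works):

```python
import numpy as np

T, rho = 6, 0.6
idx = np.arange(T)
Psi = rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1) pattern, any SPD matrix works

# Diagonalize: Psi = Q diag(lam) Q'; then f(Psi) = Q diag(f(lam)) Q'
lam, Q = np.linalg.eigh(Psi)
P = Q @ np.diag(lam ** -0.5) @ Q.T  # P = Psi^{-1/2}, symmetric

print(np.allclose(P.T @ P, np.linalg.inv(Psi)))  # True
```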
Now transform your model
$$ PY = PX \beta + P \epsilon. $$
The transformed error terms $P\epsilon$ are now spherical: $\operatorname{Var}(P\epsilon) = \sigma^2_\epsilon P\Psi P' = \sigma^2_\epsilon I$, since $P'P = \Psi^{-1}$ implies $P\Psi P' = I$. So the transformed model satisfies the usual linear model assumptions under which OLS is appropriate. The OLS formula then gives you
$$ \hat{\beta} = (X' P' P X)^{-1} X' P' P Y = (X' \Psi^{-1} X)^{-1} X' \Psi^{-1} Y. $$
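The reduction can be confirmed numerically — GLS via its formula and OLS on the transformed model give identical coefficients (the design and response below are arbitrary; the identity is algebraic):

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 50, 0.7

# Arbitrary design matrix (intercept + one regressor) and response:
# the GLS/transformed-OLS identity holds for any X and Y
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
Y = rng.standard_normal(T)

idx = np.arange(T)
Psi = rho ** np.abs(idx[:, None] - idx[None, :])
Psi_inv = np.linalg.inv(Psi)

# Direct GLS: beta = (X' Psi^{-1} X)^{-1} X' Psi^{-1} Y
beta_gls = np.linalg.solve(X.T @ Psi_inv @ X, X.T @ Psi_inv @ Y)

# OLS on the transformed model PY = PX beta + P eps, with P = Psi^{-1/2}
lam, Q = np.linalg.eigh(Psi)
P = Q @ np.diag(lam ** -0.5) @ Q.T
beta_ols = np.linalg.lstsq(P @ X, P @ Y, rcond=None)[0]

print(np.allclose(beta_gls, beta_ols))  # True
```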