$\bbox[5px,border:2px solid gray]{ \text{ Case 3 } }$ If $\dim E_{\lambda}=1$, take a nonzero $v\in E_{\lambda}$; then $\{v\}$ is a basis for $E_{\lambda}$. Extend this to a basis $\mathfrak{B}=\{v,\ w\}$ for $\mathbb{C}^{2}$ by choosing $w\in \mathbb{C}^{2}\backslash E_{\lambda}$. If $Aw = \alpha v+\beta w$, then there is a change-of-basis matrix $P$ (whose columns are $v$ and $w$, so that $C$ below is the matrix of $A$ relative to $\mathfrak{B}$) such that $C =P^{-1}AP=\left(\begin{array}{ll} \lambda & \alpha\\ 0 & \beta \end{array}\right) \quad (♦) $
But in this Case 3, $C$ (being similar to $A$) must also satisfy $\dim E_{\lambda}=1$. So for $C$, and hence $A$, to have $\lambda$ with algebraic multiplicity $2$, we need $\beta=\lambda$, that is, $ C =\left(\begin{array}{ll} \lambda & \alpha\\ 0 & \lambda \end{array}\right) $. (I omit a paragraph deemed unnecessary)
Moreover, since $ Cw=u+\lambda w $ (where $u=\alpha v$), if we consider the alternative basis $\mathfrak{B}' =\{u,\ w\}$, then there is a change-of-basis matrix, say $Q$, such that $Q^{-1}CQ =\left(\begin{array}{ll} \lambda & 1\\ 0 & \lambda \end{array}\right). $ Finally, use $(♦)$ to substitute $P^{-1}AP$ for $C$: $Q^{-1}CQ = (PQ)^{-1}A(PQ)$.
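As a sanity check on Case 3, one can verify symbolically that the reduced matrix $C$ above really has a one-dimensional eigenspace. This is only a sketch using sympy, not part of the original argument; the symbol $\alpha$ is declared nonzero, since $\alpha = 0$ would make $w$ an eigenvector.

```python
import sympy as sp

lam = sp.Symbol('lambda')
alpha = sp.Symbol('alpha', nonzero=True)  # alpha = 0 would put us back in the diagonalizable case

# The reduced matrix from Case 3 (beta = lambda forced by algebraic multiplicity 2).
C = sp.Matrix([[lam, alpha], [0, lam]])

# E_lambda is the nullspace of (C - lambda*I); with alpha != 0 it is one-dimensional.
eigenspace = (C - lam * sp.eye(2)).nullspace()
print(len(eigenspace))   # 1
print(eigenspace[0].T)   # Matrix([[1, 0]]) -- spanned by the first basis vector
```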
$5.$ Christiaan Hattingh's answer below says: Because "$w$ is not an eigenvector of $C$, ... $Cw=u+\lambda w$"? Why not $Cw=ku+\lambda w$ for some scalar $k$?
Then how does he prove that $u = \alpha v$? I don't follow the step "noticing $Aw = \dots$".
$6.$ Why is $Q = \begin{bmatrix} \alpha & 0 \\ 0 & 1 \end{bmatrix}$? (See his answer to 6)
$9.$ What's the overall proof strategy? I'm flummoxed by all this algebra.
5.$\;$Again here, since $w$ is not an eigenvector of $C$ we cannot have $Cw=\lambda w$, so there must be some vector $u$ such that $Cw=u+\lambda w$. In fact we can do better: noticing that $Aw=1\cdot(\alpha v)+\lambda w$ and that $A(\alpha v)=\lambda \cdot(\alpha v)$, we see that in fact $u=\alpha v$.
6.$\;$Again here, just take $Q=\begin{bmatrix} \alpha & 0 \\ 0 & 1 \end{bmatrix}$ (note that $\alpha \neq 0$, for otherwise $Cw=\lambda w$ and $w$ would be an eigenvector), and then calculate $Q^{-1}CQ$ to get the required matrix. The verification that this is indeed so is the same as for point 2 above.
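The calculation of $Q^{-1}CQ$ in point 6 can be checked symbolically; a minimal sympy sketch, assuming $\alpha \neq 0$ so that $Q$ is invertible:

```python
import sympy as sp

lam = sp.Symbol('lambda')
alpha = sp.Symbol('alpha', nonzero=True)   # Q is invertible only for alpha != 0

C = sp.Matrix([[lam, alpha], [0, lam]])
Q = sp.Matrix([[alpha, 0], [0, 1]])

# Conjugating C by Q rescales the off-diagonal entry alpha to 1.
J = sp.simplify(Q.inv() * C * Q)
print(J)   # Matrix([[lambda, 1], [0, lambda]])
```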
9.$\;$It's a constructive proof: it shows how to find a Jordan canonical form for a matrix in $\mathscr{M}_{2 \times 2}(\mathbb{C})$ that cannot be diagonalized.
I respond separately, to try and clarify as per OP's request in comments:
ok, now to further expound 5. If $Cw \neq \lambda w$, then there must be some other vector, not a scalar multiple of $w$ (i.e. linearly independent of $w$), such that $Cw$ equals $\lambda w$ plus this other ("unknown") vector. Let us denote this vector by $u$. So then $Cw=u+\lambda w$. (Writing $ku+\lambda w$ instead would change nothing: $ku$ is again such a vector, so we simply absorb the scalar into the name $u$.) We cannot have a third linearly independent vector in this equation, since the dimension of $\mathbb{C}^2$ is 2.
I am not going to add any more to this answer, but here is my suggestion to you - why not try this out in "practice". Take the following matrix: \begin{equation} \begin{bmatrix} \frac{3}{2} & \frac{1}{4} \\ -1 & \frac{5}{2} \end{bmatrix}, \end{equation} find its eigenvalue and a corresponding eigenvector. Then follow the proof step by step (so find the matrix $P$, calculate $P^{-1}AP$, etc.) and see if you can get to the form \begin{equation} \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}. \end{equation}
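If it helps to check your work afterwards, here is a sympy sketch that mirrors those steps on the suggested matrix. The choice $w = (0,\ 1)^T$ is just one of many valid ways to extend $\{v\}$ to a basis, and is an assumption of this sketch.

```python
import sympy as sp

# The suggested matrix, in exact rational arithmetic.
A = sp.Matrix([[sp.Rational(3, 2), sp.Rational(1, 4)],
               [-1, sp.Rational(5, 2)]])

# The characteristic polynomial is (x - 2)^2: lambda = 2 with algebraic multiplicity 2.
assert A.eigenvals() == {2: 2}
lam = 2

# An eigenvector v spanning the one-dimensional eigenspace E_lambda.
v = (A - lam * sp.eye(2)).nullspace()[0]

# Extend {v} to a basis by picking some w outside E_lambda.
w = sp.Matrix([0, 1])
P = sp.Matrix.hstack(v, w)
C = P.inv() * A * P          # upper triangular: [[2, alpha], [0, 2]]
alpha = C[0, 1]

# Rescale with Q = diag(alpha, 1) to reach the Jordan form.
Q = sp.diag(alpha, 1)
J = Q.inv() * C * Q
print(J)   # Matrix([[2, 1], [0, 2]])
```

The same final matrix comes out for any valid choice of $w$; only the intermediate $\alpha$ changes.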