I am studying Lecture 23 of the *Numerical Linear Algebra* book, and I cannot follow the part that explains the Cholesky factorization algorithm. Specifically, it says:
When Cholesky factorization is implemented, only half of the matrix being operated on needs to be represented explicitly. This simplification allows half of the arithmetic to be avoided. A formal statement of the algorithm (only one of many possibilities) is given below. The input matrix $A$ represents the superdiagonal half of the $m\times m$ hermitian positive definite matrix to be factored. The output matrix $R$ represents the upper-triangular factor for which $A=R^{*}R$. Each outer iteration corresponds to a single elementary factorization: the upper-triangular part of the submatrix $R_{k:m,k:m}$ represents the superdiagonal part of the hermitian matrix being factored at step $k$.
$\underline{\text{Algorithm 23.1: Cholesky Factorization}}$
$R=A$
for $k = 1$ to $m$
$\quad$for $j = k+1$ to $m$
$\quad\quad\quad R_{j,j:m} = R_{j,j:m} - R_{k,j:m}\bar{R}_{kj}/R_{kk}$
$\quad R_{k,k:m} = \frac {R_{k,k:m}} { \sqrt{R_{k,k}}}$
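To make sure I was tracing it correctly, I also transcribed the pseudocode literally into Python/NumPy (my own sketch, not from the book; the function name is mine). Note that every read and write touches only entries on or above the diagonal, so whatever happens to be stored below the diagonal of the input is never modified:

```python
import numpy as np

def cholesky_23_1(A):
    """Literal transcription of Algorithm 23.1 (my own sketch).

    Only entries on or above the diagonal are read or written;
    the subdiagonal part of R is left exactly as it was in A.
    """
    R = np.array(A, dtype=float)   # R = A (work on a copy)
    m = R.shape[0]
    for k in range(m):             # k = 1 to m (0-based here)
        for j in range(k + 1, m):  # j = k+1 to m
            # R_{j,j:m} = R_{j,j:m} - R_{k,j:m} \bar{R}_{kj} / R_{kk}
            R[j, j:] -= R[k, j:] * np.conj(R[k, j]) / R[k, k]
        # R_{k,k:m} = R_{k,k:m} / sqrt(R_{kk})
        R[k, k:] /= np.sqrt(R[k, k])
    return R

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
R = cholesky_23_1(A)
# The upper triangle of R satisfies triu(R)^T @ triu(R) == A,
# but R[1, 0] is still the original subdiagonal entry 2.0.
```

This is exactly the behavior I observed in my hand trace: the superdiagonal part comes out right, but the subdiagonal entries of the input are passed through untouched.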
Since I did not completely understand the explanation, I traced the algorithm on an example. At the end, the algorithm had not made the subdiagonal entries of $A$ zero; only the superdiagonal entries were correct. But from what I understood, the output $R$ should be upper triangular.
Does the explanation before the algorithm mean that we need to zero out all subdiagonal entries of $A$ before entering the first for loop?
Any insight would be appreciated. Thank you.