From "Introduction to Linear Algebra", Fifth Edition (2016) by Gilbert Strang (page 113):
When $S$ is symmetric, the usual form $A = LDU$ becomes $S = LDL^T$. The final $U$ (with 1's on the diagonal) is the transpose of $L$ (also with 1's on the diagonal). The diagonal matrix $D$ containing the pivots is symmetric by itself.
If $S = S^T$ is factored into $LDU$ with no row exchanges, then $U$ is exactly $L^T$
How do I prove that this is true? (The textbook does not present a justification; at least, it is not obvious to me.)
An example (not exactly a proof) is provided in Question 27 of Problem Set 2.7 of the book (5th edition). Here is the idea behind it:
For a general matrix $A$, we can apply Gaussian (row) elimination to reach its row echelon form. In matrix language, this operation is usually written $EA=U \Rightarrow A=LU$, where $L=E^{-1}$; sometimes we divide the pivots out of $U$ and write $A=LDU$ instead.
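As a concrete sketch of that factorization (my own illustration, not from the book): plain Gaussian elimination with no row exchanges, where $L$ collects the multipliers and $D$ the pivots.

```python
import numpy as np

def ldu(A):
    """Factor A = L D U by Gaussian elimination (assumes no row exchanges needed)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]      # multiplier l_ij
            A[i, :] -= m * A[j, :]     # the row operation E_ij
            L[i, j] = m                # L = E^{-1} collects the multipliers
    D = np.diag(np.diag(A))            # the pivots
    U = A / np.diag(A)[:, None]        # divide each row by its pivot: unit diagonal
    return L, D, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, D, U = ldu(A)
print(np.allclose(L @ D @ U, A))       # True
```

The matrix values here are arbitrary; any square matrix whose pivots are nonzero (so no row exchanges are needed) would do.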
More specifically, for a symmetric matrix $S$, we can apply the transposed Gaussian (column) elimination to get its column echelon form, which is a lower-triangular matrix and, by symmetry, is the transpose of the row echelon form. In matrix language: from $ES=DU$ and $S^T=S$, transposing gives $SE^T = S^TE^T = (ES)^T = U^TD^T = U^TD$.
Also notice that if we apply the same Gaussian (row) elimination for $S$ to its column echelon form $SE^T$ instead, we get exactly $D$. This is because row elimination of $S$ does not affect the elements above the pivots; by symmetry, the column elimination does not affect the elements below the pivots, so those entries of $SE^T$ are the same as in $S$, and the row elimination of $S$ eliminates them exactly, leaving only the diagonal of pivots. In matrix language, $EU^TD=D \Rightarrow ESE^T=D \Rightarrow S=E^{-1}D(E^T)^{-1}=LDL^T$.
For example, $S=\begin{bmatrix}a & b &c\\ b & * & *\\ c & * & * \end{bmatrix}$
$E_1S=\begin{bmatrix}a & b &c\\ 0 & * & *\\ 0 & * & * \end{bmatrix}$
$SE^T_1=\begin{bmatrix}a & 0 &0\\ b & * & *\\ c & * & * \end{bmatrix}$
$E_1SE^T_1=\begin{bmatrix}a & 0 &0\\ 0 & * & *\\ 0 & * & * \end{bmatrix}$
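The whole chain $ESE^T = D$, hence $S = LDL^T$, can be checked numerically (a sketch with assumed example values, not taken from the book):

```python
import numpy as np

# A symmetric test matrix (arbitrary values, chosen so no row exchanges occur).
S = np.array([[2.0,  4.0,  8.0],
              [4.0,  9.0, 17.0],
              [8.0, 17.0, 38.0]])

n = S.shape[0]
E = np.eye(n)                  # accumulates all the row-elimination steps
A = S.copy()
for j in range(n - 1):
    for i in range(j + 1, n):
        m = A[i, j] / A[j, j]  # multiplier
        A[i, :] -= m * A[j, :]
        E[i, :] -= m * E[j, :]

D = E @ S @ E.T                # applying E on both sides leaves only the pivots
L = np.linalg.inv(E)           # L = E^{-1}, unit lower triangular

print(np.allclose(D, np.diag(np.diag(D))))  # True: E S E^T is diagonal
print(np.allclose(L @ D @ L.T, S))          # True: S = L D L^T, so U = L^T
```

Because $S$ is symmetric, `L.T` here coincides with the unit upper-triangular $U$ that ordinary elimination would produce, which is exactly the claim being proved.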