I'm attempting to write a software library for handling matrix calculations. My implementation of the determinant decomposes a matrix $A$ into a lower triangular matrix $L$ and an upper triangular matrix $U$ such that $A = LU$; I do this via Crout's method as it's easy to 'translate' to code. Then, I multiply together every element on the diagonals of these matrices, i.e. $\det A = \prod_{i=1}^{n}L_{ii}U_{ii}$ where $n$ is the dimension of the $n \times n$ matrix $A$.
This gives me the right result in absolute value for my test cases; however, some of the determinants are negative, yet my program always produces positive output. In other implementations I've seen (using Doolittle's method, if I recall correctly), a factor was tracked during the algorithm, and the final determinant was then equal to the expression above multiplied by $(-1)^S$, where $S$ is that special factor.
Does such a factor exist for Crout's method, or am I better off implementing Doolittle's method instead?
In Crout's method the diagonal of $U$ should be all $1$'s, so the $U_{ii}$ contribute nothing to the product. If the factorization is properly computed (and you don't perform any row/column swaps along the way), the determinant of $A$ is just $$\det A = \prod_{i=1}^n L_{ii}.$$
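To illustrate, here is a minimal Python sketch of Crout's method without pivoting (`crout_det` is a hypothetical name, not from your library); note that the product of the $L_{ii}$ alone already comes out negative when it should:

```python
def crout_det(A):
    """Determinant via Crout's method (no pivoting): A = L U, diag(U) all 1's.

    Assumes no zero pivot is encountered (no row swaps needed).
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        # Column j of L (on and below the diagonal).
        for i in range(j, n):
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        # Row j of U (to the right of the diagonal).
        for i in range(j + 1, n):
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    det = 1.0
    for i in range(n):
        det *= L[i][i]  # the U_ii are all 1, so they contribute nothing
    return det
```

For example, `crout_det([[4.0, 3.0], [6.0, 3.0]])` returns $-6$, with the sign coming straight from $L_{22} = -1.5$.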
The "special factor" that you mention is just $(-1)^s$, where $s$ is the number of row swaps. When there are row swaps, the factorization that actually holds is $$ P A = L U, $$
where $P$ is a permutation matrix. Again, since the diagonal of $U$ is made of ones, what you get for the determinant is $$ \det A = \det P \cdot \det L = (-1)^s \prod_{i=1}^n L_{ii}, $$ because each row swap flips the sign of $\det P$.
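A sketch of the pivoted variant, again in Python with a hypothetical name (`crout_det_pivoted`): it chooses the largest available pivot in each column, counts the swaps, and multiplies the diagonal product by $(-1)^s$:

```python
def crout_det_pivoted(A):
    """Determinant via Crout's method with partial pivoting: P A = L U.

    diag(U) is all 1's; `sign` tracks det(P), flipping once per row swap.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    M = [row[:] for row in A]  # working copy whose rows get swapped
    sign = 1
    for j in range(n):
        # Column j of L.
        for i in range(j, n):
            L[i][j] = M[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        # Pick the row with the largest |L[i][j]| as the pivot.
        p = max(range(j, n), key=lambda i: abs(L[i][j]))
        if p != j:
            M[j], M[p] = M[p], M[j]
            L[j], L[p] = L[p], L[j]
            sign = -sign  # one row swap flips det P
        if L[j][j] == 0.0:
            return 0.0  # singular matrix
        # Row j of U.
        for i in range(j + 1, n):
            U[j][i] = (M[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    det = float(sign)
    for i in range(n):
        det *= L[i][i]
    return det
```

For instance, $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ forces one swap, so the result is $(-1)^1 \cdot 1 \cdot 1 = -1$, as expected.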