Which is the more expensive way to solve a linear system: LU decomposition $$A = LU$$
Or finding the inverse $$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$
If I have to choose, I would say that this equation $$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$
is much faster than computing an LU factorization. Why? Well, I wrote a linear algebra library in C, which I'm going to upload to GitHub very soon. First I find the determinant using Gaussian elimination. Very smooth and easy method. If I would divide by zero, I just change that zero to a very small number so I don't get an error.
Then I find the minor matrix, which is also an easy method to implement numerically. Example:
$$M_{2,3} = \det \begin{bmatrix} \,\,1 & 4 & \Box\, \\ \,\Box & \Box & \Box\, \\ -1 & 9 & \Box\, \\ \end{bmatrix}= \det \begin{bmatrix} \,\,\,1 & 4\, \\ -1 & 9\, \\ \end{bmatrix} = (9-(-4)) = 13$$
Then I turn the minor matrix into the cofactor matrix by multiplying each entry $M_{i,j}$ by $(-1)^{i+j}$. Easy! Transpose it to get the adjugate, divide by the determinant, and I have the inverse.
But with LU-factorization, in code, I first need to find $L$ and $U$ from $A$, then I need to solve this linear equation $$Ax = b$$
Which can be described as
$$LUx = b$$
and then I need to solve these two equations
$$Ly = b$$ $$Ux = y$$
That requires more for-loops in C code than finding the determinant with Gaussian elimination and finding the minor matrix.
Or am I wrong?
Performing Gaussian elimination on an $n \times n$ matrix takes $\mathcal{O}(n^3)$ operations. To find the cofactor matrix of an $n \times n$ matrix you need to calculate $n^2$ determinants of $(n-1)\times(n-1)$ matrices. If you don't find a way to take advantage of previous work, that takes $n^2 \cdot \mathcal{O}((n-1)^3) = \mathcal{O}(n^5)$ operations.
Computing an LU decomposition of that $n \times n$ matrix takes $\frac23 n^3$ operations in leading order (so in particular $\mathcal{O}(n^3)$ operations). Therefore calculating the LU decomposition is clearly faster than the method you are currently using for calculating the inverse.
(My argument only shows this for large enough $n$, but I'm very sure a detailed analysis will show that LU-decomposition is faster for basically any size.)
Note however that you can use Gauss elimination to directly calculate the inverse of a matrix, without the costly computation of the adjugate; see here. If my calculation is correct, this requires $\frac56 n^3$ operations in leading order, so it is still a bit slower than LU decomposition.
In theory the Strassen algorithm, or even faster algorithms for matrix multiplication, gives rise to matrix inversion algorithms that beat $\mathcal{O}(n^3)$, but only for very large matrices.
Summary: LU decomposition is the fastest way to solve a reasonably sized system of linear equations and superior to calculating inverses in almost all aspects. In particular, performing the multiplication $A^{-1} b$ is as expensive as solving the linear systems $L y = b$ and $U x = y$, since the latter can be done very efficiently using forward/backward substitution.