This is such a basic topic, but there are so many different methods proposed for solving a linear system of equations. I recently found a very good source but couldn't really make sense of everything it contains.
If I had to pick only five methods that I should understand thoroughly for solving big systems of linear equations, which would they be? And what would be the most important reason for each choice?
E.g. direct methods? Gauss-Seidel? conjugate gradient? ...
Thanks in advance for the help.
This is a very interesting but broad question. I will try to give a succinct answer as to when, in my opinion, to prefer what. The summary of my answer is: it depends on the structure of the matrix. Iterative solvers (or any solvers, really) try to exploit as much of the matrix's structure as possible when computing the solution. If you know ANYTHING about the matrix, there is likely an iterative solver designed precisely for that case. For instance:
Sparse Systems:
Positive Definite and Symmetric :: Conjugate Gradient
Symmetric but not Positive Definite :: MINRES
No Structure :: GMRES/QMR/BICGSTAB
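A sketch of this mapping using SciPy's sparse solvers. The test matrix here is my own assumption for illustration: a 1-D Laplacian, chosen because it is sparse, symmetric, and positive definite, so all three solvers apply to it.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, gmres, minres

n = 50
# 1-D Laplacian: sparse, symmetric, positive definite (illustrative choice)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# SPD -> Conjugate Gradient
x_cg, info = cg(A, b)
assert info == 0  # 0 means converged

# Symmetric (indefinite would also be fine) -> MINRES
x_mr, info = minres(A, b)

# No structure assumed -> GMRES (restart=n gives the unrestarted method)
x_gm, info = gmres(A, b, restart=n)

print(np.linalg.norm(A @ x_cg - b))  # residual should be tiny
```

In practice you would pick exactly one of these, matched to what you know about the matrix; running all three on the same SPD system is only to show that the stronger structural assumptions (CG) are not free, they just pay off when they hold.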
Even within a single class, the choice isn't arbitrary. For instance, GMRES requires more memory to run than QMR, but it might have better convergence behavior for a certain type of eigenspectrum. Most likely, if a method exists, there is a reason for it.
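The GMRES memory cost mentioned above is visible directly in SciPy's `restart` parameter: full GMRES stores one Krylov basis vector per iteration, so restarting every k steps caps memory at roughly k vectors, usually at the cost of more iterations. A hedged sketch (the diagonally dominant matrix is an arbitrary choice so that even heavily restarted GMRES converges):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 100
# Illustrative matrix (my assumption): diagonally dominant tridiagonal
A = diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# restart=n: effectively full GMRES, memory grows with the iteration count
x_full, info = gmres(A, b, restart=n)

# restart=10: memory capped at ~10 basis vectors, possibly more iterations
x_rest, info = gmres(A, b, restart=10)
```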
Dense Systems:
LU Decomposition / Gaussian Elimination
Hard to beat.
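For dense systems, a minimal sketch of the factor-once, solve-many pattern via SciPy's LAPACK-backed LU routines (the random test matrix and sizes are my own illustrative choices):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))  # dense, no assumed structure
b1 = rng.standard_normal(200)
b2 = rng.standard_normal(200)

# Factor once: O(n^3)
lu, piv = lu_factor(A)

# Each subsequent right-hand side is only O(n^2)
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)

print(np.linalg.norm(A @ x1 - b1))  # residual should be near machine precision
```

`np.linalg.solve(A, b)` does the same factorization internally; the explicit `lu_factor`/`lu_solve` split only matters when you reuse the factorization across many right-hand sides.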
Of course, then there is the issue of preconditioners as well. This entire field is very rich and interesting. Also, if you aren't already aware, there is another Stack Exchange site that might be better suited for questions like these: SciComp.
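To make the preconditioning remark concrete, here is a hedged sketch using SciPy: an incomplete LU factorization of the matrix, wrapped as a `LinearOperator` and handed to GMRES through its `M` parameter. The tridiagonal test matrix is again my own illustrative choice.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 200
# Illustrative matrix (my assumption); spilu wants CSC format
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU: a cheap approximate factorization A ~ L U
ilu = spilu(A)

# Expose its solve as a preconditioner M ~ A^{-1}
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
assert info == 0  # with a good preconditioner, very few iterations are needed
```

A good preconditioner is itself a statement about the matrix's structure, which is why the field is so rich: the preconditioner, not the Krylov method, often decides whether the iteration is practical.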