How the Fourier coefficients are obtained is better known than the proofs of convergence: you set $f(x)$ equal to its Fourier series, multiply both sides by a trigonometric function of a given frequency, and integrate; this yields the coefficients. This derivation by itself is considered to prove nothing about convergence. Not even pointwise convergence?
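For concreteness, the computation I mean is the standard orthogonality argument on $[-\pi,\pi]$: writing $f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos nx + b_n\sin nx\right)$, multiplying by $\cos mx$ and integrating term by term gives

$$\int_{-\pi}^{\pi} f(x)\cos mx\,dx = a_m \int_{-\pi}^{\pi}\cos^2 mx\,dx = \pi a_m,$$

since all other terms integrate to zero, so $a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx$ (and similarly for $b_m$ with $\sin mx$). The step that seems to need justification is the term-by-term integration of the infinite series.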
It was much later that Dirichlet came up with a proof for continuous functions of bounded variation (functions that can be split into monotonic pieces). This proof rewrites the partial sums of the Fourier series in Dirichlet-kernel form, and the Riemann-Lebesgue lemma is then applied to prove pointwise convergence.
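If I understand the argument correctly, the Dirichlet-kernel form in question writes the $N$th partial sum as a convolution:

$$S_N f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\,D_N(t)\,dt, \qquad D_N(t) = \sum_{n=-N}^{N} e^{int} = \frac{\sin\left(\left(N+\tfrac{1}{2}\right)t\right)}{\sin(t/2)}.$$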
A proof of the almost-everywhere pointwise convergence for square-integrable functions was also published by Carleson in 1966.
Uniform convergence is generally proven by the Weierstrass M-test with the help of Parseval's identity. The criterion is that the function be twice continuously differentiable (while, confusingly, other sources say it is enough for it to be once continuously differentiable; I cannot tell why).
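To spell out the Parseval-based estimate I mean (assuming $f$ is $2\pi$-periodic and the integration by parts is valid): the Fourier coefficients of $f'$ are $nb_n$ and $-na_n$, so Cauchy-Schwarz gives

$$\sum_{n=1}^{\infty}\left(|a_n|+|b_n|\right) \le \left(\sum_{n=1}^{\infty}\frac{2}{n^2}\right)^{1/2}\left(\sum_{n=1}^{\infty} n^2\left(a_n^2+b_n^2\right)\right)^{1/2} < \infty,$$

where the second factor is finite by Parseval's identity applied to $f'$; the M-test then yields uniform (and absolute) convergence of the series.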
I have also heard of proving uniform convergence with the Fejér kernel and Cesàro means.
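For reference, the Cesàro means in question are, as far as I can tell,

$$\sigma_N f(x) = \frac{1}{N+1}\sum_{n=0}^{N} S_n f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\,F_N(t)\,dt, \qquad F_N(t) = \frac{1}{N+1}\left(\frac{\sin\left((N+1)t/2\right)}{\sin(t/2)}\right)^2,$$

and Fejér's theorem says $\sigma_N f \to f$ uniformly for continuous $2\pi$-periodic $f$ (uniform convergence of the means, not of the Fourier series itself).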
Is this list complete and accurate, or am I missing something?
Also, some people argue that the mere fact that you can obtain sensible coefficients should intuitively mean the series converges. But what interests me is why the algebraic derivation of the coefficients proves nothing about convergence, and why these separate proofs are needed.
A few comments are in order: