Many of the Fourier series problems I'm working on right now involve discontinuous functions, and the integrals often have to be split into pieces at the discontinuities.
This is making my head hurt, though: how can the theorems work if a sum of trigonometric functions is continuous but the original function isn't? The theorems assert an equality, and there can't really be an equality here.
Perhaps I'm wrong to assume that a Fourier series expansion is continuous, but either way, the question about the equality still stands.
Edit: I mean jump discontinuities specifically. I think this is important for my point, because intuitively the Fourier series should have the same type of discontinuity as the original function.
The simplest sense in which a Fourier series of a function converges to it is in the $L^2$ sense. This is a strictly weaker notion of convergence than the ones you first learn about in analysis, e.g. pointwise or uniform convergence. Part of the problem is that functions in an $L^2$ space are not really functions in the familiar sense: for example, you can't evaluate them at points. So, by default, the sense in which a function "equals" its Fourier series is a weaker notion of equality than ordinary equality of functions. (For example, it is invariant under changing a function at countably many points!)
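To make the $L^2$ statement concrete, here is a small numerical sketch (my own illustration, with the square wave chosen for convenience): $\operatorname{sign}(x)$ on $[-\pi, \pi]$ has Fourier series $\frac{4}{\pi}\sum_{k \text{ odd}} \frac{\sin kx}{k}$, and the $L^2$ error of the partial sums shrinks toward $0$ even though every partial sum is continuous and the square wave is not.

```python
import numpy as np

def partial_sum(x, N):
    """N-th partial Fourier sum of the square wave sign(x) on [-pi, pi]:
    S_N(x) = (4/pi) * sum over odd k <= N of sin(k x)/k."""
    s = np.zeros_like(x, dtype=float)
    for k in range(1, N + 1, 2):
        s += np.sin(k * x) / k
    return (4.0 / np.pi) * s

x = np.linspace(-np.pi, np.pi, 20001)
f = np.sign(x)
dx = x[1] - x[0]

# The L^2 error || f - S_N ||_2 (approximated by a Riemann sum)
# decreases as N grows ...
errors = [np.sqrt(np.sum((f - partial_sum(x, N)) ** 2) * dx)
          for N in (1, 10, 100)]
print(errors)

# ... even though each S_N is continuous while f jumps at 0; note that
# every partial sum takes the value 0 at the jump, the midpoint of the jump.
print(partial_sum(np.array([0.0]), 101)[0])
```

The point is that $L^2$ convergence only forces the integral of the squared error to vanish; it says nothing about what happens at any individual point, which is exactly why a continuous sum can converge to a discontinuous function in this sense.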
Getting any stronger notion of convergence requires hypotheses on the original function. For example, you might expect that if $f$ is continuous then its Fourier series converges to it pointwise. This is famously not true, although it is true if $f$ is differentiable. In fact, the study of convergence properties of Fourier series was partly responsible for Cantor's invention of set theory.
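As a concrete aside on the pointwise picture (this is the standard Dirichlet-type result, not something stated above): for a piecewise smooth function the partial sums do converge at every point, and at a jump they converge to the average of the one-sided limits. A small sketch with the sawtooth $f(x) = x$ on $(-\pi, \pi)$, extended $2\pi$-periodically, whose Fourier series is $2\sum_{k \ge 1} (-1)^{k+1} \frac{\sin kx}{k}$:

```python
import numpy as np

def sawtooth_partial_sum(x, N):
    """N-th partial Fourier sum of f(x) = x on (-pi, pi), 2*pi-periodic:
    S_N(x) = 2 * sum_{k=1}^{N} (-1)^(k+1) * sin(k x)/k."""
    s = 0.0
    for k in range(1, N + 1):
        s += (-1) ** (k + 1) * np.sin(k * x) / k
    return 2.0 * s

# At a point where f is smooth, the partial sums approach f(x) = x.
print(sawtooth_partial_sum(2.0, 2000))   # close to 2.0

# At the jump x = pi (the periodic extension jumps from pi down to -pi),
# every partial sum vanishes: the average of the one-sided limits.
print(sawtooth_partial_sum(np.pi, 2000))
```

This also answers the questioner's intuition about jump discontinuities: the limit function does reproduce the jump, except that at the jump point itself the series picks out the midpoint value rather than either one-sided limit.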