I'm not very well versed in how limits work at a deeper level, but is there an intuitive reason why:
- The limit of the overshoot of the $m$th partial Fourier sum as $m\to \infty$ is approximately $0.0895$ (as a fraction of the size of the jump). I've seen this stated for partial sums of the Fourier series of a function with a jump discontinuity.
- But I've also heard that as $m\to\infty$ the Fourier series should be an exact replica of the function it is approximating (that is, with no overshoot)?
Then how come the error/overshoot does not vanish according to the first bullet point, yet does vanish according to the second? (Don't the two limits seem contradictory?)
The point at which the overshoot is measured is not fixed. If you fix an $x$ value, the limit of the partial sums at that point is the value of your original function (which is your bullet point #2); but if you instead look at the maximum value over a fixed small interval around the jump, that limit yields an overshoot.
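A quick numerical sketch of this distinction, using the square wave $\operatorname{sign}(x)$ on $(-\pi,\pi)$ as the function with a jump (the choice of function, sample point, and grid here are mine, not from the question):

```python
import numpy as np

def square_partial_sum(x, n_terms):
    """Partial Fourier sum of the square wave sign(x) on (-pi, pi):
    S_N(x) = (4/pi) * sum over odd k <= N of sin(k x) / k."""
    s = np.zeros_like(x, dtype=float)
    for k in range(1, n_terms + 1, 2):
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

xs = np.linspace(1e-4, 0.5, 20001)  # a small fixed window just right of the jump at 0
for n in (50, 500, 5000):
    at_point = square_partial_sum(np.array([1.0]), n)[0]  # value at the fixed point x = 1
    peak = square_partial_sum(xs, n).max()                # max over the window near the jump
    # excess above f = 1, expressed as a fraction of the jump (which has size 2)
    overshoot_fraction = (peak - 1.0) / 2.0
    print(f"N={n:5d}  S_N(1.0)={at_point:.5f}  overshoot/jump={overshoot_fraction:.5f}")
```

At the fixed point the partial sums settle down to $1$, while the overshoot fraction near the jump stays at roughly $0.0895$ no matter how large $N$ gets: both limits from the question, side by side.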
This Desmos graph of $f_n(x) = n(1-x)x^n$ exhibits a similar phenomenon in a much simpler way. Play around with the slider for $n$, making it large. You can see that for any fixed $x$ value the limit as $n \rightarrow \infty$ is $0$, but the $\max$ over $[0,1]$ approaches some finite non-zero value. (I'd bet it's actually $e^{-1}$, though I haven't verified it.)
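The $e^{-1}$ guess checks out: setting $f_n'(x) = 0$ puts the maximum at $x = n/(n+1)$, where $f_n = (n/(n+1))^{n+1} \to e^{-1}$. A short numerical confirmation (the sample point $x = 0.9$ and grid are my choices):

```python
import math
import numpy as np

def f(n, x):
    # f_n(x) = n (1 - x) x^n. Calculus puts the max at x = n/(n+1),
    # where f_n = (n/(n+1))^(n+1), which tends to 1/e as n -> infinity.
    return n * (1 - x) * x**n

xs = np.linspace(0.0, 1.0, 200001)
for n in (10, 100, 1000, 10000):
    pointwise = f(n, 0.9)   # fixed x: tends to 0
    peak = f(n, xs).max()   # max over [0,1]: tends to 1/e
    print(f"n={n:6d}  f_n(0.9)={pointwise:.6f}  max={peak:.6f}  1/e={math.exp(-1):.6f}")
```

Same story as the Fourier partial sums: the pointwise limit is $0$ everywhere on $[0,1)$, but the location of the peak drifts toward $1$ as $n$ grows, so the maximum never dies out.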