I am looking at formulas for testing pointwise versus uniform convergence from Kenneth Ross' text, which are respectively: $$\lim _{n \rightarrow \infty} f_n (x) = f(x), \ \text{for all } x \in S$$ and $$\lim_{n \rightarrow \infty} [\sup \{|f(x) - f_n(x)|: x \in S\}] = 0.$$
In all of the examples and exercises in the text, however, the limit functions $f(x)$ turn out to be constants.
Here is my question: why doesn't the text just write a constant $c$ directly instead of $f(x)$? Is it that, from the point of view of $f_n(x)$, $f(x)$ is treated as a constant? What does $f(x)$ really mean? A typical $f_n(x)$ from the exercises is $f_n(x) = x^n / (n + x^n)$ on the interval $[0, 1]$ or $[0, \infty)$.
Thank you for your time and help.
Of course you can put $$g_n(x)=f_n(x)-f(x)$$ and reduce to the case $$g(x)=0.$$ Indeed, $\sup \{|g_n(x) - 0| : x \in S\} = \sup \{|f_n(x) - f(x)| : x \in S\}$, so $g_n \to 0$ uniformly on $S$ if and only if $f_n \to f$ uniformly on $S$. Your book treats the case $f(x)=c$ for the sake of simplicity, knowing that it is just as interesting as the general case.
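For a concrete check, here is a short numerical sketch (my own, not from the text) that estimates $\sup \{|f_n(x) - f(x)| : x \in [0,1]\}$ on a grid for the exercise's $f_n(x) = x^n/(n + x^n)$, whose pointwise limit on $[0,1]$ is the constant $f(x) = 0$:

```python
# Sketch: grid approximation of sup{|f_n(x) - f(x)| : x in [0, 1]}
# for f_n(x) = x^n / (n + x^n), whose pointwise limit there is f(x) = 0.
def f_n(x, n):
    return x**n / (n + x**n)

def sup_error(n, points=10001):
    # Evaluate |f_n(x) - 0| on an evenly spaced grid including x = 1.
    xs = [i / (points - 1) for i in range(points)]
    return max(abs(f_n(x, n) - 0) for x in xs)

for n in (1, 10, 100):
    # Since t/(n + t) is increasing in t = x^n, the sup is attained
    # at x = 1 and equals 1/(n + 1), which tends to 0.
    print(n, sup_error(n))
```

The computed suprema $1/(n+1)$ shrink to $0$, illustrating uniform convergence on $[0,1]$; comparing $f_n$ to the constant $0$ here is exactly the reduction $g_n = f_n - f$ described above.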