Consider the epsilon delta definition of the limit below, as it is usually stated:
the limit of f(x) as x approaches a is L if and only if, for all ε > 0, there is a δ > 0 such that for all x, 0 < |x - a| < δ implies |f(x) - L| < ε.
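In symbols, the condition on the right of the "if and only if" is:

```latex
\forall \varepsilon > 0\; \exists \delta > 0\; \forall x\,
\bigl( 0 < |x - a| < \delta \rightarrow |f(x) - L| < \varepsilon \bigr)
```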
In this form, the occurrences of f, L, and a on the left of the "if and only if" are not bound by any quantifier, and therefore f, L and a must be considered free in the formula as a whole.
Question 1. Is it considered good practice, in logic, to specify formulas containing free variables? If so, what are the semantics of free variables in formulas containing them?
Question 2. When proving that a function has a limit, we replace f(x), L and a with terms (e.g. L = 0, f(x) = x^2, a = 0). What logical rule of inference allows us to do this? If there is no such rule of inference, then the above definition would have to be considered shorthand for something like
for all f, L and a, the limit of f(x) as x approaches a is L if and only if, for all ε > 0, there is a δ > 0 such that for all x, 0 < |x - a| < δ implies |f(x) - L| < ε.
Then, we could use universal specification to replace f, L and a with specific functions or constants. But that leads to question 3.
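For instance, instantiating the closed form with $f(x) = x^2$, $L = 0$, $a = 0$ via universal specification would yield the claim

```latex
\forall \varepsilon > 0\; \exists \delta > 0\; \forall x\,
\bigl( 0 < |x - 0| < \delta \rightarrow |x^2 - 0| < \varepsilon \bigr)
```

which is witnessed by taking $\delta = \sqrt{\varepsilon}$.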
Question 3. To what extent can we treat function symbols (e.g. f), as variables? Can we quantify function symbols?
I'm not sure how to answer your second question, but I think I can manage the other two.
Answer 1. A formula with free variables is typically written as something resembling $\varphi(v_1,\ldots,v_n)$, where $v_1,\ldots,v_n$ are the free variables and the bound variables are not among $v_1,\ldots,v_n$. Written this way, $\varphi(v_1,\ldots,v_n)$ does not have a truth value on its own; but if $\mathcal M$ is a model and $a_1,\ldots,a_n \in |\mathcal M|$, then either $\mathcal M \models \varphi(a_1,\ldots,a_n)$ or $\mathcal M \models \neg \varphi(a_1,\ldots,a_n)$. (I'm assuming this is the sort of answer you're looking for when you ask about semantics.)
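To make this concrete, here is a small illustrative example (my own choice of formula) in the language of arithmetic:

```latex
\varphi(v) \;:=\; \exists y\,(y + y = v),
\qquad
\mathbb N \models \varphi(4),
\qquad
\mathbb N \models \neg\varphi(3).
```

The formula $\varphi(v)$ by itself is neither true nor false; it only acquires a truth value once we fix a model and an element to plug in for $v$.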
Answer 3. You can quantify over function symbols, but then you would be leaving first-order logic and venturing into second-order logic. (Note that if you want to quantify over a symbol $f$, then $f$ should not be in your language $\mathcal L$, because then it would have a fixed interpretation in any $\mathcal L$-structure.) In first-order logic, quantifiers range over the individual elements of a model, whereas second-order logic also allows quantification over sets (and hence over relations and functions). For example, $\forall x \exists y (y>x)$ is a first-order statement that we could interpret in $\mathbb N$ with $x,y \in \mathbb N$. However,
$$\forall X\bigl((0 \in X \wedge \forall n(n \in X \rightarrow n+1 \in X)) \rightarrow \forall n(n \in X)\bigr)$$
quantifies over $X \in P(\mathbb N)$, making it second-order.
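As an aside, proof assistants based on higher-order logic permit exactly this kind of quantification over predicates. A sketch in Lean 4 (my own formalization, not from the question):

```lean
-- Quantifying over a predicate X : Nat → Prop is legal in higher-order logic.
-- The statement below is the induction principle written with an explicit ∀ X.
example : ∀ X : Nat → Prop,
    (X 0 ∧ ∀ n, X n → X (n + 1)) → ∀ n, X n := by
  intro X h n
  induction n with
  | zero => exact h.1
  | succ k ih => exact h.2 k ih
```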
The reason you can't quantify over function symbols in first-order logic is that they implicitly represent sets. A formula $\varphi(v_1,\ldots,v_n):=\exists f\, \psi(f,v_1,\ldots,v_n)$, in which we try to quantify over functions, may be interpreted as follows: if $\mathcal M$ is an $\mathcal L$-structure and $a_1,\ldots,a_n \in |\mathcal M|$, then $\mathcal M \models \varphi(a_1,\ldots,a_n)$ if there is some expansion $\mathcal M^*$ of $\mathcal M$ to the language $\mathcal L \cup \{f\}$ such that $\mathcal M^* \models \theta(a_1,\ldots,a_n)$, where $\theta(\bar v)$ is the formula we get from $\psi(f,\bar v)$ by interpreting $f$ as a function symbol instead of a variable. This amounts to quantifying over a set because the graph of $f$ is essentially a subset of $|\mathcal M|^2$, made explicit when we move up to $\mathcal M^*$.
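In a higher-order setting, the detour through an expansion $\mathcal M^*$ is unnecessary, because functions are first-class objects that $\exists$ can bind directly. A minimal Lean 4 sketch (again my own example):

```lean
-- ∃ f quantifies directly over functions Nat → Nat; the witness here is the
-- successor function, and the proof that it satisfies the body is rfl.
example : ∃ f : Nat → Nat, ∀ n, f n = n + 1 :=
  ⟨fun n => n + 1, fun _ => rfl⟩
```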