I don't know how to ask this question in general, so I'll start with an example. Suppose I want to minimize the function
$$ f(x) = |x|, $$
which trivially has its minimum at $\hat{x} = 0$. I wouldn't be able to use the gradient method in this case because the function isn't smooth at $0$; I could use methods based on the subgradient, but let's disregard that. The alternative I'm thinking of is to build a sequence $f_n$ of smooth functions converging to $f$, find the sequence of minimizers $\hat{x}_n$, and hope that
$$ \lim_{n \to \infty} \hat{x}_n=\hat{x} $$
where $\hat{x}$ is the minimizer of $f$. Now in my case since
$$ f(x) = |x| = \max(0,x)+\max(-x,0), $$
and since from this question I know that
$$ \lim_{n \to +\infty} \frac{1}{n} \ln(1+e^{nx}) = \max(0,x) $$
I can write
$$ f_n(x) = \frac{1}{n}\ln\left(2+e^{nx}+e^{-nx} \right). $$
In this case we trivially have $\hat{x}_n = 0$, which converges to the minimizer of $f$; but the point is that for each $n$ we can use standard unconstrained minimization algorithms to find $\hat{x}_n$. My question now is: in general, is there a theorem that relates function approximations to algorithms that minimize some function? (By general I mean $f : \mathbb{R}^n \to \mathbb{R}$.)
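As a numerical sanity check on this smoothing idea, here is a short sketch (the function names are mine, not standard) that minimizes $f_n$ with BFGS for increasing $n$ and watches $\hat{x}_n$ approach $0$:

```python
import numpy as np
from scipy.optimize import minimize

def f_n(x, n):
    # Smooth approximation f_n(x) = (1/n) ln(2 + e^{nx} + e^{-nx}) of |x|.
    # np.logaddexp(0, t) = ln(1 + e^t) avoids overflow for large n|x|.
    return (np.logaddexp(0.0, n * x) + np.logaddexp(0.0, -n * x)).sum() / n

def grad_f_n(x, n):
    # The derivative simplifies to sigmoid(nx) - sigmoid(-nx) = tanh(nx/2).
    return np.tanh(n * x / 2.0)

for n in [1, 10, 100, 1000]:
    res = minimize(f_n, x0=np.array([2.0]), args=(n,), jac=grad_f_n, method="BFGS")
    print(n, res.x[0])
```

The printed $\hat{x}_n$ shrink toward $0$; note also that $f_n(0) = \ln(4)/n$, so the optimal *values* converge to $f(\hat{x}) = 0$ as well.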
By a theorem I mean conditions on the sequence that would tell me: "look, if for every $n$ you can manage to find $\hat{x}_n$, then the sequence will converge to $\hat{x}$".
I'm not an expert on optimization theory; I know some algorithms, but not much about the theory behind them.
Consider $f:\mathbb{R}^d \to \mathbb{R}$, and suppose you are interested in approximating the solution to $\underset{x}{\min} f(x)$ using a sequence of approximations $\underset{x}{\min} f_n(x)$, where $\{f_n\}$ is a sequence of approximating functions with $f_n:\mathbb{R}^d \to \mathbb{R}$, and we assume that corresponding minimizers exist.
A minimum requirement for the minimizers $\{\hat{x}_n\}$ of the sequence of approximating problems to "converge" to a minimizer $\hat{x}$ of the original problem is, of course, that $f_n \to f$ pointwise on $\mathbb{R}^d$.
A technical point is that you can only ask that every accumulation point of $\{\hat{x}_n\}$ be a minimizer of the original problem. To see this, take $d = 1$ and suppose $f(x) = c$ for all $x \in \mathbb{R}$, where $c$ is a constant. Then every point $x \in \mathbb{R}$ is a minimizer of $f$, and we can have $\hat{x}_n = (-1)^n$: this sequence does not converge, but both of its accumulation points $\pm 1$ are minimizers.
With the above technical point in mind, a sufficient condition for every accumulation point of $\{\hat{x}_n\}$ to be a minimizer of the original problem is that the functions $\{f_n\}$ converge uniformly to $f$ on $\mathbb{R}^d$. A proof of this claim follows readily from the definition of uniform convergence.
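For completeness, here is a sketch of that argument, assuming for the last step that $f$ is lower semicontinuous (which holds automatically when the $f_n$ are continuous and converge uniformly). Let $\varepsilon_n = \sup_x |f_n(x) - f(x)| \to 0$. Using first $|f - f_n| \le \varepsilon_n$, then the optimality of $\hat{x}_n$ for $f_n$, then $|f_n - f| \le \varepsilon_n$ again,
$$ f(\hat{x}_n) \le f_n(\hat{x}_n) + \varepsilon_n \le f_n(\hat{x}) + \varepsilon_n \le f(\hat{x}) + 2\varepsilon_n, $$
so $f(\hat{x}_n) \to f(\hat{x}) = \min_x f(x)$. Now if $x^*$ is an accumulation point of $\{\hat{x}_n\}$, say $\hat{x}_{n_k} \to x^*$, lower semicontinuity gives $f(x^*) \le \liminf_k f(\hat{x}_{n_k}) = \min_x f(x)$, so $x^*$ is itself a minimizer.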
The above uniform convergence requirement is typically stronger than what is needed. The notion of convergence that suffices (I think it may be the minimal notion needed, but I'm not sure) is epi-convergence; see Chapter 7 of Rockafellar and Wets, *Variational Analysis*. In particular, Definition 7.1 therein defines epi-convergence, Proposition 7.2 provides a general way to verify epi-convergence, Proposition 7.4 provides easily checkable sufficient conditions for verifying epi-convergence, Proposition 7.15 relates epi-convergence to uniform convergence, and Theorems 7.31 and 7.33 provide conditions under which the objective values and solutions of the sequence of approximations "converge to those of the original problem".
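Incidentally, for the $|x|$ example in the question, uniform convergence (which, per Proposition 7.15, is related to epi-convergence) is easy to verify by hand: a short computation gives $f_n(x) - |x| = \frac{2}{n}\ln\bigl(1+e^{-n|x|}\bigr)$, which is maximized at $x = 0$ with value $\ln(4)/n \to 0$. A quick numerical check of this bound (a sketch, assuming NumPy):

```python
import numpy as np

# Check sup_x |f_n(x) - |x|| = ln(4)/n on a fine grid that includes x = 0,
# where the error (2/n) ln(1 + e^{-n|x|}) is maximized.
xs = np.linspace(-5.0, 5.0, 100001)
for n in [1, 10, 100]:
    fn = (np.logaddexp(0.0, n * xs) + np.logaddexp(0.0, -n * xs)) / n
    sup_err = np.max(np.abs(fn - np.abs(xs)))
    print(n, sup_err, np.log(4) / n)
```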
As a bonus, the above framework readily extends to the case when you have (approximations of) constraints as well.