Suppose we have an unknown system with two inputs and two outputs:
inputs $x = [x_1, x_2]$ and outputs $y = [y_1, y_2]$.
The system has the following properties:
$y_1 = f_1(x_1, x_2)$ ; $y_2 = f_2(x_1, x_2)$.
I don't know $f_1$ and $f_2$, but for any given $x$ the system can generate the corresponding $y$.
The goal: find the $x$ that minimizes $(y_1 + y_2)/2$, subject to
$1 \leq x_1, x_2 \leq 2$.
[Note: the system actually has $n$ inputs and $n$ outputs, but for simplicity I assumed $n = 2$.]
One class of methods commonly used for such problems is gradient descent. The linked wiki article even gives pseudocode and a worked example.
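Since $f_1$ and $f_2$ are unknown, the gradient has to be estimated numerically from queries to the system. Here is a minimal sketch in Python/NumPy; the `objective` function is a made-up stand-in that you would replace with a call to the real system:

```python
import numpy as np

def objective(x):
    # Hypothetical stand-in for the unknown system: in practice,
    # feed x into the real system and return (y1 + y2) / 2.
    y1 = np.sin(3 * x[0]) + x[1] ** 2   # made-up f_1
    y2 = np.cos(2 * x[1]) + x[0]        # made-up f_2
    return (y1 + y2) / 2

def numerical_gradient(f, x, h=1e-6):
    # Central differences, since f is only available as a black box.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def gradient_descent(f, x0, lr=0.05, steps=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * numerical_gradient(f, x)
        x = np.clip(x, 1.0, 2.0)  # project back into the box 1 <= x_i <= 2
    return x

x_best = gradient_descent(objective, [1.5, 1.5])
```

The projection step (`np.clip`) is the simplest way to honor the box constraint. Note that each call to `objective` costs one query to the real system, so the finite-difference gradient costs $2n$ queries per descent step.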
For more ideas, consider reading chapter 10 of Numerical Recipes in C, especially sections 10.5 (Direction Set / Powell's Methods), 10.6 (Conjugate Gradient Methods), and 10.9 (Simulated Annealing).
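If SciPy is available, Powell's direction-set method (the one in section 10.5) is particularly convenient here because it needs no derivatives at all, which suits a black-box system. A sketch, again with a hypothetical stand-in for the real system:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Hypothetical stand-in for the unknown system: in practice,
    # query the real system with x and return (y1 + y2) / 2.
    y1 = np.sin(3 * x[0]) + x[1] ** 2
    y2 = np.cos(2 * x[1]) + x[0]
    return (y1 + y2) / 2

# Powell's method is derivative-free; bounds enforce 1 <= x_i <= 2.
res = minimize(objective, x0=[1.5, 1.5], method="Powell",
               bounds=[(1, 2), (1, 2)])
```

`res.x` holds the minimizer found and `res.fun` the corresponding objective value.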
Most such techniques seek local extrema (as opposed to global ones). If you want a global extremum without any assumptions on $f_1, f_2$, you have to restart the local method from multiple points in the search space and then take the smallest (or largest) of the resulting candidates.
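The restart idea can be sketched as follows; SciPy's derivative-free Powell method serves as the local optimizer, and `objective` is again a hypothetical multimodal stand-in for the real system:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Hypothetical multimodal stand-in for the unknown system.
    y1 = np.sin(5 * x[0]) + (x[1] - 1.3) ** 2
    y2 = np.cos(4 * x[1]) + x[0]
    return (y1 + y2) / 2

rng = np.random.default_rng(seed=42)
best = None
for _ in range(20):                       # 20 random restarts
    x0 = rng.uniform(1.0, 2.0, size=2)    # random start inside the box
    res = minimize(objective, x0, method="Powell", bounds=[(1, 2), (1, 2)])
    if best is None or res.fun < best.fun:
        best = res                        # keep the best local minimum
```

More restarts raise the chance of hitting the basin of the global minimum, at the cost of proportionally more queries to the system.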