I was reading on optimization and thought of the following proposition:
*The maximum (minimum) of a function in a multivariate optimization problem is always higher (lower) than the maximum (minimum) of a function in a univariate optimization problem.*
Is there a theorem justifying this? In regression analysis, this effect shows up in OLS estimation of the parameters. For instance, the $R^2$ of the equation $y_i=\hat{b}_0+\hat{b}_1x_{1i}+\hat{b}_2x_{2i}+\hat{e}_i$ will always be at least as large as the $R^2$ of the equation $y_i=\hat{b}_0+\hat{b}_1x_{1i}+\hat{u}_i$.
In your italicised text "a function" occurs twice. Is the same function meant? If "yes", then it is obvious that we have $\geq$: if a function on a large domain is restricted to a smaller domain, the maximum may very well decrease, but it can never increase. In formulas: if $A\supset B$, then $\max_{x\in A}f(x)\geq \max_{x\in B}f(x)$. This is not a proposition, but "pure logic".
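The $R^2$ case from the question can be checked numerically. Below is a small sketch (names and data are illustrative, not from the post) that fits $y$ by OLS first on $x_1$ alone and then on $(x_1, x_2)$: restricting the multivariate fit by forcing $\hat{b}_2=0$ corresponds exactly to maximizing over the smaller domain $B\subset A$, so the two-regressor $R^2$ can never be smaller.

```python
import numpy as np

# Illustrative data (assumed, not from the post): y depends on both regressors.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of the OLS fit of y on the columns of X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

r2_uni = r_squared(x1.reshape(-1, 1), y)          # max over B (b2 forced to 0)
r2_multi = r_squared(np.column_stack([x1, x2]), y)  # max over A (b2 free)

# The univariate model is a restriction of the multivariate one,
# so its maximized R^2 cannot exceed the multivariate R^2.
print(r2_multi >= r2_uni)
```

The inequality holds for any data set, not just this one: the univariate fit is always attainable by the multivariate model with $\hat{b}_2=0$, which is the $A\supset B$ argument in concrete form.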