In OLS, we're given $n$ regressors $x_1,\ldots,x_n$, assembled into a matrix $X$, and $n$ scalar responses $y_1,\ldots,y_n$ assembled into a vector $y$, and we want to find a vector $c$ of coefficients which minimizes the $L^2$-error $||y - Xc||^2$.
Suppose now that the scalar responses are obtained in two different ways, yielding $y^{(1)}_1,\ldots,y^{(1)}_n$ and $y^{(2)}_1,\ldots,y^{(2)}_n$. I merge them into a single scalar response by taking a weighted average: $$ y_i = \alpha y^{(1)}_i + \beta y^{(2)}_i \qquad \text{for} \qquad i = 1,\ldots,n \text{.} $$ I now want to find the $\alpha$ and $\beta$ which produce the smallest possible fitted $L^2$-error; in other words I want to find $$ \operatorname{argmin}_{\alpha,\beta} \operatorname{min}_c ||(\alpha y^{(1)} + \beta y^{(2)}) - Xc||^2 \text{.} $$
Surely this optimization problem is known in the literature. What is it called?
I'd rewrite this in the following way: Let $z = \alpha y^{(1)} + \beta y^{(2)}$. For fixed $z$, the optimal minimizer $c$ is given by $$(X^\top X)^{-1}X^\top z$$ (assuming $X$ has full column rank), so that your problem reduces to $$\min \| z - X (X^\top X)^{-1}X^\top z \|^2 = \min \| (I - X(X^\top X)^{-1}X^\top ) z \|^2. $$ Now, let $C = I - X(X^\top X)^{-1}X^\top$, the so-called residual-maker (annihilator) matrix. The above problem is equivalent to solving $$\min z^\top (C^\top C) z = \min z^\top C z,$$ where the simplification uses that $C$ is symmetric and idempotent. Note that without a restriction on $(\alpha, \beta)$ the minimum is trivially $0$ at $\alpha = \beta = 0$, so we impose the restriction that $z$ is contained in the convex hull of $y^{(1)}$ and $y^{(2)}$, i.e. $\alpha + \beta = 1$ with $\alpha, \beta \ge 0$. Additionally, $C^\top C$ is positive semidefinite, which ensures that the above problem is an instance of convex quadratic programming, which is solvable in polynomial time.
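A minimal NumPy sketch of this reduction, under the assumed normalization $\alpha + \beta = 1$, $\alpha, \beta \ge 0$ (the data here is randomly generated for illustration): parametrizing $z(\alpha) = \alpha y^{(1)} + (1-\alpha) y^{(2)}$ makes the objective a one-dimensional convex quadratic in $\alpha$, so the constrained minimizer is the unconstrained one clipped to $[0, 1]$.

```python
import numpy as np

# Illustrative data: n = 6 observations, p = 2 regressors.
rng = np.random.default_rng(0)
n, p = 6, 2
X = rng.normal(size=(n, p))
y1 = rng.normal(size=n)
y2 = rng.normal(size=n)

# Residual-maker matrix C = I - X (X^T X)^{-1} X^T
# (assumes X has full column rank).
C = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

# With alpha + beta = 1, write z(alpha) = y2 + alpha * d, d = y1 - y2.
# Then z^T C z = a*alpha^2 + 2*b*alpha + const, with
#   a = d^T C d >= 0  and  b = d^T C y2,
# so the minimizer over [0, 1] is -b/a clipped to [0, 1].
d = y1 - y2
a = d @ C @ d
b = d @ C @ y2
alpha = float(np.clip(-b / a, 0.0, 1.0)) if a > 0 else 0.0
beta = 1.0 - alpha

# Recover the fitted coefficients and the achieved L^2 error.
z = alpha * y1 + beta * y2
c = np.linalg.solve(X.T @ X, X.T @ z)
err = np.sum((z - X @ c) ** 2)
print(alpha, beta, err)
```

With more than two candidate responses the same parametrization yields a higher-dimensional convex QP over the simplex, at which point a general-purpose QP solver becomes the natural tool.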