The squared loss of a hypothesis $h: X \to [0,1]$ is
$$\ell(h) = E_{x,y \sim D}[(h(x)-y)^2]$$
where $D$ is a distribution over $X \times \{0,1\}$.
In my case, I have the following two convex combinations of hypotheses, for a fixed $a \in [0,1]$:
\begin{gather} h_a = a \cdot h_1 + (1-a)\cdot h_2 \\ \tilde{h}_a = a \cdot h_1 + (1-a)\cdot \tilde{h}_2 \end{gather}
I want to prove the following statement: if $\ell(\tilde{h}_2) \leq \ell(h_2)$, then $\ell(\tilde{h}_a) \leq \ell (h_a)$.
I have a somewhat lengthy proof that uses basic algebra to arrive at $\ell(\tilde{h}_{a})-\ell(h_{a}) = (1-a)^{2}\left[\ell(\tilde{h}_{2})-\ell(h_{2})\right]$, which gives the claim since $(1-a)^{2}\geq 0$.
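For what it's worth, here is a quick numerical sanity check of that identity on a small finite distribution. One caveat from my own algebra (not part of the claim above): expanding both sides leaves a cross term $2a(1-a)\,\mathbb{E}[(h_1(x)-y)(\tilde{h}_2(x)-h_2(x))]$, so in the check below I take $h_1(x) = \mathbb{E}[y \mid x]$, which makes that term vanish; the choices of $h_2$, $\tilde{h}_2$, $a$, and the distribution are otherwise arbitrary illustrative values.

```python
# Sanity check of l(h~_a) - l(h_a) = (1-a)^2 [l(h~_2) - l(h_2)]
# on a finite distribution D over X x {0,1}, X = {0, 1, 2}.
# Assumption (see lead-in): h_1(x) = E[y|x], which kills the cross term
# 2a(1-a) E[(h_1(x)-y)(h~_2(x)-h_2(x))] that otherwise appears.

# p[x] = P(X = x), eta[x] = P(y = 1 | X = x)
p = [0.2, 0.5, 0.3]
eta = [0.1, 0.6, 0.9]

def sq_loss(h):
    """E_{x,y~D}[(h(x)-y)^2], computed exactly over the finite support."""
    return sum(p[x] * (eta[x] * (h[x] - 1.0) ** 2
                       + (1 - eta[x]) * (h[x] - 0.0) ** 2)
               for x in range(len(p)))

h1 = eta[:]               # h_1(x) = E[y|x]  (the assumption above)
h2 = [0.3, 0.2, 0.7]      # arbitrary hypotheses with values in [0,1]
h2t = [0.15, 0.5, 0.95]   # h~_2

a = 0.3
mix = lambda f, g: [a * fi + (1 - a) * gi for fi, gi in zip(f, g)]

ha, hat = mix(h1, h2), mix(h1, h2t)   # h_a and h~_a

lhs = sq_loss(hat) - sq_loss(ha)
rhs = (1 - a) ** 2 * (sq_loss(h2t) - sq_loss(h2))
print(abs(lhs - rhs))  # ~0 up to floating-point rounding
```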
I am wondering whether there is a simpler or more elegant argument for this basic fact. I feel like there should be, but I'm not seeing it.