This problem comes up in Stephen Boyd's Convex Optimization II lecture. There we want to choose the $\alpha_i$'s to minimize $$\frac{R^2+G^2 \sum_{i=1}^k \alpha_i^2}{2\sum_{i=1}^k \alpha_i}$$
where $\alpha_i>0$, $R>0$, $G>0$. Around 20:00 in the video, the professor justifies this with the statement that a quadratic over a positive linear function is convex. How can one show that this is true? Also, could somebody give a proof for the general case, where the function is of the form $$\frac{x^T Ax+b^Tx+c}{d^Tx+e}$$ with $\textbf{Dom}(x)=\{x:d^Tx+e>0\}$ and $A$ positive semi-definite?
A function of several variables is convex iff its restriction to every line is convex, so it suffices to handle the one-variable case. So suppose $$f(x)=\frac{\frac12ax^2+bx+c}{dx+e};$$
then a bit of calculation yields
$$(dx+e)^3f''(x)=ae^2-2bde+2cd^2$$
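In case anyone wants to double-check that identity without redoing the calculus by hand, here is a quick symbolic verification sketch using SymPy (the symbol names mirror the ones above):

```python
# Symbolic sanity check of the identity
#   (d*x + e)^3 * f''(x) = a*e^2 - 2*b*d*e + 2*c*d^2.
import sympy as sp

x, a, b, c, d, e = sp.symbols('x a b c d e')
f = (sp.Rational(1, 2) * a * x**2 + b * x + c) / (d * x + e)

# Multiply the second derivative by (d*x + e)^3 and simplify.
lhs = sp.simplify((d * x + e)**3 * sp.diff(f, x, 2))
rhs = a * e**2 - 2 * b * d * e + 2 * c * d**2

# The difference should simplify to zero identically in x.
assert sp.simplify(lhs - rhs) == 0
```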
so we want the RHS of that (a constant, which is a promising start) to be nonnegative. Well, it's a quadratic function of $e$ whose reduced discriminant is $(bd)^2-(a)(2cd^2)=(b^2-2ac)d^2$.
If the numerator of $f$ is an everywhere-positive quadratic (which I think the lecturer intended, though he didn't say it, and which is certainly true of his particular function), then its discriminant satisfies $b^2-2ac<0$; also $a>0$ (to make the numerator positive for large $x$) and $c>0$ (to make the numerator positive at $x=0$). Therefore the discriminant in the previous paragraph is also negative (unless $d=0$, but in that case our function is convex for easier reasons), so $ae^2-2bde+2cd^2$ never vanishes and has the same sign for all $e$; in particular it has the same sign as its value $2cd^2$ at $e=0$, i.e., the same sign as $c$. And since, again, we are assuming that our numerator is positive everywhere, $c>0$ and we're done.
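As a sanity check (not a proof), one can also test midpoint convexity of the original objective numerically; the values of $R$, $G$ and the dimension below are arbitrary choices for the check, not anything from the lecture:

```python
# Numerical spot-check: sample pairs of points in the positive orthant
# and verify midpoint convexity of the objective,
#   f((u + v)/2) <= (f(u) + f(v))/2.
import numpy as np

R, G = 1.0, 2.0  # arbitrary positive constants chosen for this check

def f(alpha):
    """(R^2 + G^2 * sum(alpha_i^2)) / (2 * sum(alpha_i)) for alpha > 0."""
    return (R**2 + G**2 * np.sum(alpha**2)) / (2 * np.sum(alpha))

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.uniform(0.1, 5.0, size=4)
    v = rng.uniform(0.1, 5.0, size=4)
    assert f((u + v) / 2) <= (f(u) + f(v)) / 2 + 1e-12
```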
(You can maybe save some of that "bit of calculation", which I got wrong at least twice, as follows: consider the function $g(x,y)=\frac{ax^2+b}y$ with $a,b\ge0$. The second derivatives of this are pretty painless to compute, and it's then easy to see that $g$ is convex where $y>0$. Completing the square in the numerator writes our $f$ as such a $g$ composed with an affine map, and an affine-then-convex composition is itself convex.)
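A sketch of that remark in SymPy: for $y>0$ and $a,b\ge0$, the Hessian of $g(x,y)=(ax^2+b)/y$ has $g_{xx}=2a/y\ge0$ and determinant $4ab/y^4\ge0$, which for a $2\times2$ symmetric matrix is enough for positive semidefiniteness:

```python
# Hessian check for g(x, y) = (a*x^2 + b)/y on y > 0 with a, b >= 0.
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
g = (a * x**2 + b) / y
H = sp.hessian(g, (x, y))

# For a 2x2 symmetric matrix, a nonnegative diagonal entry plus a
# nonnegative determinant gives positive semidefiniteness.
assert sp.simplify(H[0, 0] - 2 * a / y) == 0
assert sp.simplify(H.det() - 4 * a * b / y**4) == 0
```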