The linear speedup theorem informally says the following:
If $M$ is a Turing machine that performs a certain task on input $x$ in time $f(n)$, then for every $\epsilon>0$ we can construct another Turing machine that performs the same task on $x$ in time $g(n)=\epsilon f(n)+n+2$.
Now, in Papadimitriou's book, at the bottom of page 33, I read:
"If $f(n)$ is superlinear, say $14n^2+31n$, then the constant in the leading term ($14$ in this example) can become arbitrarily small, that is, the time bound can suffer an arbitrary linear speedup. **The lower-order term, like $31n$ above, can be completely discarded.**"
I don't understand the part in bold of the above quote. In fact, applying the speedup theorem to $14n^2+31n$ with, for example, $\epsilon=10^{-10}$, we obtain $g=14\cdot10^{-10}n^2+(31\cdot 10^{-10}+1)n+2$; so why should one discard the term of degree $1$?
Remember that when talking about complexity (and related areas), the size of the problem at hand is implicitly taken to be very large. The interest is in what mathematicians would call the "limit as $n$ tends to infinity." And if $n$ is large, $n^2$ is much larger than $n$, and larger still than any given constant, be it $1$ or a trillion.
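To see this concretely, here is a small numeric sketch comparing the two terms of the sped-up bound from your question, $g(n)=14\cdot10^{-10}n^2+(31\cdot 10^{-10}+1)n+2$, using the $\epsilon=10^{-10}$ you chose (the constants come straight from the question, nothing else is assumed):

```python
# Split g(n) = eps*14*n^2 + (eps*31 + 1)*n + 2 into its leading
# (quadratic) part and its lower-order (linear + constant) part,
# using the eps from the question.
eps = 1e-10

def g_terms(n):
    quad = eps * 14 * n ** 2        # leading term
    rest = (eps * 31 + 1) * n + 2   # lower-order terms
    return quad, rest

for n in [10**3, 10**6, 10**9, 10**12]:
    quad, rest = g_terms(n)
    print(f"n = {n:>13}: quadratic = {quad:.3e}, lower-order = {rest:.3e}")
```

Note that with such a tiny $\epsilon$ the quadratic term only overtakes the linear one around $n\approx 7\cdot 10^8$; below that, the linear term dominates. But asymptotically the quadratic term always wins, which is why the lower-order term is discarded in the big-picture analysis.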
One has to be careful with the conclusions, however. In several cases the "most efficient algorithm known" is such that it becomes better than simpler ones only for problem sizes larger than the known universe.
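A toy illustration of that last caveat, with completely made-up cost functions (the $10^6$ constant and the $n\log n$ shape are hypothetical, chosen only to show how a huge constant pushes the crossover point far out):

```python
import math

# Hypothetical running-time costs: a "clever" algorithm whose constant
# factor is enormous, versus a simple quadratic one. Both functions and
# their constants are invented for illustration.
def clever(n):
    return 10**6 * n * math.log2(n)

def simple(n):
    return n ** 2

# Find the smallest power of two at which the clever algorithm wins.
n = 2
while clever(n) >= simple(n):
    n *= 2
print(f"clever beats simple from roughly n = {n}")  # around 33 million
```

Despite being asymptotically better, the "clever" algorithm only pays off once the input has tens of millions of elements; with even larger constants, the crossover can exceed any physically meaningful input size.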