Lots of basic one-variable polynomial inequalities end up on Math.SE, so I'm thinking a computer program could settle them once and for all by producing either a detailed proof or a counterexample. However, I don't know how practical this idea is. What is the computational complexity of proving or disproving that a given polynomial in one real variable is greater than or equal to zero for all values of that variable?
My first thought is to take the derivative of the polynomial. If the derivative is nonzero for all $x,$ then the original polynomial function must be strictly monotonic, hence of odd degree, so it is less than zero for all sufficiently small or all sufficiently large $x$ and the inequality fails. However, I don't know how to find the zeros of the derivative and use them to prove the inequality quickly on a computer.
Here's a procedure that is guaranteed to produce such a proof.
Consider a polynomial function
$$ f(x):=\sum_{k=0}^{n}{a_kx^k} $$
such that $f(x)\geqslant0$ holds for all $x\in\mathbb{R}$ (we may assume $f$ is nonconstant, since the constant case is trivial). It's apparent that when $x$ approaches infinity (no matter $+$ or $-$), any nonconstant polynomial function tends to either $+\infty$ or $-\infty$. Here $f(x)\geqslant0$, thus
$$ \lim_{x\to+\infty}f(x)=\lim_{x\to-\infty}f(x)=+\infty $$
and since $f$ therefore takes the same value at two distinct points, Rolle's theorem gives at least one stationary point in $\mathbb{R}$. Since $f$ is of degree $n$, its derivative $f'$ is of degree $n-1$, so by the Fundamental Theorem of Algebra there are finitely many stationary points (at most $n-1$).
Consider the finite set $X=\{f(\eta)~|~f^\prime(\eta)=0\}$. Since this set is finite and nonempty, it has a smallest element $\min X$, attained at finitely many stationary points $\xi_1,\xi_2,\ldots,\xi_p$, so that $f(\xi_1)=f(\xi_2)=\cdots=f(\xi_p)=\min X$. Because $f\to+\infty$ in both directions, $f$ attains a global minimum, necessarily at a stationary point; hence $f(\xi_1)=f(\xi_2)=\cdots=f(\xi_p)\leqslant f(x)$ for all $x\in\mathbb{R}$, and the equality occurs if and only if $x=\xi_1\vee x=\xi_2\vee\cdots\vee x=\xi_p$.
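For a concrete polynomial, the set $X$ and its minimum can be computed exactly; here is a minimal sketch in Python, assuming the sympy library is available, using the example polynomial $f(x)=x^4+x^3+x+2$ that appears at the end of this answer:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**4 + x**3 + x + 2

# Stationary points: the real roots of f'(x) = 4x^3 + 3x^2 + 1.
stationary = sp.real_roots(sp.diff(f, x))

# X = { f(eta) : f'(eta) = 0 }; its minimum is the global minimum of f.
X = [f.subs(x, eta) for eta in stationary]
min_X = min(X)
```

Here `real_roots` returns exact roots, so the bound $\min X$ is exact rather than a floating-point approximation.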
Let $g(x)=f(x)-f(\xi_p)$ and note that $g(\xi_1)=g(\xi_2)=\cdots=g(\xi_p)=0$. Thus, again by the Fundamental Theorem of Algebra (via the factor theorem), we can write $g(x)$ in such a manner that
$$ g(x)=\left[\prod_{t=1}^{p}{(x-\xi_t)^{m_t}}\right]\left(\sum_{k=0}^{n-m}{b_kx^k}\right) $$
where $m=\sum_{t=1}^{p}{m_t}$ and $m_t$ is the multiplicity of the root $\xi_t$, i.e. the largest possible exponent of $(x-\xi_t)$. Each $m_t$ must be even: otherwise $g$ would change sign at $\xi_t$, giving either $f(\xi_t+\varepsilon)<f(\xi_p)$ or $f(\xi_t-\varepsilon)<f(\xi_p)$ for small $\varepsilon>0$, contradicting the minimality of $f(\xi_p)=\min X$.
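For the same example, $g(x)=f(x)-\min X=x^4+x^3+x+1$, and a computer algebra system confirms the even multiplicity (a sketch, again assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
g = x**4 + x**3 + x + 1  # f(x) - min X for f(x) = x^4 + x^3 + x + 2

# factor_list returns (leading coefficient, [(factor, multiplicity), ...]).
coeff, factors = sp.factor_list(g)
# The real root xi = -1 enters with even multiplicity m = 2, and the
# remaining factor x^2 - x + 1 has no real roots.
```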
Since the product of even powers is positive except at the finitely many points $\xi_t$, we have $h(x)\geqslant0$ away from those points, and hence everywhere by continuity. Now we can replicate this procedure for
$$ h(x):=\sum_{k=0}^{n-m}{b_kx^k} $$
The degree of the polynomial strictly decreases at each step, so the procedure terminates after finitely many iterations.
Because we can unwind the recursion and write
$$ f(x)=\left[\prod_{t=1}^{p}{(x-\xi_t)^{m_t}}\right]h(x)+f(\xi_p) $$
with the same decomposition applied to $h$ in turn, the proof of your polynomial inequality $f(x)\geqslant0$ is constructed.
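The whole procedure can be sketched recursively in a few lines of Python (assuming sympy; `nonneg_certificate` is a name of my own choosing, and the sketch works cleanly only when every stationary point is exactly representable, e.g. rational, as in the example below):

```python
import sympy as sp

def nonneg_certificate(f, x):
    """Rewrite f as (product of even powers) * h + min X, recursing on h.

    Assumes f' has at least one real root, which holds whenever
    deg f is even and positive."""
    f = sp.expand(f)
    if sp.Poly(f, x).degree() == 0:
        return f
    # min X: the smallest value of f over its stationary points.
    c = min(f.subs(x, r) for r in sp.real_roots(sp.diff(f, x)))
    g = sp.expand(f - c)
    coeff, factors = sp.factor_list(g, x)
    prod, h = sp.Integer(1), sp.sympify(coeff)
    for base, exp in factors:
        if exp % 2 == 0:   # even powers are manifestly nonnegative
            prod *= base**exp
        else:              # leftover factor h, to be decomposed in turn
            h *= base**exp
    if sp.Poly(h, x).degree() == 0:
        return prod * h + c
    return prod * nonneg_certificate(h, x) + c

x = sp.symbols('x')
cert = nonneg_certificate(x**4 + x**3 + x + 2, x)
```

Expanding `cert` recovers the original polynomial, while every summand in the nested expression is visibly nonnegative.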
For example, take $f(x)=x^4+x^3+x+2$; the procedure outputs
$$ f(x)=(x+1)^2\left[\left(x-\tfrac12\right)^2+\tfrac34\right]+1\geqslant0 $$
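That output is easy to check mechanically (a quick verification, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 + x**3 + x + 2
# Every piece is manifestly nonnegative: a square times
# (a square plus 3/4), plus 1.
cert = (x + 1)**2 * ((x - sp.Rational(1, 2))**2 + sp.Rational(3, 4)) + 1
matches = sp.expand(cert - f) == 0
```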