I have attempted to prove the following:
Let $f(x)= x^{T}Ax +2b^{T}x + c$, where $A\in\mathbb R^{n\times n}$ is a symmetric matrix, $b\in\mathbb R^{n}$ and $c\in\mathbb R$. Then:
i) $x$ is a critical point iff $Ax=-b$.
ii) If $A\geq 0$, then $x$ is a global minimum iff $Ax=-b$.
iii) If $A > 0$, then $x=-A^{-1}b$ is a strict global minimum.
My attempt:
Let $Q(x)=x^{T}Ax$.
By midpoint convexity, it suffices to show that for any $x,y\in\mathbb R^{n}$, $$Q\Big(\frac{x+y}2\Big)\leq\frac12\big(Q(x)+Q(y)\big).$$ Expanding, this reduces to $$\frac12(x+y)^TA(x+y)\leq x^TAx+y^TAy\\ x^TAy+y^TAx\leq x^TAx+y^TAy,$$ namely $$(x-y)^TA(x-y)\geq0,$$ which follows directly from positive semidefiniteness. (Since $Q$ is continuous, midpoint convexity implies convexity.)
Since $f(x) = Q(x) + 2b^{T}x+c$, when $A\geq 0$ the function $f$ is convex as a sum of convex functions.
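As a quick numerical sanity check (not part of the proof; the matrix and variable names below are my own), the midpoint inequality can be tested with a random positive semidefinite $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random positive semidefinite A = M^T M (symmetric by construction)
M = rng.standard_normal((4, 4))
A = M.T @ M

def Q(x):
    """Quadratic form Q(x) = x^T A x."""
    return x @ A @ x

x = rng.standard_normal(4)
y = rng.standard_normal(4)

# Midpoint convexity: Q((x+y)/2) <= (Q(x) + Q(y)) / 2
lhs = Q((x + y) / 2)
rhs = (Q(x) + Q(y)) / 2
assert lhs <= rhs + 1e-12

# Equivalent inequality from the reduction: (x - y)^T A (x - y) >= 0
assert (x - y) @ A @ (x - y) >= -1e-12
```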
i) Let $x$ be a critical point. By definition of critical point, $x$ satisfies $\nabla f(x) = 0$. Since $A$ is symmetric, $\nabla f(x)=2Ax + 2b = 0 \implies Ax=-b$. For the other direction, if $Ax=-b$, then $\nabla f(x)=2(Ax+b)=0$, so $x$ is a critical point. (Here $A$ need not be invertible, so I avoid writing $x=-A^{-1}b$.)
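The gradient formula $\nabla f(x)=2(Ax+b)$ for symmetric $A$ can also be checked against finite differences (again just an illustrative sketch with my own made-up data; central differences are exact for quadratics up to rounding):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2          # symmetric A
b = rng.standard_normal(n)
c = 0.7

def f(x):
    return x @ A @ x + 2 * b @ x + c

def grad_f(x):
    # For symmetric A, the gradient of x^T A x + 2 b^T x + c is 2(Ax + b)
    return 2 * (A @ x + b)

# Compare against central finite differences along each coordinate direction
x = rng.standard_normal(n)
h = 1e-6
num = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(n)])
assert np.allclose(num, grad_f(x), atol=1e-5)
```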
ii) Let $x$ be a global minimum. Then $x$ satisfies $\nabla f(x) = 0 \implies 2Ax + 2b = 0 \implies Ax=-b$. For the other direction, let $Ax=-b$; substituting gives $f(x)= b^{T}x+c$, a linear function, which is convex, and for a convex function every local minimum is a global minimum. But it seems I don't have enough data to conclude that an optimal solution even exists.
iii) If $A>0$ then $f$ is strictly convex. By the first-order characterization of strict convexity,
$$f(y)> f(x) + \nabla f(x)^T(y-x) \quad \forall x, y \in \operatorname{dom}(f),\ x \neq y. \tag{1}$$
If $x=-A^{-1}b$, then $\nabla f(x)=0$, so substituting $x$ into the right side of (1) I derive:
$f(y)> f(x) = c - b^{T}A^{-1}b$ for all $y \neq x$. But I fail to see how this shows that $x$ is a strict global minimum.
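For what it's worth, a numerical experiment with a positive definite $A$ of my own choosing does suggest that $x=-A^{-1}b$ attains the value $c-b^{T}A^{-1}b$ and beats other points:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)   # positive definite by construction
b = rng.standard_normal(n)
c = -1.3

def f(x):
    return x @ A @ x + 2 * b @ x + c

x_star = -np.linalg.solve(A, b)          # x* = -A^{-1} b
f_min = c - b @ np.linalg.solve(A, b)    # claimed minimum value c - b^T A^{-1} b

assert np.isclose(f(x_star), f_min)
# x* strictly beats random competitors
for _ in range(100):
    y = x_star + rng.standard_normal(n)
    assert f(y) > f_min
```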
Any help or opinion on how to prove i), ii), iii) is welcome.
Any symmetric real matrix $H$ is congruent to infinitely many diagonal matrices: there are invertible $P$ with $P^T H P = D$ diagonal. There are simple algorithms for producing such a $P$; see linear algebra references that teach the reverse Hermite method for symmetric matrices.
In what follows, note $PQ=QP = I.$ If the original $H$ is positive semidefinite, the diagonal elements $d_{ii}$ of $D$ all satisfy $d_{ii} \geq 0.$ If $H$ is positive definite, we have the stronger $d_{ii} > 0.$ This is Sylvester's Law of Inertia.

In either case, there is an additional nonsingular diagonal matrix $E$ such that $D_{1} = E^T P^T H P E$ is diagonal, with the first $r$ diagonal elements equal to $1$ and the remainder $0.$ Here the rank $r$ has $r < n$ if $H$ is semidefinite but not definite, while $r=n$ and $D_1=I$ when $H$ is positive definite. Taking $U = PE,$ we have an invertible real $U$ with $U^T H U = D_1.$

The point is that we can take $V = U^{-1}$ and get $H = V^T D_1 V.$ When $D_1$ is the identity, we just have $H = V^T V;$ this sort of thing is sometimes called a Cholesky decomposition. When $D_1$ has one or more zero elements on the diagonal, we can replace them by 1's as long as we trim the matching rows of $V$ (the matching columns of $V^T$); call the trimmed matrix $V_2.$ In that case $H = V_2^T V_2.$
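For positive definite $H$, a quick way to produce such a $V$ in practice is NumPy's Cholesky routine: `np.linalg.cholesky(H)` returns a lower-triangular $L$ with $H = LL^T$, so $V = L^T$ gives $H = V^TV$. A small sketch with a made-up $H$:

```python
import numpy as np

# A small positive definite H (my own example; leading minors 4, 8, 12 > 0)
H = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

L = np.linalg.cholesky(H)   # lower triangular, H = L L^T
V = L.T                     # so H = V^T V, as in the text
assert np.allclose(V.T @ V, H)

# U = V^{-1} plays the role of U above: U^T H U = I (the D_1 with r = n)
U = np.linalg.inv(V)
assert np.allclose(U.T @ H @ U, np.eye(3))
```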
It seems your original expression had $2 b^T x,$ so we do not need the $1/2.$
You can use this decomposition to complete the square in your expression. Suppose we just say $$ A = V^T V, $$ and let $w$ satisfy $V^T w = b$ (when $A$ is positive definite, $w = (V^T)^{-1} b$). Then $$ \color{red}{ (x^T V^T + w^T ) ( Vx + w) + ( c - w \cdot w) \; \; = \; \; ( Vx + w)^T ( Vx + w) + (c - w \cdot w)}$$ is your expression: expanding gives $x^T A x + x^T V^T w + w^T V x + c = x^T A x + 2 b^T x + c.$ It is just the dot product of $( Vx + w)$ with itself added to $(c - w \cdot w)$ and is always at least $(c - w \cdot w).$
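A numerical sanity check of the completed square (my own toy data; I take $w$ with $V^T w = b$, i.e. $w = (V^T)^{-1}b$ in the positive definite case, so that the cross terms reproduce $2b^Tx$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
V = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible for this seed
A = V.T @ V                                       # positive definite
b = rng.standard_normal(n)
c = 2.5
w = np.linalg.solve(V.T, b)                       # V^T w = b

def f(x):
    return x @ A @ x + 2 * b @ x + c

def completed_square(x):
    r = V @ x + w
    return r @ r + (c - w @ w)   # (Vx + w).(Vx + w) + (c - w.w)

for _ in range(20):
    x = rng.standard_normal(n)
    assert np.isclose(f(x), completed_square(x))

# f is always at least c - w.w, attained exactly when V x + w = 0
x_star = -np.linalg.solve(V, w)
assert np.isclose(f(x_star), c - w @ w)
```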
Note that $b^T x = x^T b = b \cdot x$ is just a number, the ordinary dot product. One thing to remember is that a 1 by 1 matrix is just a single number, and is its own transpose.
When $A$ is invertible, there does exist an $x_0$ with $A x_0 + b = 0.$ When $A$ is singular, there might be such an $x_0$ and there might not.
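A tiny illustration of both possibilities with a singular $A$ (a toy example of mine), using a least-squares solve to test whether $-b$ lies in the column space of $A$:

```python
import numpy as np

# Singular A: rank 1 in R^2
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

def solvable(A, b):
    """Check whether A x0 = -b has a solution, i.e. -b is in the column space of A."""
    x0, res, rank, sv = np.linalg.lstsq(A, -b, rcond=None)
    return np.allclose(A @ x0, -b)

assert solvable(A, np.array([1.0, 0.0]))       # b in range(A): an x0 exists
assert not solvable(A, np.array([0.0, 1.0]))   # b not in range(A): no x0
```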
When there is an $x_0$ with $A x_0 + b =0,$ take $x = x_0 + t v,$ where $t$ is a scalar variable and $v$ is any vector. When there is no such $x_0,$ call the current vector $x_1$ and switch to $x=x_1 + t v.$ If needed, one may take ordinary first and second derivatives with respect to the single variable $t.$
$$ P^T H P = D $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ - 2 & 1 & 0 & 0 \\ 1 & - 2 & 1 & 0 \\ 2 & - 3 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \\ 4 & 5 & 6 & 7 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & - 2 & 1 & 2 \\ 0 & 1 & - 2 & - 3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & - 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 \\ 4 & 3 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & - 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \\ 4 & 5 & 6 & 7 \\ \end{array} \right) $$
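The worked example above is easy to confirm numerically; here is a sketch with the matrix names as in the display:

```python
import numpy as np

H = np.array([[1, 2, 3, 4],
              [2, 3, 4, 5],
              [3, 4, 5, 6],
              [4, 5, 6, 7]], dtype=float)

P = np.array([[1, -2,  1,  2],
              [0,  1, -2, -3],
              [0,  0,  1,  0],
              [0,  0,  0,  1]], dtype=float)

D = np.diag([1.0, -1.0, 0.0, 0.0])

# The congruence P^T H P = D from the first display
assert np.allclose(P.T @ H @ P, D)

# Q = P^{-1}, and Q^T D Q recovers H, as in the second display
Q = np.linalg.inv(P)
assert np.allclose(Q.T @ D @ Q, H)
```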