I have the following matrices and vectors:
$ C \in \mathbb{R}^{2 \times 2} $
$ \vec{d} \in \mathbb{R}^2 $
$ \vec{x} \in \mathbb{R}^2 $
And I have the following function:
$$ f_1(\vec{x}) = \vec{x}^T C \vec{x} - \vec{x}^T \vec{x} + \vec{d}^T \vec{x}$$
I need to develop a function that checks whether $f_1$ has a minimum when we provide some $\vec{d}$ and $C$.
I know that the first derivative w.r.t. $\vec{x}$ of this function is:
$$ \frac{\partial f_1}{\partial \vec{x}} = \vec{x}^T(C + C^T) - 2 \vec{x}^T + \vec{d}^T $$
(which reduces to $2\vec{x}^T C - 2\vec{x}^T + \vec{d}^T$ when $C$ is symmetric).
And that the second derivative w.r.t. $\vec{x}$ of this function is:
$$ \frac{\partial^2 f_1}{\partial \vec{x}^2} = C + C^T - 2I $$
However, at this point I am completely lost as to how to determine whether the stationary point is a minimum or not.
Usually I would set up a Hessian matrix, except here I only have one variable ($\vec{x}$). I understand that the second partial derivative test is typically used to check whether a point is a minimum, but how can I apply it here when I have a vector ($\vec{x}$) instead of a scalar?
I am confused about how to calculate a determinant or check positive definiteness when the second derivative is a matrix instead of a scalar. Could someone please clarify this for me?
Expanding the definition of $f_1$ we have $$\begin{align} f_1(x_1,x_2)&=\vec{x}^T C \vec{x} - \vec{x}^T \vec{x} + \vec{d}^T \vec{x}\\ &=c_{1,1}x_1^2+(c_{1,2}+c_{2,1})x_1x_2+c_{2,2}x_2^2-x_1^2-x_2^2+d_1x_1+d_2x_2 \end{align}$$ where $\vec{x}=(x_1,x_2)$. Is it more familiar in that way?
It follows that $$\begin{align} \nabla f(x_1,x_2)&=(2(c_{1,1}-1)x_1+(c_{1,2}+c_{2,1})x_2+d_1,2(c_{2,2}-1)x_2+(c_{1,2}+c_{2,1})x_1+d_2)\\ &=(C+C^T-2I)\vec{x}+\vec{d} \end{align}$$ and $$H_f(x_1,x_2)=\left [ \begin{matrix} 2(c_{1,1}-1) & c_{1,2}+c_{2,1} \\ c_{1,2}+c_{2,1} & 2(c_{2,2}-1) \\ \end{matrix} \right ]=C+C^T-2I.$$ Can you take it from here?
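Since the Hessian $H = C + C^T - 2I$ is constant, the requested check can be sketched directly in NumPy. This is a minimal sketch, not part of the original question; the function name `has_minimum` and the tolerance values are my own choices. It tests whether $H$ is positive definite via its eigenvalues (unique global minimum), and in the positive-semidefinite case checks whether the stationarity equation $H\vec{x} = -\vec{d}$ is solvable:

```python
import numpy as np

def has_minimum(C, d, tol=1e-12):
    """Check whether f1(x) = x^T C x - x^T x + d^T x attains a minimum.

    The Hessian H = C + C^T - 2I is constant because f1 is quadratic:
    - H positive definite  -> unique global minimum.
    - H positive semidefinite -> minimum exists iff H x = -d is solvable.
    - H has a negative eigenvalue -> f1 is unbounded below.
    """
    C = np.asarray(C, dtype=float)
    d = np.asarray(d, dtype=float)
    H = C + C.T - 2.0 * np.eye(C.shape[0])
    eigvals = np.linalg.eigvalsh(H)  # H is symmetric, so eigvalsh applies
    if np.all(eigvals > tol):
        return True   # positive definite: unique global minimum
    if np.all(eigvals > -tol):
        # Positive semidefinite: check consistency of H x = -d
        x, *_ = np.linalg.lstsq(H, -d, rcond=None)
        return bool(np.allclose(H @ x, -d, atol=1e-8))
    return False      # negative eigenvalue: no minimum
```

For example, with $C = \operatorname{diag}(2, 3)$ the Hessian is $\operatorname{diag}(2, 4)$, which is positive definite, so a minimum exists; with $C = 0$ the Hessian is $-2I$ and $f_1$ is unbounded below.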