I have an optimization problem which I'm not sure how to approach. I need to show that a particular value $x = (x_1, x_2)$ (where $x_1, x_2$ are specific numbers; I'm omitting them from the example to get a better sense of how to approach the problem) is the minimizer of the following:
$\min_{x}\ \frac{1}{2}x^TAx + y^Tx + c$
Where $A$ is a $3\times 3$ symmetric matrix, $y$ is a $3\times 1$ vector, and $c$ is a constant, say $2$. Note that $A$, $y$, and $c$ are all given numerical values, not variables.
My line of thinking is to prove that the first-order derivative of the function above is $0$ at that point and that the second-order derivative (the Hessian) is positive definite, but I'm not sure how to calculate these given concrete numbers. Is this the correct line of reasoning? If so, how do I go about calculating them?
Here's a complete version of the answer I sketched in the comments. You are given a function \begin{equation} f(x) = \frac{1}{2}x^TAx + y^Tx + c, \end{equation} and an unconstrained minimization problem: \begin{equation} \min_x f(x). \end{equation}
Stationary Points
We know that stationary points of a function are the points that satisfy $\nabla f(x) = 0$. Let the Hessian be denoted as $(H_f)_{ij} = \partial_{x_i}\partial_{x_j}f$. If $H_f(p)$ is positive definite at a stationary point $p$, then $p$ is a minimum; if it is negative definite, $p$ is a maximum; if it is indefinite, $p$ is a saddle point. (Semi-definiteness alone is not conclusive for general functions, but for a quadratic function like ours a positive semi-definite Hessian still guarantees a global minimum, since the function is then convex.)
Let us now compute the derivative of $f$ with respect to the vector $x$; we can denote this either as $\nabla f$ or $\frac{df}{dx}$: \begin{equation} \nabla f(x) = \frac{1}{2}(A+A^T)x + y. \end{equation}
Setting this to zero results in the linear system: \begin{equation} \frac{1}{2}(A+A^T)x = -y. \end{equation} Now if you are given a specific point $p$, then to verify whether that point is a stationary point, it suffices that you plug it instead of $x$ and check whether it fulfills the equation: \begin{equation} \frac{1}{2}(A+A^T)p = -y. \end{equation} If it does then it is a stationary point. If it doesn't, then it isn't a stationary point.
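In practice, with concrete numbers, you can carry out this check numerically. Here is a minimal sketch using NumPy; the matrix $A$ and vector $y$ below are made-up values purely for illustration, so substitute your own. It solves the linear system for the stationary point and then verifies that the gradient vanishes there:

```python
import numpy as np

# Hypothetical data (illustrative values only; substitute the given A, y, c).
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
y = np.array([1.0, -2.0, 0.5])
c = 2.0

# Solve (1/2)(A + A^T) x = -y for the stationary point p.
H = 0.5 * (A + A.T)
p = np.linalg.solve(H, -y)

# Verify: the gradient (1/2)(A + A^T)p + y should be numerically zero.
grad = H @ p + y
print(np.allclose(grad, 0))  # True
```

If instead you are handed a candidate point $p$, skip the solve and just evaluate `H @ p + y` to see whether it is (numerically) zero.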
Minima, Maxima, and Saddle Points
Now we also need to check the type of that stationary point (minimum, maximum, or saddle). For this purpose we compute the Hessian, by taking a derivative with respect to $x$ again: \begin{equation} \frac{d^2f}{dx^2}(x) = \frac{d}{dx}\left(\frac{df}{dx}\right)(x) = \frac{1}{2}\frac{d}{dx}\left((A+A^T)x + y\right) = \frac{1}{2}(A+A^T). \end{equation}
It is clear that $A+A^T$ is symmetric, so you now have to determine whether it is positive definite (then you have a minimum), negative definite (then you have a maximum), or indefinite (then you have a saddle). Since $A$ is $3\times 3$, $(A+A^T)$ is also $3 \times 3$, and it should be easy for you to find its eigenvalues. If all eigenvalues are strictly greater than zero, the matrix is positive definite and the stationary point is a minimum; if they are all $\geq 0$ with some equal to zero, it is positive semi-definite, which for this quadratic function still gives a (non-strict, global) minimum.
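Numerically, the eigenvalue test is one line with NumPy's routine for symmetric matrices. Again, the $A$ below is a made-up example, not your actual data:

```python
import numpy as np

# Hypothetical A (illustrative values only).
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])

# The Hessian is (1/2)(A + A^T), which is symmetric.
H = 0.5 * (A + A.T)

# eigvalsh is the right routine for a symmetric matrix:
# it returns real eigenvalues in ascending order.
eigs = np.linalg.eigvalsh(H)

if np.all(eigs > 0):
    print("positive definite: the stationary point is a minimum")
elif np.all(eigs < 0):
    print("negative definite: the stationary point is a maximum")
else:
    print("indefinite or semidefinite: inspect the zero/mixed-sign eigenvalues")
```

Using `eigvalsh` rather than the general `eig` guarantees real eigenvalues and avoids spurious tiny imaginary parts from floating-point error.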
How to Compute Matrix Derivatives Using Summation
We can compute derivatives in the usual way by expanding matrix expressions. For example: \begin{align} \frac{\partial}{\partial x_i}(x^TAx) &= \frac{\partial}{\partial x_i}\sum_{jk}x_jA_{jk}x_k \\ &= \sum_{jk}\delta_{ij}A_{jk}x_k + \sum_{jk}x_jA_{jk}\delta_{ik} \\ &= \sum_{k}A_{ik}x_k + \sum_{j} x_j A_{ji} \\ &= \sum_{k}A_{ik}x_k + \sum_{k} (A^T)_{ik} x_k \\ &= \sum_{k}(A+A^T)_{ik} x_k = (A+A^T)_i x \\ &\implies \\ \frac{d}{dx}(x^TAx) &= \begin{bmatrix} \frac{\partial}{\partial x_1}(x^TAx) \\ \ldots \\ \frac{\partial}{\partial x_n}(x^TAx)\end{bmatrix} = \begin{bmatrix} (A+A^T)_1 x \\ \ldots \\ (A+A^T)_n x\end{bmatrix} = (A+A^T)x \end{align}
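You can sanity-check the identity $\frac{d}{dx}(x^TAx) = (A+A^T)x$ with a finite-difference approximation. The sketch below uses random data (arbitrary, non-symmetric $A$) purely to exercise the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))   # arbitrary, deliberately non-symmetric
x = rng.normal(size=3)

def g(z):
    """The quadratic form z^T A z."""
    return z @ A @ z

# Central finite differences approximate each partial derivative of g.
eps = 1e-6
fd = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])

# Compare against the closed-form gradient (A + A^T) x.
print(np.allclose(fd, (A + A.T) @ x, atol=1e-6))  # True
```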
The derivative for $y^Tx$ is trivial: $\frac{\partial}{\partial x_i}\sum_j y_j x_j = y_i \implies \frac{d}{dx}y^Tx = y$.
How to Compute Derivatives Using The Definition of Directional Derivative
The following approach is similar to the one used to take functional derivatives (in fact we have a finite-dimensional function here, so this example is just a special case). We can write the directional derivative $\partial_v f$ as $v^T \nabla f$. Let us apply the definition of the directional derivative to the quadratic term $g(x) = x^TAx$ (the linear and constant terms are handled the same way): \begin{align} \partial_v g(x) &= \lim_{\epsilon\to 0} \frac{g(x+\epsilon v) - g(x)}{\epsilon} \\ &= \lim_{\epsilon\to 0} \frac{(x+\epsilon v)^TA(x+\epsilon v) - x^TAx}{\epsilon} \\ &= \lim_{\epsilon\to 0} \frac{\epsilon x^TAv + \epsilon v^TAx + \epsilon^2 v^TAv}{\epsilon} \\ &= \lim_{\epsilon\to 0} \left( v^TA^Tx + v^TAx + \epsilon v^TAv \right) \\ &= v^T(A^T+A)x. \end{align} In the above I have used the fact that a scalar equals its own transpose: $x^TAv = (x^TAv)^T = v^TA^Tx$. So now we have an expression for the directional derivative. Since we also know that $\partial_v g = v^T\frac{dg}{dx}$, equating the two, \begin{equation} v^T \frac{dg}{dx} = \partial_v g = v^T(A^T+A)x \quad \text{for all } v, \end{equation} we find that $\frac{dg}{dx} = (A^T+A)x$, in agreement with the summation computation above.
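The directional-derivative identity can also be checked numerically by comparing the difference quotient against $v^T(A^T+A)x$. The data below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))   # arbitrary matrix
x = rng.normal(size=3)
v = rng.normal(size=3)        # an arbitrary direction

def g(z):
    """The quadratic form z^T A z."""
    return z @ A @ z

# Forward difference quotient approximating the directional derivative.
eps = 1e-7
numeric = (g(x + eps * v) - g(x)) / eps

# The closed-form expression derived above.
exact = v @ (A + A.T) @ x

print(abs(numeric - exact) < 1e-5)  # True
```

The forward-difference error is of order $\epsilon\, v^TAv$, so with $\epsilon = 10^{-7}$ and entries of order one the agreement is well within the tolerance used here.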