As simple as this may sound, I just do not understand what this statement implies.
An $n \times n$ matrix $A$ is symmetric if and only if: $$\bar{x}.(A\bar{y}) = (A\bar{x}).\bar{y}$$
Why is this true, and what does it even signify?
I assume that you know what matrix multiplication means. The given matrix $A$ is assumed square, i.e., it has an equal number of rows and columns. That number is called $n$ here.
The symbols $\overline x$ and $\overline y$ stand for arbitrary column vectors, i.e., (nonsquare) matrices with $n$ rows (the same $n$ as above) and exactly one column.
There is something missing in your statement. It should really say '... if and only if for all column vectors $\overline x$ and $\overline y$...' and then the formula.
When an $n\times n$ matrix such as $A$ is multiplied on the right by a column vector, the result is another column vector. So the notation $A\overline y$ means 'the column vector that is the matrix product of $A$ and $\overline y$'.
The dot (.) in the formula means scalar product of vectors. The scalar product of two column vectors is obtained by multiplying the corresponding components of the two vectors, and then taking the sum of the $n$ results.
Example: let $n=3$ and consider the column vectors
$$\overline x=\left( \begin{matrix} 1\\-5\\0 \end{matrix} \right),\ \overline y=\left( \begin{matrix} 1\\7\\8 \end{matrix} \right)$$
Then the dot product is
$$\overline x.\overline y=1\cdot 1+(-5)\cdot 7+0\cdot 8=-34.$$
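As a quick sanity check, here is the same computation in Python with NumPy (purely illustrative):

```python
import numpy as np

# The two example column vectors from above.
x = np.array([1, -5, 0])
y = np.array([1, 7, 8])

# Dot product: multiply corresponding components, then sum.
print(np.dot(x, y))  # 1*1 + (-5)*7 + 0*8 = -34
```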
The statement at the start of your question says that a square matrix $A$ is symmetric precisely when the following holds: multiplying an arbitrary column vector $\overline y$ by $A$ and then taking the dot product with another arbitrary column vector $\overline x$ gives the same number as multiplying $\overline x$ by $A$ and then taking the dot product with $\overline y$.
In terms of matrix multiplication this is equivalent to the formula
$$\forall x_1,\ldots,x_n,y_1,\ldots,y_n:\sum_{i=1}^nx_i\left(\sum_{j=1}^na_{ij}y_j\right)=\sum_{i=1}^ny_i\left(\sum_{j=1}^na_{ij}x_j\right).$$
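You can watch this identity succeed and fail numerically. A small sketch (the particular matrices $S$ and $N$ and the random vectors are my own choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric 3x3 matrix: S equals its own transpose.
S = np.array([[2.0, 1.0, 4.0],
              [1.0, 3.0, 5.0],
              [4.0, 5.0, 6.0]])

# The same matrix with one off-diagonal pair broken, so N is NOT symmetric.
N = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [4.0, 5.0, 6.0]])

x = rng.standard_normal(3)
y = rng.standard_normal(3)

# For symmetric S the two scalar products agree ...
print(np.isclose(x @ (S @ y), (S @ x) @ y))  # True
# ... while for non-symmetric N they differ for generic x and y.
print(np.isclose(x @ (N @ y), (N @ x) @ y))
```

The mismatch for $N$ is exactly $\overline x.\bigl((N-N^T)\overline y\bigr)$, which is zero for all $\overline x,\overline y$ only when $N=N^T$.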
One can prove that the above statement is equivalent to the condition that the matrix elements $a_{ij}$ of $A$ do not change when $i$ and $j$ are interchanged.
If the $a_{ij}$ are symmetric in the sense that $a_{ij}=a_{ji}$ then the sums in the above formula are identical with the roles of the two indices interchanged.
On the other hand, if the formula holds for all $\overline x$ and $\overline y$, then it holds in particular for the column vector $\overline x$ that consists of all zeroes except for the number 1 in the $i$-th place, and for the column vector $\overline y$ that has zeroes everywhere except for a 1 in the $j$-th place. With these choices, the many zeroes make every term of the double sums vanish except one on each side, and the formula reduces to the equality $a_{ij}=a_{ji}.$
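The basis-vector argument can be traced in code: with $\overline x=e_i$ and $\overline y=e_j$, the left side picks out $a_{ij}$ and the right side picks out $a_{ji}$. A small sketch, using an example symmetric matrix of my own choosing:

```python
import numpy as np

# An example symmetric matrix (a_ij = a_ji).
A = np.array([[2.0, 1.0, 4.0],
              [1.0, 3.0, 5.0],
              [4.0, 5.0, 6.0]])

n = A.shape[0]
for i in range(n):
    for j in range(n):
        e_i = np.zeros(n); e_i[i] = 1.0  # x: all zeroes, 1 in the i-th place
        e_j = np.zeros(n); e_j[j] = 1.0  # y: all zeroes, 1 in the j-th place
        # e_i . (A e_j) picks out a_ij; (A e_i) . e_j picks out a_ji.
        assert e_i @ (A @ e_j) == A[i, j]
        assert (A @ e_i) @ e_j == A[j, i]
print("for this symmetric A, both sides agree for every pair (i, j)")
```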