Suppose $F(A)$ is a quadratic function of a real symmetric matrix $A$. This means that there are numbers $f_{ijkl}$ so that $F(A) = \sum_{i,j,k,l} f_{ijkl}\,a_{ij}a_{kl}$.
Suppose that $F(A) = F(QAQ^t)$ for every orthogonal matrix $Q$. Show that there are numbers $c$ and $d$ so that $F(A) = c\operatorname{Tr}(A^2) + d(\operatorname{Tr}(A))^2$. Here, $\operatorname{Tr}(A)$ is the trace of $A$.
Edited work:
Since $A$ is symmetric, it is orthogonally diagonalizable: there is an orthogonal matrix $Q$ such that $QAQ^t = D$, where $D$ is a diagonal matrix with the eigenvalues of $A$ on the main diagonal.
Using the assumption that F is invariant under all orthogonal similarity transformations, including (of course) the transformations that diagonalize A, I have that:
$$\begin{aligned}
F(A) &= F(QAQ^t) = F(D) \\
&= \sum_{i,j,k,l} f_{ijkl}\, d_{ij} d_{kl} \\
&= \sum_{i,k} f_{iikk}\, d_{ii} d_{kk} \\
&= F(Q_1 D Q_1^t) = F(Q_2 D Q_2^t) = F(Q_3 D Q_3^t) = \cdots \\
&= \sum_{i,k} f_{iikk}\, d_{ii} d_{kk},
\end{aligned}$$
where each $Q_i$ is a permutation matrix (hence orthogonal), so that $Q_iDQ_i^t$ permutes the eigenvalues on the diagonal, resulting in a diagonal matrix that still has the eigenvalues of $A$ on its main diagonal.
Now, the problem reduces to proving the equation for diagonal matrices.
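As a quick numerical sanity check of this reduction (a sketch assuming NumPy; the specific $F$ used is the target form $c\operatorname{Tr}(A^2)+d(\operatorname{Tr}(A))^2$, whose invariance under orthogonal conjugation is the easy direction, since traces are similarity-invariant):

```python
import numpy as np

rng = np.random.default_rng(0)

# A concrete invariant quadratic: F(A) = c*Tr(A^2) + d*(Tr A)^2.
# (This is the target form; its orthogonal invariance is the easy direction.)
c, d = 2.0, -3.0
def F(A):
    return c * np.trace(A @ A) + d * np.trace(A) ** 2

n = 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                        # random real symmetric matrix

# Orthogonal diagonalization A = Q D Q^t (equivalently D = Q^t A Q).
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

# A random orthogonal matrix from a QR decomposition.
P, _ = np.linalg.qr(rng.standard_normal((n, n)))

assert np.isclose(F(A), F(D))            # F(A) = F(D): reduce to diagonal matrices
assert np.isclose(F(A), F(P @ A @ P.T))  # invariance under any orthogonal conjugation
```

Both assertions hold for any symmetric $A$ and any orthogonal conjugation, which is exactly the reduction used above.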
As whacka stated in his answer below, restricting to these permuted diagonal matrices, $F$ defines a quadratic form in the variables $\lambda_i$, for $1\le i \le n$.
So, we have
$$ F(\lambda_1, \ldots, \lambda_n) = \sum_{i,k} f_{iikk}\lambda_i \lambda_k $$
$$ = \sum_{i} \alpha_i \lambda_i^2 + \sum_{i\ne k} \beta_{ik}\lambda_i \lambda_k $$
$$ = \sum_{i} \alpha_i \lambda_i^2 + \sum_i \sum_{k\ne i} \beta_{ik}\lambda_i \lambda_k $$
$$ \stackrel{(??)}{=} \sum_{i} \alpha_i \lambda_i^2 + \beta_{ik} \left(\sum_i \lambda_i\right)\left(\sum_k \lambda_k\right) $$
$$ = \alpha_i \operatorname{Tr}(A^2) + \beta_{ik}(\operatorname{Tr}(A))^2 $$
$$ = F(A) $$
And, since $F$ is invariant under any permutation of the eigenvalues on the diagonal, this quadratic polynomial is also permutation-invariant; hence the coefficients $\alpha_i$ and $\beta_{ik}$ exist, are well defined, and are unique.
How is my proof? I don't feel confident about the equality labeled $(??)$. Somehow I need to show that the number $\beta_{ik}$ does not depend on the indices, so that I can pull it outside of the summation and obtain my $(\operatorname{Tr}(A))^2$.
Any hints or suggestions are welcome and greatly appreciated.
Thanks,
You've shown it suffices to prove $\exists c,d\in\Bbb R$ s.t. $F(A)$ is of the form $c\,{\rm tr}(A^2)+d\,{\rm tr}(A)^2$ for all diagonal matrices $A$, as $F$ and ${\rm tr}$ are both ${\rm O}_n$-invariant (the action being conjugation).
One may identify diagonal $n\times n$ real matrices with $\Bbb R^n$. A quadratic function $F:\Bbb R^n\to\Bbb R$ must be of the form $F(\lambda_1,\cdots,\lambda_n)=\sum_ia_i\lambda_i^2+\sum_{j<k} b_{jk}\lambda_j\lambda_k$. Two polynomials define the same function on $\Bbb R^n$ iff they are the same polynomial, two polynomials are identical iff their monomials have the same coefficients, and $F$ defines the same function for any permutation of its variables.
The key here is that $S_n$ acts transitively and $2$-transitively (in the language of group actions), so that we may permute all of the $a$ coefficients in front of the $\lambda_i^2$ terms, and permute any two $b$ coefficients in front of the $\lambda_i\lambda_j$ terms. If you are unfamiliar with the concept of a group action, you should still be able to take all this in stride I think.
Keeping in line with the language of group actions, the contragredient action of $S_n$ on functions is actually given by $(\sigma\cdot F)(\lambda_1,\cdots,\lambda_n)=F(\lambda_{\sigma^{-1}(1)},\cdots,\lambda_{\sigma^{-1}(n)})$ (the inverses are necessary for this to define a left action instead of a right action, and it has a sensible explanation in terms of graphs of functions). Explicitly writing out the polynomials, we have
$$\begin{array}{ll} \displaystyle \sum_i a_i\lambda_i^2+\sum_{j<k}b_{jk}\lambda_j\lambda_k & \displaystyle = \sum_i a_i\lambda_{\sigma^{-1}(i)}^2+\sum_{j<k}b_{jk}\lambda_{\sigma^{-1}(j)}\lambda_{\sigma^{-1}(k)} \\ & \displaystyle = \sum_i a_{\sigma(i)}\lambda_i^2+\sum_{j<k}b_{\sigma(j)\sigma(k)}\lambda_j\lambda_k \end{array} $$
We must interpret $b_{\bullet\bullet}$ as symmetric to make sense of the above when $\sigma(j)>\sigma(k)$. Note the substitution of index that implicitly occurred. If we identify the coefficients of $\lambda_i^2$ above we get $a_i=a_{\sigma(i)}$ regardless of which $\sigma$ we picked, hence all $a$s are equal, and similarly equating coefficients of $\lambda_i\lambda_j$ yields $b_{jk}=b_{\sigma(j)\sigma(k)}$, hence all $b$s are equal: given any two pairs $j<k$ and $l<m$ there is a $\sigma\in S_n$ s.t. $\sigma(j)=l,\sigma(k)=m$ (this is what $2$-transitivity speaks to).
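The $2$-transitivity claim can also be brute-force checked for small $n$ with the standard library (a sketch; the choice $n=5$ is just for illustration):

```python
from itertools import permutations

n = 5
pairs = [(j, k) for j in range(n) for k in range(j + 1, n)]

# 2-transitivity of S_n on pairs: for any (j, k) and (l, m) with j < k and
# l < m, some permutation sigma satisfies sigma(j) = l and sigma(k) = m.
for (j, k) in pairs:
    for (l, m) in pairs:
        assert any(s[j] == l and s[k] == m for s in permutations(range(n)))
```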
Now, given $F(\lambda_1,\cdots,\lambda_n)=a\left(\sum_i\lambda_i^2\right)+b\left(\sum_{i<j}\lambda_i\lambda_j\right)$ for some $a,b\in\Bbb R$, can you conclude from this that it's of the form $F(\lambda_1,\cdots,\lambda_n)=c\left(\sum_i\lambda_i^2\right)+d\left(\sum_j\lambda_j\right)^2$ for some $c,d\in\Bbb R$?
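As a numerical hint for that last step (a sketch assuming NumPy): the cross terms $\sum_{i<j}\lambda_i\lambda_j$ and the square $\left(\sum_j\lambda_j\right)^2$ are tied together by the expansion of the square of a sum.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = rng.standard_normal(6)     # arbitrary eigenvalues lambda_1, ..., lambda_6

sum_sq = np.sum(lam ** 2)        # sum_i lambda_i^2   (= Tr(A^2) for diagonal A)
cross = sum(lam[i] * lam[j]      # sum_{i<j} lambda_i lambda_j
            for i in range(6) for j in range(i + 1, 6))
total_sq = np.sum(lam) ** 2      # (sum_i lambda_i)^2 (= Tr(A)^2)

# Square-of-sum expansion:
#   (sum lambda)^2 = sum lambda^2 + 2 * sum_{i<j} lambda_i lambda_j,
# so anything spanned by {sum_sq, cross} is spanned by {sum_sq, total_sq}.
assert np.isclose(total_sq, sum_sq + 2 * cross)
```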