Definition of independent components of a tensor


How can the number of independent components of a tensor $T_{i_1 i_2 \dots i_p}$ be defined?

Example: Let us consider a symmetric matrix $A$ such that $A_{ij} = A_{ji}$; in $n$ dimensions it has $\#(A) := \frac{1}{2} n(n+1)$ independent components.
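As a sanity check on counts of this kind: the symmetric case generalizes to fully symmetric rank-$p$ tensors, which have $\binom{n+p-1}{p}$ independent components (a standard stars-and-bars count of multisets of indices). A minimal Python sketch, with an illustrative function name:

```python
from math import comb

def symmetric_components(n: int, p: int) -> int:
    """Independent components of a fully symmetric rank-p tensor
    in n dimensions: the number of multisets of p indices from
    {1, ..., n}, i.e. C(n + p - 1, p)."""
    return comb(n + p - 1, p)

# For p = 2 this reproduces the familiar n(n+1)/2 for symmetric matrices.
for n in range(1, 8):
    assert symmetric_components(n, 2) == n * (n + 1) // 2
```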

Question: How can the definition be generalized to accommodate an arbitrary rank-$p$ tensor $T_{i_1 i_2 \dots i_p}$?

My idea: Let $T$ be a rank-$p$ tensor, regarded as a multilinear map $V_p := V \times V \times \dots \times V \rightarrow \mathbb{R}$ ($p$ factors). Then let $\Phi_i$, $i = 1, \dots, N$, denote a maximal set of independent projection operators onto subspaces $V_i \subset V_p$ such that

  1. $\Phi_i(T) = 0$
  2. $\Phi_i \circ \Phi_j = \delta_{ij} \Phi_j$

The number of independent components can then be defined as $\#(T) := \dim\left( V_p \right) - \sum\limits_{i=1}^N \operatorname{rank}\left( \Phi_i \right)$. (Each constraint $\Phi_i(T) = 0$ places $T$ in $\ker \Phi_i$, so it removes $\operatorname{rank} \Phi_i = \dim \operatorname{im} \Phi_i$ components; for a symmetric matrix, annihilated by the antisymmetrizer, this gives $n^2 - \frac{1}{2}n(n-1) = \frac{1}{2}n(n+1)$, as above.)
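For the symmetric-matrix example, this counting can be checked numerically: the antisymmetrizer $\Phi(A) = \tfrac12(A - A^{\mathsf T})$ annihilates symmetric matrices, and subtracting its rank (the dimension of its image, not its kernel) from $n^2$ recovers $\tfrac12 n(n+1)$. A sketch assuming NumPy is available; the function name is illustrative:

```python
import numpy as np

def independent_components_symmetric(n: int) -> int:
    """Count independent components of a symmetric n x n matrix via
    the projector formula: dim(space) - rank(antisymmetrizer).
    The antisymmetrizer Phi(A) = (A - A^T)/2 satisfies Phi(A) = 0
    exactly when A is symmetric, and Phi o Phi = Phi."""
    dim = n * n
    # Represent Phi as a dim x dim matrix acting on vectorized A,
    # with A_ij stored at position i*n + j.
    Phi = np.zeros((dim, dim))
    for i in range(n):
        for j in range(n):
            row = i * n + j
            Phi[row, i * n + j] += 0.5  # + A_ij / 2
            Phi[row, j * n + i] -= 0.5  # - A_ji / 2
    return dim - np.linalg.matrix_rank(Phi)

# Agrees with n(n+1)/2 for small n.
for n in range(1, 6):
    assert independent_components_symmetric(n) == n * (n + 1) // 2
```

The same construction extends to higher rank by building one projector per symmetry constraint on the vectorized tensor and summing their ranks, which is exactly where the question of what "independent" projectors means becomes essential.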

The weak point in this definition is obviously the vague "maximal set of independent projection operators." How can I make this more precise?