Let $(x_1, x_2, \dotsc, x_n)$ be a sequence of vectors in $\mathbb{R}^m$, i.e. for each $i = 1, \dots, n$, $$x_i = \begin{pmatrix} x_i^1 \\ x_i^2 \\ \vdots \\ x_i^m \end{pmatrix}.$$
In statistics, one often has to compute the sample mean vector: $$X = \frac{1}{n}\sum_{i=1}^n x_i$$
and the sample covariance (or variance-covariance) matrix: $$\mathbb{V} = \frac{1}{n-1}\sum_{i=1}^n (x_i - X)(x_i - X)^T$$ where $x^T$ is the transpose of $x$.
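For concreteness, here is a minimal NumPy sketch of the two definitions above (the data values are made up for illustration; rows are observations, columns are coordinates):

```python
import numpy as np

# n = 4 observations of an m = 3 dimensional vector (one observation per row)
x = np.array([[1.0, 2.0, 0.5],
              [2.0, 1.0, 1.5],
              [0.0, 3.0, 2.5],
              [1.0, 2.0, 3.5]])
n, m = x.shape

# Sample mean vector: X = (1/n) * sum_i x_i
X = x.sum(axis=0) / n

# Sample covariance matrix: V = 1/(n-1) * sum_i (x_i - X)(x_i - X)^T
centered = x - X
V = centered.T @ centered / (n - 1)

# Cross-check against NumPy's built-ins (np.cov uses the same 1/(n-1)
# normalization by default; rowvar=False says observations are in rows)
assert np.allclose(X, x.mean(axis=0))
assert np.allclose(V, np.cov(x, rowvar=False))
```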
My questions are:
What canonical form of information would you suggest for representing the sequence $(x_1, x_2, \dotsc, x_n)$ so that the sample mean vector and the sample covariance matrix can be computed from it?
How can I verify that all the desirable properties of canonical information are satisfied: Existence and Uniqueness, Completeness, Elementary, Empty, Combination, Update, and Compactness and Efficiency?
What is the minimum number of observations $n$ for which $X$ and $\mathbb{V}$ are defined?
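To make the question concrete, one natural candidate I have been considering is the triple of sufficient statistics $(n, \sum_i x_i, \sum_i x_i x_i^T)$. The sketch below (my own tentative implementation, not a standard library API) shows how the Empty, Update, and Combination properties would look for this representation, using the identity $\sum_i (x_i - X)(x_i - X)^T = \sum_i x_i x_i^T - nXX^T$:

```python
import numpy as np

def empty(m):
    """Empty: canonical form of the empty sequence of m-dimensional vectors."""
    return (0, np.zeros(m), np.zeros((m, m)))

def update(state, x):
    """Update: fold one new observation x into the canonical form."""
    n, s, S = state
    return (n + 1, s + x, S + np.outer(x, x))

def combine(a, b):
    """Combination: merge the canonical forms of two disjoint sequences."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def mean_cov(state):
    """Recover X (needs n >= 1) and V (needs n >= 2) from the canonical form."""
    n, s, S = state
    X = s / n
    V = (S - n * np.outer(X, X)) / (n - 1)
    return X, V
```

A quick check that combining the statistics of two halves of a dataset gives the same answer as a single pass confirms the Combination property; note, though, that this raw-moment form can lose precision through cancellation when the variance is small relative to the mean, so I am unsure whether it satisfies the Efficiency property in a numerical sense.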
Thanks in advance.