Given a multinomial distribution with $k$ mutually exclusive events with probabilities $p_1, \dots, p_k$, we draw a sample of size $n$ and observe counts $s_1, \dots, s_k$. The expected count for event $i$ is $np_i$. If I want a single number measuring how far my sample is from the expected counts, I need to combine the deviations $s_i - np_i$ with some norm. I want to use the $L^1$-norm (the $L^2$-norm or some others could also be suitable), so I define the distance of my sample from the expected value to be:
$$ \sum_{i=1}^k |s_i - np_i| $$
What is the expected value of this quantity?
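As a sanity check on any candidate formula, the expectation can be estimated by Monte Carlo. A minimal sketch, assuming a fair die ($k = 6$, $p_i = 1/6$) and $n = 600$ as example values:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([1/6] * 6)   # fair die, k = 6
n = 600                   # sample size
reps = 100_000            # Monte Carlo replicates

# Draw many multinomial samples and compute the L1 distance for each
samples = rng.multinomial(n, p, size=reps)    # shape (reps, k)
l1 = np.abs(samples - n * p).sum(axis=1)      # sum_i |s_i - n p_i| per replicate
print(l1.mean())
```

For these values the estimate comes out close to $\sum_i \sqrt{2np_i(1-p_i)/\pi} \approx 43.7$, which is what a normal approximation to each marginal would suggest.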
For a single event the standard deviation of $s_i$ is $\sqrt{np_i(1-p_i)}$, so a naive first guess might be the sum of these. Note, however, that the expected absolute deviation $\mathbb{E}|s_i - np_i|$ of a component is not the same as its standard deviation, so summing standard deviations doesn't give the right answer; on the other hand, by linearity of expectation the dependence between the events does not affect the expected value of the sum, only its variance.
The covariance matrix of the multinomial counts is easy to write down: $np_i(1-p_i)$ on the diagonal and $-np_ip_j$ off the diagonal. I feel there should be a simple formula for the expected $L^1$ distance in terms of this matrix, but I can't find it.
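The covariance structure is easy to verify numerically. A small sketch with arbitrary example values, using the fact that the covariance matrix of the counts is $n$ times the per-trial matrix $\operatorname{diag}(p) - pp^T$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.2, 0.3, 0.5])   # example probabilities
n = 100

# Theoretical covariance of the counts: n * (diag(p) - p p^T)
cov_theory = n * (np.diag(p) - np.outer(p, p))

# Empirical covariance from simulated multinomial samples
samples = rng.multinomial(n, p, size=200_000)
cov_emp = np.cov(samples, rowvar=False)
print(np.max(np.abs(cov_emp - cov_theory)))   # small discrepancy
```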
Note that in the simplest case $k=2$ this reduces to a binomial distribution: since $s_2 - np_2 = -(s_1 - np_1)$, the sum equals $2|s_1 - np_1|$, and by the normal approximation its expected value is $2\sqrt{2np(1-p)/\pi}$ with $p = p_1$.
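For the binomial case the mean absolute deviation $\mathbb{E}|X - np|$ can be computed exactly by summing over the pmf, which lets us compare it against the normal approximation $\sqrt{2np(1-p)/\pi}$. A sketch with example values $n = 600$, $p = 1/2$:

```python
import math

def binom_mad(n, p):
    # Exact E|X - np| for X ~ Binomial(n, p), by direct summation over the pmf
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * abs(k - n * p)
               for k in range(n + 1))

n, p = 600, 0.5
exact = 2 * binom_mad(n, p)                       # expected L1 distance for k = 2
approx = 2 * math.sqrt(2 * n * p * (1 - p) / math.pi)  # normal approximation
print(exact, approx)
```

For $n$ this large the two values agree to within a fraction of a percent.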
The application of this is testing with a sample whether the assumed distribution is indeed the correct one. So you assume you have a fair die, throw it a bunch of times, and then check whether the sample you got is consistent with a fair die, or whether the sample seems highly unlikely under a fair die, giving you a strong suspicion that the die is not fair.
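This testing idea can be sketched directly with the $L^1$ statistic: simulate its null distribution under the assumed (fair) die and compute a Monte Carlo p-value for the observed counts. The roll counts below are hypothetical example data, not from the source:

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.full(6, 1/6)    # hypothesized fair die
n = 600

# Hypothetical observed roll counts (sum to n)
observed = np.array([90, 110, 95, 105, 120, 80])

# Observed L1 distance from the expected counts
stat = np.abs(observed - n * p).sum()

# Null distribution of the L1 statistic under the fair-die hypothesis
null = np.abs(rng.multinomial(n, p, size=100_000) - n * p).sum(axis=1)
p_value = (null >= stat).mean()   # Monte Carlo p-value
print(stat, p_value)
```

A small p-value means a fair die would rarely produce a sample this far from its expectation, which is exactly the suspicion described above.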