If I roll a six-sided die twice, there is a $1$ in $6$ chance that the results will sum to $7$ (giving an average of $3.5$ per die). And if I only roll it once, it is not possible to get an average of $3.5$.
The more I roll it, the more likely it is that the average result will be close to $3.5$.
However, for $n$ rolls, what is the probability that the average result is exactly $3.5$, and how can I generalize this to an $m$-sided die?
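(For the $n = 2$ case, the $1$ in $6$ figure can be checked by brute force; this is just a sketch enumerating all $36$ ordered outcomes of two dice.)

```python
from itertools import product

# Enumerate all 36 ordered outcomes of two fair six-sided dice
outcomes = list(product(range(1, 7), repeat=2))

# Count the outcomes whose sum is 7 (average 3.5)
hits = sum(1 for a, b in outcomes if a + b == 7)

print(hits, len(outcomes))   # 6 36
print(hits / len(outcomes))  # 1/6
```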
This is more of a contribution than a full answer. Let $X_1, \dots, X_n$ be identically distributed, mutually independent random variables with range $\{1, \dots, m\}$, such that $Pr[X_i = k] = \frac{1}{m}$ for each $k$. These random variables model the individual rolls. Notice that
$$E[X_i] = \sum_{k=1}^mkPr[X_i = k] = \frac{1}{m}\sum_{k=1}^mk = \frac{1}{m} \frac{m(m+1)}{2} = \frac{m+1}{2}$$
Then, let $X = \frac{1}{n} \sum_{k=1}^n X_k$. Notice that $X$ is the average of the rolls. Now,
$$E[X] = \frac{1}{n}\sum_{k=1}^nE[X_k] = \frac{1}{n}\frac{m+1}{2}\sum_{k=1}^n1 = \frac{n}{n}\frac{m+1}{2} = \frac{m+1}{2}$$
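This computation can be sanity-checked exactly with rational arithmetic; the sketch below (the function name `expected_average` is my own) just replays the linearity-of-expectation step.

```python
from fractions import Fraction

def expected_average(n, m):
    """Exact expected value of the average of n fair m-sided dice."""
    # E[X_i] = (1/m) * sum(k for k in 1..m) = (m+1)/2
    e_single = Fraction(sum(range(1, m + 1)), m)
    # E[X] = (1/n) * sum of n copies of E[X_i], by linearity of expectation
    return Fraction(1, n) * n * e_single

print(expected_average(3, 6))  # 7/2
```

Note that the result is independent of $n$, as the derivation above shows.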
That is, the expected value of the average of $n$ rolls is $\frac{m+1}{2}$. This explains why your experiment would give you those numbers. Now, I'm not sure how you'd compute $Pr[X = E[X]]$ in closed form; note that it amounts to the probability that the sum $\sum_{k=1}^n X_k$ equals $\frac{n(m+1)}{2}$, which can be nonzero only when $n(m+1)$ is even. What I do know is that you could use Chebyshev's inequality to bound the probability that $X$ strays too far from its expected value.
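Even without a closed form, $Pr[X = E[X]]$ can be computed exactly by counting: the sketch below (the function name `prob_avg_equals_mean` is my own) builds the distribution of the sum of $n$ dice by repeated convolution and divides the count for the target sum by $m^n$.

```python
def prob_avg_equals_mean(n, m):
    """Probability that the average of n fair m-sided dice equals (m+1)/2,
    i.e. that the sum equals n*(m+1)/2 (possible only if n*(m+1) is even)."""
    if (n * (m + 1)) % 2 != 0:
        return 0.0
    target = n * (m + 1) // 2
    # counts[s] = number of ways the dice rolled so far sum to s
    counts = {0: 1}
    for _ in range(n):
        new = {}
        for s, c in counts.items():
            for face in range(1, m + 1):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return counts.get(target, 0) / m ** n

print(prob_avg_equals_mean(2, 6))  # 1/6, matching the question
print(prob_avg_equals_mean(1, 6))  # 0.0, an odd target sum of 3.5
```

The probability tends to $0$ as $n$ grows (even though the average concentrates near $\frac{m+1}{2}$), which is why Chebyshev-style bounds on $|X - E[X]|$ are the more informative quantity.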