Consider this:
Die A has the faces 1, 1, 2, 2, 2, 4. Letting $X$ denote its score, we find the expected value $E(X)=2$ and the variance $Var(X)=1$.
Die B has the faces 2, 2, 4, 4, 4, 8, i.e. every face of die A doubled. As one would expect, $E(2X)=4$, while $Var(2X)=4$. Slightly unintuitively, the variance increases by the square of the factor. My intuition for this comes from the way variance is ultimately defined and calculated: we take the squared distance from the mean, so the factor should also be squared when determining the new variance.
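To sanity-check these numbers (this snippet is my own addition, not part of the question), one can compute the exact mean and variance in Python, using `fractions` to avoid floating-point noise:

```python
from fractions import Fraction

def mean_var(values):
    """Exact mean and variance of a score drawn uniformly from `values`."""
    n = len(values)
    mean = Fraction(sum(values), n)
    var = sum((Fraction(v) - mean) ** 2 for v in values) / n
    return mean, var

die_a = [1, 1, 2, 2, 2, 4]
print(mean_var(die_a))                   # E(X) = 2, Var(X) = 1
print(mean_var([2 * v for v in die_a]))  # E(2X) = 4, Var(2X) = 4 = 2^2 * Var(X)
```

Doubling every face doubles the mean but quadruples the variance, matching the squared-distance intuition above.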
Now consider this:
Observations $X_1$ and $X_2$ are independent rolls of die A, and we combine the scores of the separate observations. We do this by drawing a table of all possibilities; after adding the values on the two independent dice, a new probability distribution is constructed with the values 2, 3, 4, 5, 6, 7, 8 and their respective probabilities.
As one would expect, $E(X_1+X_2)=4$, the same as $E(2X)$. However, $Var(X_1+X_2)=2$ and not $4$. Unfortunately, this is hard for me to understand. Would we not expect the same result as for $2X$? Can't we treat $X_1+X_2$ as $2X$? What is the difference between these scenarios?
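The table of possibilities described above can be enumerated directly (a sketch of my own, not from the question): all 36 ordered pairs of faces are equally likely, so exact enumeration gives the mean and variance of $X_1+X_2$.

```python
from fractions import Fraction
from itertools import product

faces = [1, 1, 2, 2, 2, 4]
# all 36 equally likely ordered outcomes of X1 + X2
sums = [a + b for a, b in product(faces, repeat=2)]

n = len(sums)
mean = Fraction(sum(sums), n)
var = sum((Fraction(s) - mean) ** 2 for s in sums) / n
print(mean, var)  # 4 2
```

The enumeration confirms $E(X_1+X_2)=4$ but $Var(X_1+X_2)=2$, not $4$.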
Edit: What, then, is the intuition for why the variance changes by a factor equal to the number of independent observations? Why this number and not something else? Can we use the same intuition for why the variance changes by the square of the factor in the other scenario? (A rigorous proof also works.)
$X_1+X_2$ is not like $2X$, because extreme values are less likely. For example: $$\Pr(2X=8)=\Pr(X=4)=\frac16,$$ but $$\Pr(X_1+X_2=8)=\Pr(X_1=4\text{ and }X_2=4)=\frac{1}{6}\times\frac{1}{6}=\frac{1}{36},$$ and similarly $$\Pr(2X=2)=\frac 13\text{ but }\Pr(X_1+X_2=2)=\frac19.$$ Since extreme values of $X_1+X_2$ are less likely, it makes sense that its variance is smaller.
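For the rigorous proof requested in the edit, here is the standard derivation of both scaling rules. For independent $X_1, X_2$ we have $E(X_1X_2)=E(X_1)E(X_2)$, so

$$\begin{aligned}
Var(X_1+X_2) &= E\big((X_1+X_2)^2\big)-\big(E(X_1)+E(X_2)\big)^2\\
&= E(X_1^2)+2E(X_1X_2)+E(X_2^2)-E(X_1)^2-2E(X_1)E(X_2)-E(X_2)^2\\
&= Var(X_1)+Var(X_2),
\end{aligned}$$

since the cross terms cancel by independence. Iterating this over $n$ independent copies gives $n\,Var(X)$, which is why the factor equals the number of observations. By contrast, $$Var(cX)=E(c^2X^2)-\big(cE(X)\big)^2=c^2\,Var(X),$$ so scaling a single roll picks up the factor squared: $Var(2X)=4\,Var(X)$ but $Var(X_1+X_2)=2\,Var(X)$.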