I have this statement:
A game consists of rolling a die with two equiprobable sides, numbered $1$ and $2$. If you get $1$ you win $200\$$, and if you get $2$ you lose $100\$$. After $100$ rolls, what are the expected winnings?
My attempt was:
I understand that expected value is a theoretical concept that gives us an average value of the experiment, something like the ordinary average, but weighted by probabilities.
For this exercise I should calculate the expected value, but my doubts are:
i) Is my theoretical understanding correct?
And more importantly, to solve this exercise:
ii) Does the expected value apply independently to each repetition of the experiment? That is, when I calculate the expected value, should I multiply the single-game expected value by the number of repetitions? For example, if I play this game $1$ time I expect to win $50\$$, but if I play it $100$ times, should I expect the same amount as if I had played once, i.e. $50\$$, or that amount multiplied by the number of plays, in this case $50\$ \cdot 100 = 5000\$$? Thanks in advance.
Hint: For each roll, the expected outcome is $$(\textrm{net result for }1) \cdot P(1) + (\textrm{net result for }2) \cdot P(2)$$
You should be able to compute this explicitly.
Since the results for each roll don’t depend on previous results, the expected outcome for $n$ rolls is $n$ times the expected outcome for a single roll.
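If it helps to convince yourself, here is a small Monte Carlo sketch (not part of the exercise, just an empirical check): it simulates the game many times and compares the average winnings for $1$ roll against the average for $100$ rolls, which should come out roughly $100$ times larger.

```python
import random

def average_winnings(n_rolls, n_trials=100_000):
    """Estimate the expected total winnings of an n_rolls-roll game
    by averaging over n_trials simulated games.

    Each roll pays +200 or -100 with probability 1/2 each,
    matching the game in the question."""
    total = 0
    for _ in range(n_trials):
        for _ in range(n_rolls):
            total += 200 if random.random() < 0.5 else -100
    return total / n_trials

one = average_winnings(1)
hundred = average_winnings(100)
print(one)      # should be close to the single-roll expectation
print(hundred)  # should be close to 100 times the single-roll expectation
```

The second estimate will hover near $100$ times the first, illustrating that expectations add over independent (indeed, over any) repetitions.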