Today I posted this question about the "Payoff of a dice game", and most of the answers explained it by relating the problem to the concept of the expected value of a probability distribution. So I caught up on that, but I find that I'm still confused.
With regard to the probability distribution of a random variable, I fully understand why the sum of the products of each value of the random variable and its probability gives the expected value for a given experiment. However, I wish to obtain a deeper and more intuitive understanding of this concept. I watched Sal Khan's videos, and he explained it using the following example:
"Imagine that you are playing basketball. The probability that you will make a basket is 40%. If you attempt $n$ shots, the expected number of baskets is $np$, where $p$ is the probability of success, defined here as the event of making a basket. Think of it this way: with every shot you take, the ball falls 40% of the way into the hoop."
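The $np$ formula can be checked empirically with a short simulation. This is a minimal sketch (the value $p = 0.4$ and shot count $n$ are taken from the example above; the variable names are my own):

```python
import random

random.seed(0)
p = 0.4    # probability of making any single basket
n = 1000   # number of shots attempted

# Simulate n independent shots; random.random() < p succeeds with probability p
made = sum(random.random() < p for _ in range(n))

# The simulated count should be close to the theoretical expected value n*p = 400
print(made, n * p)
```

Over many shots, the fraction made settles near $p$, which is exactly the "each ball falls 40% of the way in" intuition averaged out.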
Although this is a clever way to think about it, I was wondering if anybody else had further insights. How do you make sense of this concept in your head? Please share; it would help me tremendously.
Edit: I realize that I was not very clear when stating the question, and want to clarify that I understand the logic behind the other question that I posted. The reason I posted this question separately is that I wanted to know how other people make sense of averages and expected values in their heads. What is the most useful way to think about these concepts?
Thanks ever so much :) Regards.
Assume a random variable $X$ that denotes the number of heads when you toss two coins together. $X$ has three possible values: $0, 1, 2.$ The probability of getting a $0$ (that is, no heads) is the probability of getting tails on the first coin and tails on the second coin. This is $0.5 \times 0.5 = 0.25$.
The probability of getting $X$ as $1$ is the probability of getting heads on the first coin and tails on the second, OR vice versa (tails on the first coin, heads on the second). Each of these outcomes has probability $0.25$, so the total is $0.25+0.25=0.5$.
Similarly, the probability that $X$ takes the value $2$ (that is, two heads) is $0.25$.
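These three probabilities can be verified by enumerating the four equally likely outcomes of the two tosses. A small sketch (the enumeration approach is mine, not part of the original answer):

```python
from itertools import product
from fractions import Fraction

# All four equally likely outcomes of tossing two fair coins
outcomes = list(product(["H", "T"], repeat=2))

# Build the distribution of X = number of heads
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")
    pmf[x] = pmf.get(x, 0) + Fraction(1, len(outcomes))

# P(X=0) = 1/4, P(X=1) = 1/2, P(X=2) = 1/4
print(pmf)
```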
Now you ask yourself, "what result should I expect, on average, when I toss two coins? Two tails, two heads, or one head and one tail?"
To answer this, you multiply each value of $X$ by its corresponding probability. Why do we do this? Well, let's look at the definition of probability. Probability can be defined as the frequency of an event happening, divided by the sum of the frequencies of all events happening. That is $$\frac{\text{frequency of event } A}{\text{sum of frequencies of events } A, B, C, \ldots}$$
The sum of all frequencies happens to cover the entire sample space. Now let's look at the definition of the "mean". If we are given a frequency table with values $A$, $B$ and $C$ and their corresponding frequencies ($\text{freq}_A$, $\text{freq}_B$, $\text{freq}_C$), the mean is $$\frac{A \cdot \text{freq}_A + B \cdot \text{freq}_B + C \cdot \text{freq}_C}{\text{freq}_A + \text{freq}_B + \text{freq}_C}$$ or $$\frac{A \cdot \text{freq}_A}{\text{freq}_A + \text{freq}_B + \text{freq}_C} + \frac{B \cdot \text{freq}_B}{\text{freq}_A + \text{freq}_B + \text{freq}_C} + \frac{C \cdot \text{freq}_C}{\text{freq}_A + \text{freq}_B + \text{freq}_C}$$ This is nothing but $$A \cdot \text{Prob}_A + B \cdot \text{Prob}_B + C \cdot \text{Prob}_C$$ (where $\text{Prob}_A$ is the probability of $A$).
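The equivalence between the frequency-table mean and the probability-weighted sum can be shown numerically. This is an illustrative sketch; the frequency counts below are hypothetical numbers chosen to match the two-coin probabilities:

```python
# Hypothetical frequency table for X = number of heads in two tosses
values = [0, 1, 2]       # possible values of X
freqs = [25, 50, 25]     # illustrative observed frequencies

total = sum(freqs)

# Mean computed directly from the frequency table
mean_from_freqs = sum(v * f for v, f in zip(values, freqs)) / total

# Mean computed as the probability-weighted sum: each freq/total is a probability
mean_from_probs = sum(v * (f / total) for v, f in zip(values, freqs))

# Both formulas are algebraically identical, so the results agree
print(mean_from_freqs, mean_from_probs)
```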
So we see that the mean value (or expectation value) is defined as above.
In the case of my example, the expected value is $0 \times 0.25 + 1 \times 0.5 + 2 \times 0.25 = 1$. This means that if I toss two fair coins a very large number of times, the average number of heads per toss will approach $1$. (In this example the expected value also happens to be the most likely single outcome, one heads and one tails, but in general the expected value is a long-run average and need not be the most common result.) Calculating the mean or "expectation" value beforehand means that I can safely predict that, over a large number of two-coin tosses, I will get one heads and one tails on average.