Does probability really matter when we are dealing with a single trial?


I understand the idea that if a random experiment is conducted N times and the probability of an event E is P(E), then the number of times E occurs is close to P(E)*N. But is the value of P(E) useful when N=1? For example, if the weather forecast says the chance of no rain today is 95%, there is still a 5% chance that it will rain. Rain may be unlikely, but it is possible. So does this probability help us in any way? Should we take an umbrella when we go out?



Probabilities are a quantitative means of reasoning and making decisions about uncertain events, i.e. for small N; for large N the law of large numbers restores near-certainty.

Assume that carrying your umbrella when it doesn't rain causes a discomfort that you rate as 1 on a 0 to 10 scale, while missing your umbrella when it rains is rated 9.

What decision will you make if the chance of rain today is 5%?

  • expected cost of taking the umbrella: 1 x 0.95 = 0.95 ,

  • expected cost of leaving the umbrella: 9 x 0.05 = 0.45 .

You conclude.
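The arithmetic above is easy to check in a few lines of Python (the cost ratings 1 and 9 and the 5% rain probability are the values assumed in this answer):

```python
# Expected-cost comparison for the umbrella decision (values from the answer above).
p_rain = 0.05          # probability of rain
cost_carry_dry = 1     # discomfort of carrying the umbrella on a dry day (0-10 scale)
cost_caught_out = 9    # discomfort of being caught in the rain without it

expected_cost_take = cost_carry_dry * (1 - p_rain)   # incurred only if it stays dry
expected_cost_leave = cost_caught_out * p_rain       # incurred only if it rains

print(f"take:  {expected_cost_take:.2f}")   # 0.95
print(f"leave: {expected_cost_leave:.2f}")  # 0.45
```

With different discomfort ratings the comparison could flip, which is exactly why the probability matters even for a single day.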


To expand on Yves Daoust's answer: the expected number of rainy days out of the next one day is 0.05. Taken literally this sounds nonsensical, since it will either rain or it will not; it seems like the answer should be either $1$ or $0$. But that is not what we mean by expected value. We mean that if we observed whether or not it rains an arbitrarily large number of times, the average number of rainy days per trial would approach 0.05. Even though tomorrow certainly will not rain exactly 0.05 times, we can still use this number to decide what to do. Which way would you rather be wrong? Would you rather lug an umbrella around all day and not need it (the likely outcome), or leave it at home on the small chance that you do need it?
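The long-run reading of that expected value can be illustrated with a quick simulation (a sketch, assuming independent days and a fixed 5% rain probability):

```python
import random

random.seed(0)  # reproducible sketch

P_RAIN = 0.05
N_DAYS = 100_000  # the "arbitrarily large number of times"

rainy_days = sum(random.random() < P_RAIN for _ in range(N_DAYS))
fraction_dry = 1 - rainy_days / N_DAYS

# Each single day is either rainy (1) or dry (0), but the long-run
# fraction of dry days settles near the expected value 0.95.
print(f"fraction of dry days: {fraction_dry:.3f}")
```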


Yes, probability does matter when dealing with a single trial. Probability is (or can be interpreted as) a "subjective" measure of confidence that some event will or will not happen.

It is this confidence $P$, to which you assign a numerical value, that then enters in an expected-value calculation as in Yves Daoust's answer. The question of what actions to choose given uncertain (probabilistic) information is the central subject of decision theory.


Leaving aside the fact that evolution somehow shaped our brains to make decisions using something we could relate to probability, I'll turn your question upside down.

You (and I, for that matter) feel relatively comfortable with the notion of probability as the limit of the relative frequency of a certain result in a random experiment that can be reproduced an infinite number of times under the same conditions (which is quite a lot to assume).

Now suppose we toss a fair coin a hundred times and count the number of heads, say $X$. Then we check whether $X\in [40,60]$ (in which case you win) or not (you lose).

We know (using the Central Limit Theorem; an exact binomial calculation gives a slightly larger value) that $P(40\le X\le 60)\approx 0.954$. From the frequentist point of view, yes: if we repeat the whole experiment many times, we 'expect' (whatever that means) to find that you win roughly 95% of the time.
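The quoted 0.954 is the plain normal approximation $2\Phi(2)-1$; the exact binomial sum comes out a bit higher, near 0.965. Both fit in a few lines of Python (a sketch using only the standard library):

```python
import math

n, p = 100, 0.5

# Exact: sum the binomial pmf over k = 40..60.
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(40, 61))

# CLT approximation: X is roughly N(50, 5^2), and [40, 60] is mean +/- 2 sd,
# so the probability is about Phi(2) - Phi(-2).
def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

approx = phi(2) - phi(-2)

print(f"exact:  {exact:.4f}")
print(f"normal: {approx:.4f}")  # ~0.9545
```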

But what if you had only one chance in your lifetime to play this game? Assume also that bets are involved: certainly that $0.0456$ probability will enter into the calculation of what it is reasonable to stake relative to what you could win, and if your bet is risky enough, suddenly that 4.6% won't seem so negligible.

And we could go on. What's the probability that if you play the same game 50 times you'll win more than 45 of them? What does that number mean if you will never play again after those 50 games... and so on.
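That follow-up question is also computable. A quick sketch, assuming the 50 games are independent and each is won with probability 0.954: the number of wins is Binomial(50, 0.954), and the probability of more than 45 wins is again an exact sum.

```python
import math

n_games, p_win = 50, 0.954

# P(more than 45 wins out of 50) = sum of the binomial pmf over k = 46..50.
p_more_than_45 = sum(
    math.comb(n_games, k) * p_win**k * (1 - p_win)**(n_games - k)
    for k in range(46, n_games + 1)
)

print(f"P(win > 45 of 50 games): {p_more_than_45:.3f}")
```

The result is a single number again, and the same interpretive question recurs: what does it mean for a sequence of 50 games you will play only once?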

I've been thinking about this for some time, and your very interesting (and apparently non-mathematical) question made me want to put it in writing. The fact is that suddenly those different interpretations of probability don't seem so different to me, much less opposed; I find them somehow complementary.