I am learning about the St Petersburg Paradox https://en.wikipedia.org/wiki/St._Petersburg_paradox - here is my attempt to summarize it:
- A fair coin is tossed at each stage.
- The initial stake begins at 2 dollars and is doubled every time tails appears.
- The first time heads appears, the game ends and the player wins the current stake.
As we can see, this game has an infinite expected reward:
$$E(X) = \sum_{i=1}^{\infty} x_i \cdot p_i$$
$$E = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \frac{1}{2} \cdot 2 + \frac{1}{4} \cdot 4 + \frac{1}{8} \cdot 8 + \frac{1}{16} \cdot 16 + \dots = 1 + 1 + 1 + 1 + \dots = \sum_{n=1}^{\infty} 1 = \infty$$
The paradox is that even though the game has an infinite expected reward, any individual play ends with a finite payoff (the game terminates with probability 1). Although seemingly counterintuitive, this does seem logical. Moreover, we can write computer simulations to see that even a large number of games will each produce a finite reward.
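For instance, here is a minimal simulation sketch (the function name, seed, and trial count are my own choices) that plays the game many times and confirms every individual payoff is finite:

```python
import random

def st_petersburg() -> int:
    """Play one game: the stake starts at 2 and doubles on each tails;
    the first heads ends the game and pays the current stake."""
    stake = 2
    while random.random() < 0.5:  # tails with probability 1/2
        stake *= 2
    return stake

random.seed(0)
payoffs = [st_petersburg() for _ in range(100_000)]
# Every payoff is a finite power of two, even though E[payoff] is infinite.
print(min(payoffs), max(payoffs), sum(payoffs) / len(payoffs))
```

The sample mean printed at the end is always finite, but it drifts upward as the number of games grows, since rarer and rarer huge payoffs keep entering the average.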
My question is about applying the insights of the St Petersburg Paradox to the first passage/first hitting times of Brownian Motions.
For example, consider the generic Brownian Motion:
$$Y_t = y_0 + \mu t + \sigma W_t$$
- $Y_t$ is the value of the process at time $t$.
- $y_0$ is the initial value of the process at time $t=0$.
- $\mu$ is the drift rate, representing the deterministic trend.
- $\sigma$ is the diffusion coefficient, scaling the Brownian motion $W_t$.
- $W_t$ is a standard Brownian motion.
Now, consider the following situations (all with 1 dimensional continuous Brownian Motions):
- Situation 1: An unconstrained Brownian Motion without Drift starting at $y=1$ at time $t=0$ : When will it be expected to first reach the point $y=5$?
- Situation 2: A Brownian Motion without Drift starting at $y=1$ at time $t=0$ : If the stopping condition is when it reaches the point $y=5$ or $y=-5$ for the first time - when will it be expected to stop?
- Situation 3: A Brownian Motion without Drift starting at $y=1$ at time $t=0$: If it can only move between the points $y=5$ and $y=-5$, when will it be expected to first reach the point $y=3$? (i.e. if it reaches $y=-5$, it "bounces back" - I believe this is called a reflecting barrier, but I am not sure how to encapsulate this "bouncing back" behavior mathematically)
- Situation 4: How will the first passage times for Situations 1,2,3 change if we use Brownian Motions with Drift? (i.e. I know that the Inverse Gaussian Distribution is used here)
- Situation 5: How will the first passage times for Situations 1,2,3,4 change if the Brownian Motion can only take discrete values? (i.e. a random walk)
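For example, here is a simple simulation of the discrete analogue of Situation 2 (i.e. Situation 5 applied to it): a symmetric $\pm 1$ random walk from $y=1$ absorbed at $\pm 5$. If I have the standard two-barrier result right, a walk started at $x$ between barriers $a < x < b$ has expected absorption time $(x-a)(b-x)$, which here predicts $(1-(-5))(5-1) = 24$ steps:

```python
import random

def two_barrier_time(start: int, lo: int, hi: int) -> int:
    """Steps until a symmetric +/-1 random walk from `start` hits lo or hi."""
    y, steps = start, 0
    while lo < y < hi:
        y += random.choice((-1, 1))
        steps += 1
    return steps

random.seed(1)
trials = 20_000
mean_t = sum(two_barrier_time(1, -5, 5) for _ in range(trials)) / trials
print(mean_t)  # theory: (1 - (-5)) * (5 - 1) = 24
```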
Naively, using the logic of the St Petersburg Paradox, I could argue that the expected stopping times in all five situations above will be infinite. That is, technically, there is a very small probability that each of these Brownian Motions gets stuck going back and forth forever and never reaches its stopping condition. Each of these infinitely long paths is weighted with an infinitesimally small probability - and since there are infinitely many such paths, the expected value would be infinite. Is the St Petersburg Paradox fundamentally at odds with the concept of First Passage/Hitting Times?
Yet this is clearly not the case. I can repeatedly simulate any of the above situations and see that all of them have finite stopping times (even though some of them might be long). However, it now seems to me that, in theory, the more simulations you run, the higher the probability of encountering a very long one - so shouldn't the average stopping time statistically increase as the number of simulations increases?
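To make this concrete, I can compute the truncated mean directly, using the tail formula $P(T_a > t) = \operatorname{erf}\!\left(a/\sqrt{2t}\right)$ for a standard driftless Brownian Motion hitting a level at distance $a = 4$ (as in Situation 1), which I believe follows from the reflection principle. The truncated mean $E[\min(T_a, c)]$ grows without bound as the cap $c$ grows, which matches the behavior of ever-longer simulation runs:

```python
from math import erf, sqrt

def tail(t: float, a: float = 4.0) -> float:
    """P(T_a > t) for a standard driftless BM hitting a level at distance a."""
    return erf(a / sqrt(2.0 * t))

def truncated_mean(cap: float, n: int = 100_000, a: float = 4.0) -> float:
    """E[min(T_a, cap)] = integral from 0 to cap of P(T_a > t) dt (midpoint rule)."""
    h = cap / n
    return h * sum(tail((i + 0.5) * h, a) for i in range(n))

for cap in (1e2, 1e4, 1e6):
    print(cap, truncated_mean(cap))  # grows roughly like sqrt(cap), without bound
```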
Can someone please help me understand how to mathematically analyze the probability distributions and expected first passage/hitting times of the above situations? Why will some of them be finite and some of them be infinite? In situations where there is an infinite answer - is it wrong to just simulate the situation many times and take the expectation of the empirical distribution of these simulations as the average hitting time?
It seems to me that, by the logic of the St Petersburg Paradox, all hitting times should have infinite expectation - yet this is clearly not the case?
Note:
The St Petersburg game and the first passage time to a barrier do have the similarity that in the first case, the mean payoff is infinite while the probability of an infinite payoff is zero, and in the second case (for a driftless random walk/Brownian motion) the mean first passage time is infinite while the probability of it being infinite is zero.
In both cases the idea is the same. There is nonzero probability of unboundedly large payoffs/passage times, and while they are increasingly unlikely as they get large, the probabilities don't go down fast enough for there to be a finite mean. In the first case the probability goes down like the inverse payoff (the payoff for lasting $n$ rounds is $2^n$ and the probability is $\frac{1}{2^{n}}$) and in the second case the probability goes down asymptotically as $\tau^{-3/2},$ as you can see from the inverse Gaussian distribution in your other question. In order for there to be a finite mean, the probability would have to decay faster than inverse square.
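To make the $\tau^{-3/2}$ decay explicit: for a driftless standard Brownian motion started a distance $a$ below the barrier, the first passage time $T_a$ has the Lévy density

$$f_{T_a}(t) = \frac{a}{\sqrt{2\pi t^{3}}}\, e^{-a^{2}/(2t)} \;\sim\; \frac{a}{\sqrt{2\pi}}\, t^{-3/2} \qquad (t \to \infty),$$

so $E[T_a] = \int_0^\infty t\, f_{T_a}(t)\, dt$ behaves like $\int^\infty t^{-1/2}\, dt = \infty$, exactly the "not fast enough" decay described above.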
When we add another barrier on the other side of the starting point in the random walk (say we start at zero and have barriers at $1$ and $-5$), the expected passage time is no longer infinite. Even though it is "possible" for the particle to meander between the barriers forever, we could already see in the one-barrier case that such endless wandering carries zero probability. What changes with regard to the expected value is that now there can't be arbitrarily long excursions away from the barrier (since the barrier on the other side contains the particle). As we move the barrier at $-5$ further and further away from the starting point, to $-10$, to $-1000$, to $-1000000$, etc., the expected time diverges to infinity (and the probability that you will end by hitting the left barrier rather than the right barrier at $1$ goes to zero).
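For concreteness (stated here for a standard driftless Brownian motion started at $x$ with absorbing barriers at $a < x < b$; the simple random walk with integer barriers has the same form), the closed form

$$E_x[T_{a,b}] = (x-a)(b-x)$$

makes the divergence explicit: with start $0$ and barriers at $-5$ and $1$ this gives $5 \cdot 1 = 5$, moving the left barrier to $-1000$ gives $1000$, and as $a \to -\infty$ the expected time grows without bound, recovering the infinite one-barrier mean in the limit.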
We could similarly alter the St. Petersburg game with a cap, which would render the expected value finite. For instance, if we said there could only be 30 rounds, then the mean payoff would be 31. (Note that as a practical matter, 30 rounds is still rather fantastic, as the max payoff is $2^{30}$, which is just a shade over a billion dollars, albeit occurring with only about a 2-in-a-billion chance.) Note that if you paid 31 to play the game, you'd only come out ahead if you lasted at least 5 rounds, which has only a $1/16$ chance of happening; and if you lasted exactly 5 rounds (a $1/32$ chance), you'd be ahead by just 1 dollar. So it still seems intuitively not a great deal, since some very unlikely, very large payoffs are contributing to that expected value.
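A quick exact check of that capped mean (a sketch; the convention assumed here is that surviving all 30 rounds pays the capped stake $2^{30}$):

```python
from fractions import Fraction

def capped_mean(rounds: int) -> Fraction:
    """Exact mean payoff of the St Petersburg game capped at `rounds` rounds:
    payoff 2^n with probability 2^-n for n = 1..rounds, plus the leftover
    probability 2^-rounds paying the capped stake 2^rounds."""
    mean = sum(Fraction(2**n, 2**n) for n in range(1, rounds + 1))  # each term is 1
    mean += Fraction(2**rounds, 2**rounds)  # leftover mass, also contributes 1
    return mean

print(capped_mean(30))  # 31
```

The general pattern is that a cap at $k$ rounds gives a mean of $k + 1$: the cap converts an infinite sum of $1$'s into a finite one.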
When you add drift to the random walk, things change for the one barrier problem. If the drift is toward the barrier, the expected hitting time becomes finite. If it is away from the barrier then the expected time remains infinite and there is a nonzero probability that the particle never hits the barrier.
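A sketch of the favorable-drift case in its discrete form (a $\pm 1$ walk biased toward the barrier with $P(\text{step toward}) = p$; the standard result, assumed here, is that the expected hitting time from distance $d$ is $d/(2p-1)$, which is finite for $p > 1/2$):

```python
import random

def hit_time_with_drift(start: int, barrier: int, p_up: float) -> int:
    """Steps for a biased +/-1 walk (P(+1) = p_up) from `start` to reach `barrier`."""
    y, steps = start, 0
    while y < barrier:
        y += 1 if random.random() < p_up else -1
        steps += 1
    return steps

random.seed(2)
trials = 20_000
mean_t = sum(hit_time_with_drift(1, 5, 0.6) for _ in range(trials)) / trials
print(mean_t)  # theory: d / (2p - 1) = 4 / 0.2 = 20
```

Flipping the bias to $p < 1/2$ (drift away from the barrier) would make this loop fail to terminate with positive probability, matching the statement above.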
In answer to your last question: if you simulate, the empirical mean will keep increasing as you do more and more simulations, rather than stabilizing as it usually would. (And since this inherently involves arbitrarily large numbers if you keep going, you will certainly run into numerical difficulties and/or simulations that won't finish in a practical amount of time.)