How does one apply "risk tolerance" to a random walk?


Suppose we play a game in which you gamble on the outcome of several successive events. You bet $b$ dollars each time and you (a) lose the bet if you're wrong or (b) gain $wb$ dollars if you are right. (Example: With $b=100$ and $w=1.5$, after being correct once you would have $100 + 1.5 \times 100 = 250$ dollars in total.)

Now suppose you have an initial balance $B_0$, and you are forced to bet the same fixed amount every time, expressed as a fraction of $B_0$. My question: given $w$, your probability $p$ of being correct, and some risk tolerance $r$, what should you choose as $b$?

Here's an example of what I mean. Suppose $p = 0.51$, $w = 1$, $r < 20\%$. Even though I should expect to make a profit in the long run, I can't set $b = 0.5 \cdot B_0$, because then the probability that I go bankrupt is at least $0.2401$, which is higher than my risk tolerance $r$.
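Where the $0.2401$ comes from: with $b = 0.5 \cdot B_0$, losing the first two bets in a row already wipes you out, so that single path alone gives a lower bound on the bankruptcy risk. A quick check of the arithmetic:

```python
# Losing the first two bets of 0.5*B0 each takes B0 -> 0.5*B0 -> 0.
# Longer sequences (e.g. win-lose-lose-lose) also bankrupt you, so this
# is only a lower bound on the true probability of bankruptcy.
p = 0.51
lower_bound = (1 - p) ** 2
print(round(lower_bound, 4))  # 0.2401
```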

I'm pretty sure this is a random walk situation, and I suspect it involves finding a confidence interval at an $r$ significance level, though I'm not sure what quantity the interval would be for. I also know that calculating the probability of bankruptcy $\beta$ is central to the problem, but $\beta$ isn't the sum of a simple geometric series: a sequence like win-lose-lose-win-lose-lose-lose can still bankrupt you, which doesn't fit the $\sum (1-p)^{n-1} p$ model, since the order of outcomes matters.
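Because order matters, a Monte Carlo run is a handy sanity check on any closed-form attempt. A minimal sketch (my own illustration, assuming $w = 1$, a fixed bet of $b \cdot B_0$, and counting a run as bankrupt the moment the balance can no longer cover a bet; `simulate_bankruptcy` is a hypothetical name):

```python
import random

def simulate_bankruptcy(p, b, n, trials=200_000, seed=0):
    """Estimate the chance of going bankrupt within n bets when a fixed
    fraction b of the initial balance is wagered each time (w = 1)."""
    rng = random.Random(seed)
    bankrupt = 0
    for _ in range(trials):
        balance = 1.0                    # work in units of B0
        for _ in range(n):
            balance += b if rng.random() < p else -b
            if balance < b:              # cannot cover another bet
                bankrupt += 1
                break
    return bankrupt / trials
```

For $p = 0.51$, $b = 0.5$, $n = 2$ the estimate sits near $0.2401$ (only the loss-loss path bankrupts you that quickly), and it climbs for larger $n$ as the longer losing sequences kick in.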

On BEST ANSWER

$\newcommand{\bm}[1]{\boldsymbol{#1}}$I assume there will be a total of $n$ bets and, as in your example, that $w = 1$, so a win gains exactly what a loss costs. Writing $b$ for the fraction of $B_0$ wagered, note that no matter which $b$ you choose, your balance is always of the form $(1 - kb)B_0$, where $k$ is the net number of bets you have lost ($k$ is negative once you have won more than you have lost).

Now when $(1 - kb)B_0 < bB_0$ we say that you are bankrupt, because you can no longer cover a bet. Equivalently, bankruptcy occurs once

$$k > \frac{1 - b}{b}$$

Note that it makes no sense to leave this inequality slack: you would be putting yourself in a scenario where you're bankrupt but with money left over, money that could have been used to increase your expected profit without increasing your chance of bankruptcy. Therefore we should choose $b$ so that the threshold is hit exactly:

$$b = \frac{1}{k+1}$$

for some integer $k$, which is the net number of times you can lose without going bankrupt. (For example, $k = 3$ gives $b = 1/4$: after three net losses your balance is $B_0/4$, exactly enough for one final bet, so the $(k+1)$-th net loss is the one that bankrupts you.) Once you've done this you can construct a Markov chain with $k + n + 2$ states: an absorbing bankrupt state plus one state for each net position from $k$ losses up to $n$ wins. The initial state is the nearly all-zero vector $\bm{x}$ with only $x_{k+2} = 1$.

The update rule is $x_i \leftarrow p x_{i-1} + (1-p) x_{i+1}$, with the exception of $x_1$, the absorbing bankrupt state, which updates as $x_1 \leftarrow x_1 + (1-p) x_2$. After iterating this $n$ times, $x_1$ is your probability of going bankrupt, and weighting each state by its balance gives your expected outcome. Compute both for each $1 \leq k \leq n$ and choose the $k$ (hence $b = 1/(k+1)$) with the best expected outcome among those whose bankruptcy probability fits within your acceptable risk.
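A sketch of this procedure in Python (assuming $w = 1$; `evolve_chain` and `best_bet` are my own names, and I index the chain so that one absorbing state sits below states for every net position from $k$ losses to $n$ wins, making exactly $k$ net losses survivable at the tight bet $b = 1/(k+1)$):

```python
import numpy as np

def evolve_chain(p, k, n):
    """Chance of bankruptcy and expected final balance (as a fraction of
    B0) after n bets at the tight bet size b = 1/(k+1).

    State 0 is the absorbing bankrupt state.  State j >= 1 means the
    balance is j*b*B0, i.e. a net loss of (k + 1 - j) bets, so the
    starting state (net loss 0, balance B0) is j = k + 1.
    """
    size = k + n + 2               # bankrupt + net positions from k losses to n wins
    x = np.zeros(size)
    x[k + 1] = 1.0
    for _ in range(n):
        nxt = np.zeros(size)
        nxt[0] = x[0] + (1 - p) * x[1]        # losing your last bet-sized balance absorbs
        for j in range(1, size):
            if j + 1 < size:
                nxt[j] += (1 - p) * x[j + 1]  # a loss moves one state down
            if j >= 2:
                nxt[j] += p * x[j - 1]        # a win moves one state up
        x = nxt
    bankrupt = x[0]
    expected = float(x @ np.arange(size)) / (k + 1)  # state j holds balance j/(k+1)
    return bankrupt, expected

def best_bet(p, n, r):
    """Among b = 1/(k+1), pick the bet with the highest expected balance
    whose bankruptcy probability stays within the risk tolerance r."""
    best = None
    for k in range(1, n + 1):
        bankrupt, expected = evolve_chain(p, k, n)
        if bankrupt <= r and (best is None or expected > best[1]):
            best = (1.0 / (k + 1), expected, bankrupt)
    return best
```

For the question's example ($p = 0.51$, $w = 1$, $r = 0.2$) over $n = 2$ bets, $k = 1$ (i.e. $b = 0.5$) reproduces the $0.2401$ bankruptcy probability, which exceeds $r$, so `best_bet` falls back to the smaller bet $b = 1/3$.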