Hedging with Kelly Criterion on a single event


Let's suppose I'm betting on the flip of a coin. Each outcome has equal probability $p$. Fractional odds per single unit stake on each outcome are:

$heads=h=6/5$

$tails=t=7/5$

If I were to bet on a single outcome, the optimal fraction $x$ of my bankroll should be wagered on $tails$, since it has the greater positive long-term expectation of the two choices. According to the Kelly Criterion in this case: $$x=\frac{pt-(1-p)}{t}$$ However, the following conditions for hedging are met:

  1. All outcomes belong to a single event.
  2. All outcomes are mutually exclusive (no two can occur simultaneously).
  3. All have positive long term expectation.
  4. The probabilities of the outcomes, taken together, sum to $1$.

Consequently, I can wager my entire bankroll across both outcomes (the entire event) and profit from either one occurring. What should $x_t$ and $x_h$ be according to the Kelly Criterion?

To avoid daunting mathematics, I simplified the problem using Kelly's derivation with a single unknown variable $x$, the fraction wagered on $heads$ (so $1-x$ goes on $tails$):

$$f(x):=p\ln(1+hx-(1-x))+(1-p)\ln(1+t(1-x)-x)$$ $$=\ln\left((hx+x)^p(1+t-tx-x)^{1-p}\right)$$ Take the derivative of $f(x)$ and set it equal to $0$: $$\frac{df(x)}{dx}=0$$ Finally, solve for $x$: $$x=p$$

Implying $x$ should be apportioned to each outcome in strict correspondence with its probability (i.e., if outcome $h$ has probability $p$ and positive expectation, wager $p$ of your bankroll on that outcome, and so on, successively, with the rest of the outcomes).
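As a sanity check, the single-variable derivation above can be verified numerically. The sketch below (plain Python; the grid resolution and the illustrative value $p=0.6$ are arbitrary choices, not part of the original problem) maximizes $f(x)$ for the question's odds by grid search:

```python
import math

def expected_log_growth(x, p, h, t):
    """f(x): expected log-wealth betting fraction x on heads, 1 - x on tails."""
    return p * math.log(1 + h * x - (1 - x)) + (1 - p) * math.log(1 + t * (1 - x) - x)

p, h, t = 0.6, 6 / 5, 7 / 5   # heads probability and the question's fractional odds
# Grid search over x in (0, 1); the maximizer should land at x = p.
xs = [i / 10000 for i in range(1, 10000)]
best_x = max(xs, key=lambda x: expected_log_growth(x, p, h, t))
print(best_x)  # → 0.6, i.e. x = p
```

The same maximizer $x = p$ shows up for any odds $h, t > 0$, matching the closed-form result: the odds drop out of the first-order condition once the whole bankroll is staked.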

Is there a rigorous demonstration that the same principle is always optimal for more than two outcomes meeting the four conditions stated above?

Best answer:

Yes, at least if you accept the principle of logarithmic utility. Express the expected utility as a function of your bets, where $x_j$ is the fraction of the bankroll bet on outcome $j$ and $b_j$ its fractional odds:

$$u(x) = \sum_j p_j \ln \left(1+(1+b_j)x_j - \sum_k x_k\right)$$

and then you differentiate it:

$$\partial_l u(x) = \sum_j {p_j ( (1+b_j)\delta_{jl} - 1) \over \left(1+(1+b_j)x_j - \sum_k x_k\right)} \\= {p_l (1+b_l) \over \left(1+(1+b_l)x_l - \sum_k x_k\right)} - \sum_j {p_j \over \left(1+(1+b_j)x_j - \sum_k x_k\right)} $$
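This partial-derivative expression can be sanity-checked against central finite differences. A minimal sketch, using three outcomes with made-up probabilities and odds (chosen so each bet has positive expectation, per the question's conditions):

```python
import math

p = [0.5, 0.3, 0.2]   # outcome probabilities, summing to 1
b = [1.2, 2.5, 4.5]   # fractional odds; each satisfies p_l * b_l > 1 - p_l
x = [0.3, 0.2, 0.1]   # an interior point: sum(x) < 1

def u(x):
    s = sum(x)
    return sum(pj * math.log(1 + (1 + bj) * xj - s) for pj, bj, xj in zip(p, b, x))

def grad_l(x, l):
    s = sum(x)
    return (p[l] * (1 + b[l]) / (1 + (1 + b[l]) * x[l] - s)
            - sum(pj / (1 + (1 + bj) * xj - s) for pj, bj, xj in zip(p, b, x)))

eps = 1e-6
for l in range(3):
    xp, xm = x[:], x[:]
    xp[l] += eps
    xm[l] -= eps
    fd = (u(xp) - u(xm)) / (2 * eps)   # central finite difference
    assert abs(fd - grad_l(x, l)) < 1e-6
print("partial derivatives match finite differences")
```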

Now, of course, the gradient is never zero in the interior of the region $\sum_j x_j\le 1$: you can always expect more utility by betting more in some way, and the only thing that limits this is your bankroll.

So we are looking for an optimum on the boundary $\sum_j x_j = 1$. There, the projection of the gradient onto the tangent plane of the constraint must vanish. This means the gradient of $u$ must be parallel to $(1,1,\cdots,1)$; that is, all partial derivatives must be equal.

Since only the first term depends on $l$ (the sum over $j$ is common to all partial derivatives), the condition is that

$${p_l (1+b_l) \over \left(1+(1+b_l)x_l - \sum_k x_k\right)} = {p_l (1+b_l) \over (1+b_l)x_l} = {p_l\over x_l}$$

(using $\sum_k x_k = 1$ on the constraint surface) must not depend on $l$. The only way that can happen is if $x_l\propto p_l$, and since $\sum_j x_j = \sum_j p_j = 1$, this means $x_l = p_l$.
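A quick numerical check of this conclusion for more than two outcomes (again with illustrative, made-up probabilities and odds satisfying the four conditions): once $\sum_k x_k = 1$, the wealth after outcome $l$ is $(1+b_l)x_l$, so $x = p$ should beat any other point on the simplex.

```python
import math
import random

random.seed(0)
p = [0.5, 0.3, 0.2]   # probabilities summing to 1
b = [1.2, 2.5, 4.5]   # fractional odds; each satisfies p_l * b_l > 1 - p_l

def log_growth(x):
    # With sum(x) == 1, wealth after outcome l is (1 + b_l) * x_l.
    return sum(pl * math.log((1 + bl) * xl) for pl, bl, xl in zip(p, b, x))

best = log_growth(p)   # the claimed optimum x_l = p_l
for _ in range(10000):
    w = [random.random() for _ in p]
    s = sum(w)
    x = [wi / s for wi in w]   # a random point on the simplex
    assert log_growth(x) <= best + 1e-12
print("x = p beat 10000 random points on the simplex")
```

This is the Gibbs inequality in disguise: $\sum_l p_l \ln x_l$ is maximized over the simplex at $x_l = p_l$, independently of the odds $b_l$.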