Continuous Approximation for The Kelly Criterion


I am trying to follow the derivation of the Kelly criterion, the continuous case. Dr. Thorp shows the basics of the derivation here, pg. 22. With initial capital $V_0$, betting fraction $f$, and a random variable $X$ representing returns, where

$$ P(X = m+s) = P(X = m-s) = 0.5$$

The final capital is,

$$ V(f) = V_0 (1 + (1-f)r + fX) $$

$$ V(f) = V_0 (1 + r + f(X - r)) $$
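As a quick sanity check (my own sketch, not from Thorp's notes; the values of $m$, $s$, $r$, $f$, $V_0$ are illustrative), the two forms of $V(f)$ agree on both outcomes of the two-point return:

```python
# Check that V0*(1 + (1-f)*r + f*X) equals the rearranged
# V0*(1 + r + f*(X - r)) for both equally likely outcomes X = m ± s.
# All parameter values here are illustrative choices, not from Thorp.
m, s, r, f, V0 = 0.08, 0.20, 0.03, 0.5, 1.0

for X in (m + s, m - s):                     # the two outcomes, prob 0.5 each
    lhs = V0 * (1 + (1 - f) * r + f * X)     # risk-free part + risky part
    rhs = V0 * (1 + r + f * (X - r))         # rearranged form
    assert abs(lhs - rhs) < 1e-12
```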

His eventual goal is to find the $f$ that maximizes $E[\log V(f)]$, in the continuous limit. So he subdivides time into $n$ pieces, and $m$, $s^2$, and $r$ are replaced by $m/n$, $s^2/n$, and $r/n$ respectively (so the per-step standard deviation is $s/\sqrt{n}$),

$$ P(X_i = m/n + s/\sqrt{n}) = P(X_i = m/n - s/\sqrt{n}) = 0.5$$

$$ V_n(f)/V_0 = \prod _{i=1}^{n} (1 + r/n + f(X_i - r/n)) $$

then says to take the log of both sides and apply the expectation operator. I did that:

$$ \log V_n(f)/V_0 = \sum _{i=1}^{n} \log (1 + r/n + f(X_i - r/n)) $$

$$ E[\log V_n(f)/V_0] = \sum _{i=1}^{n} E[\log (1 + r/n + f(X_i - r/n))] $$
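A small Monte Carlo sketch (my own, with illustrative parameters) confirms that, because the $X_i$ are i.i.d., the expected $n$-period log return equals $n$ times the one-period expectation. Note the per-period risk-free rate is $r/n$:

```python
import math, random

# Monte Carlo check: with i.i.d. per-period returns
#   X_i = m/n + s/sqrt(n)  or  m/n - s/sqrt(n)   (prob 0.5 each),
# E[log V_n(f)/V0] should equal n * E[log(1 + r/n + f*(X_1 - r/n))].
# m, s, r, f, n are illustrative choices, not values from the source.
random.seed(0)
m, s, r, f, n = 0.08, 0.20, 0.03, 0.5, 100

def period_log(x):
    return math.log(1 + r/n + f * (x - r/n))

up, dn = m/n + s/math.sqrt(n), m/n - s/math.sqrt(n)

# exact one-period expectation times n (two-point distribution)
exact = n * 0.5 * (period_log(up) + period_log(dn))

# Monte Carlo average of the full n-period log return
trials = 10000
mc = sum(
    sum(period_log(random.choice((up, dn))) for _ in range(n))
    for _ in range(trials)
) / trials

assert abs(mc - exact) < 0.01
```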

This is where I get stuck. Thorp mentions "we expand the result in a power series", and I've seen a similar trick in a different book, pg 137, where the author reaches an expression like $1/(1+fg)$ after differentiating a log and expands it as $1 - fg + \cdots$. However, I am not able to reach a similar statement.

Since the $X_i$ are identically distributed, this reduces to

$$ = n E[\log (1 + r/n + f(X_n - r/n))] $$

Thorp eventually reaches a formula like

$$ g(f) = r + f(m-r) - s^2f^2/2 + O(n^{-1/2})$$
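To see this formula in action, here is a numerical sketch (my own check, with illustrative parameters) comparing Thorp's closed form against the exact $n$-period expectation; the gap shrinks as $n$ grows:

```python
import math

# Compare Thorp's closed-form growth rate
#   g(f) = r + f*(m - r) - s^2 * f^2 / 2
# with the exact n-period expectation n*E[log(1 + r/n + f*(X_1 - r/n))].
# Parameter values are illustrative, not from the source.
m, s, r, f = 0.08, 0.20, 0.03, 0.5

def g_exact(n):
    up = m/n + s/math.sqrt(n)    # the two equally likely per-period returns
    dn = m/n - s/math.sqrt(n)
    return n * 0.5 * (math.log(1 + r/n + f*(up - r/n))
                      + math.log(1 + r/n + f*(dn - r/n)))

g_closed = r + f*(m - r) - s**2 * f**2 / 2

errs = [abs(g_exact(n) - g_closed) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]     # approximation error shrinks with n
assert errs[-1] < 1e-4
```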

Any ideas?

Thanks,

Best answer:

Per @Did's comment, the derivation goes somewhat like this:

$$ P(X = m+s) = P(X = m-s) = 0.5$$

a symmetric two-point distribution with mean $m$ and variance $s^2$, a discrete stand-in for normally distributed returns.

With capital $V_0$, allocation $f$, and risk-free return $r$,

$$ V(f) = V_0 (1 + (1-f)r + fX) $$

Rearrange

$$ V(f) = V_0 (1 + r + f(X - r)) $$

Divide time into $n$ pieces,

$$ P(X_i = m/n + s/\sqrt{n}) = P(X_i = m/n - s/\sqrt{n}) = 0.5$$

$$ V_n(f)/V_0 = \prod _{i=1}^{n} (1 + r/n + f(X_i - r/n)) $$

We are trying to maximize the expected log growth,

$$ E[\log V_n(f)/V_0] = g(f) = n E[\log (1 + (r/n) + f(X_n - (r/n)))] $$

In order to get rid of the log, we use power series

$$ \log(1+u) = u - \frac{u^2}{2} + \frac{u^3}{3} - \cdots $$

It's sufficient to use the first two terms.
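A quick numerical sketch (my own) of why two terms suffice: the truncation error of $\log(1+u) \approx u - u^2/2$ is of order $u^3$, and here $u = O(1/\sqrt{n})$, so per period the dropped terms are $O(1/(n\sqrt{n}))$.

```python
import math

# The truncation log(1+u) ≈ u - u^2/2 has error bounded by the next
# series term, of order u^3; the sample values of u are illustrative.
for u in (0.1, 0.01, 0.001):
    err = abs(math.log(1 + u) - (u - u*u/2))
    assert err < u**3          # cubic-order error
```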

Substituting $X_i = m/n + Us/\sqrt{n}$, where $U = \pm 1$ with probability $0.5$ each, each log term becomes

$$ \log \bigg( 1 + \frac{r}{n} + f \big( \frac{m}{n} + \frac{Us}{\sqrt{n}} - \frac{r}{n} \big) \bigg) $$

Let's look inside the log, the $u$ part

$$ u = \frac{r}{n} + f \big( \frac{m}{n} + \frac{Us}{\sqrt{n}} - \frac{r}{n} \big) $$

.. and what happens to the square of $u$

$$ u^2 = \frac{r^2}{n^2} + f^2\big(\cdots\big)^2 + \frac{2r}{n}f\big(\cdots\big) $$

We are not interested in terms smaller than $O(1/(n\sqrt{n}))$. So from the expansion below, for example, only $\frac{f^2U^2s^2}{n}$ will "make it"; the rest are negligible.

$$ \big( \frac{m}{n} + \frac{Us}{\sqrt{n}} - \frac{r}{n} \big)^2 = \frac{U^2s^2}{n} + \frac{2Usm}{n\sqrt{n}} + \frac{m^2}{n^2} - \cdots $$

We end up with

$$ g(f)/n = E\big[ \frac{r}{n} + f \big( \frac{m}{n} + \frac{Us}{\sqrt{n}} - \frac{r}{n} \big) - \frac{f^2U^2s^2}{2n} + O(1/(n \sqrt{n})) \big] $$

$$ = \frac{r}{n} + f \big( \frac{m}{n} + \frac{E[U]s}{\sqrt{n}} - \frac{r}{n} \big) - \frac{f^2E[U^2]s^2}{2n} + O(1/(n \sqrt{n})) $$

Since $E[U] = 0$ and $E[U^2] = 1$,

$$ = \frac{r}{n} + f \big( \frac{m}{n} - \frac{r}{n} \big) - \frac{f^2s^2}{2n} + O(1/(n \sqrt{n})) $$

Multiplying through by $n$ (which turns the $O(1/(n\sqrt{n}))$ error into $O(n^{-1/2})$),

$$ g(f) = r + f(m-r) - \frac{s^2f^2}{2} + O(n^{-1/2}) $$

As $n \to \infty$

$$ g(f) = r + f(m-r) - \frac{s^2f^2}{2} $$
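As a follow-up sketch (a standard consequence of Thorp's formula, with illustrative numbers): $g$ is a downward parabola in $f$, so setting $g'(f) = (m-r) - s^2 f = 0$ gives the continuous Kelly fraction $f^* = (m-r)/s^2$.

```python
# Maximize the limiting growth rate g(f) = r + f*(m-r) - s^2*f^2/2.
# The parameter values are illustrative, not from the source.
m, s, r = 0.08, 0.20, 0.03

f_star = (m - r) / s**2                  # analytic maximizer, here 1.25

def g(f):
    return r + f*(m - r) - s**2 * f**2 / 2

# crude grid search over f in [0, 4] to confirm the analytic maximizer
grid = [i / 1000 for i in range(4001)]
f_grid = max(grid, key=g)
assert abs(f_grid - f_star) < 1e-2
```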

Sources

The Kelly Criterion

Blog