Say we have a gambler who makes money through sports betting. My aim is to develop a model that helps the gambler maximise his winnings and minimise his losses.
In my model, rather than betting a fixed amount of money, the gambler bets a certain fraction $0 < r < 1$ of his current betting fund. He continues betting that fraction as his betting fund increases or decreases until he cashes out after a certain number of sessions $n$.
The gambler's initial fund shall be $F_0$. His fund after $i$ sessions shall be $F_i$.
His probability of making a correct prediction shall be $0 < p < 1$. If our gambler had a $p$ of $0$ or $1$, then the entire model would be useless.
The average odds our gambler deals with are $a > 1$.
The gambler's minimum desired profit upon cash out is $T$.
$$T \le F_n - F_0 \tag{1}$$
If we express everything as a multiple of $F_0$ (so that $F_0 = 1$), $(1)$ can be rewritten as:
$$T \le F_n - 1 \tag{1.1}$$
So the known quantities are $T$, $a$, $F_0$ and $p$.
Should our gambler lose a particular session, say session $i+1$,
$$F_{i+1} = (1-r)F_i \tag{2.1}$$
Should he win that particular session
$$F_{i+1} = F_i(1-r + ra) \tag{2.2}$$
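As a quick sanity check, the two update rules $(2.1)$ and $(2.2)$ can be sketched as a single step function (a minimal Python sketch; the function name is my own):

```python
def next_fund(fund, r, a, won):
    """One betting session: stake r*fund at odds a.

    Applies Eq. (2.2) on a win and Eq. (2.1) on a loss.
    """
    if won:
        return fund * (1 - r + r * a)  # stake comes back multiplied by the odds
    return fund * (1 - r)              # stake is lost


# Example: a 100-unit fund, betting 10% at odds 2.0
win = next_fund(100.0, 0.1, 2.0, won=True)
loss = next_fund(100.0, 0.1, 2.0, won=False)
print(win, loss)  # roughly 110 and 90
```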
Given that the gambler plays $n$ sessions before cashing out:
His expected number of wins $= pn$ $(3.1)$
His expected number of losses $= (1-p)n$ $(3.2)$
Now there are many different ways to distribute the gambler's wins and losses among the $n$ sessions ($\binom{n}{pn}$ of them), and while calculating all scenarios and finding the average $F_n$ may be ideal, it is computationally very expensive. So I decided to model the problem assuming the losses take place in the worst way possible (back to back, at the very beginning of the run).
The gambler's fund (as a multiple of $F_0$) after $n$ sessions is then given by the formula:
$$F_n = (1-r)^{(1-p)n}\{(1-r)+ra\}^{pn} \tag{4}$$
Now we know that our gambler wants to make a minimum profit of $T$ so we transform $(4)$ into an inequality using $(1.1)$
We get:
$$(1-r)^{(1-p)n}\{(1-r)+ra\}^{pn} \ge T + 1 \tag{4.1}$$
Taking the natural logarithm of both sides, I get:
$$(1-p)n\ln(1-r) + pn\ln(1-r+ra) \ge \ln(T+1) \tag{4.2}$$
$$n\{(1-p)\ln(1-r) + p\ln(1+r(a-1))\} \ge \ln(T+1) \tag{4.3}$$
Given the constraints on the variables and constants, I want to determine the minimum value of $n$ and the maximum value of $r$ that satisfy $(4.1)$ / $(4.3)$ (whichever is easier to solve) for any given $T$, $a$, $p$.
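For a fixed $r$, the minimum $n$ follows directly from $(4.3)$; here is a sketch (the numbers in the example are hypothetical):

```python
import math


def min_sessions(T, a, p, r):
    """Smallest integer n satisfying inequality (4.3).

    Returns None when the per-session expected log-growth is not
    positive, in which case the target profit is never reached.
    """
    growth = (1 - p) * math.log(1 - r) + p * math.log(1 + r * (a - 1))
    if growth <= 0:
        return None
    return math.ceil(math.log(T + 1) / growth)


# Hypothetical inputs: 50% profit target, odds 2.2, 55% accuracy, 10% stake
print(min_sessions(T=0.5, a=2.2, p=0.55, r=0.10))  # 28
```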
MAJOR EDIT
Thanks to @Rodrigo de Azevedo, I discovered Kelly's Criterion. I was sold on it, and decided to implement it into my gambling method.
For the purposes of my method, Kelly's criterion is given by:
$$r_i = p - \frac{1-p}{a_i - 1} \tag{5}$$
Where:
$r_i$ is the ratio at session $i$
$a_i$ is the odds at session $i$
Now $r: 0 \lt r \lt 1$ $(5.1)$
Applying the lower bound of $(5.1)$ to $(5)$ we get:
$$\frac{p(a-1) - (1-p)}{a-1} \gt 0$$
Multiplying both sides by $a - 1 \gt 0$ preserves the inequality:
$$p(a-1) - (1-p) \gt 0$$
$pa - p - 1 + p \gt 0$
$pa - 1 > 0$
$pa > 1$
$p > 1/a$ $(5.2)$
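Equation $(5)$ together with condition $(5.2)$ can be sketched as follows (the function name is my own):

```python
def kelly_fraction(p, a):
    """Kelly stake from Eq. (5): r = p - (1 - p) / (a - 1).

    By condition (5.2) the stake is positive only when p > 1/a,
    i.e. only when the gambler actually has an edge at odds a.
    """
    if p * a <= 1:
        raise ValueError("no edge: need p > 1/a")
    return p - (1 - p) / (a - 1)


print(kelly_fraction(0.55, 2.2))  # ≈ 0.175, since 0.55 > 1/2.2 ≈ 0.4545
```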
Now that that's out of the way, we still have the problem of determining minimum $n$ such that we make a profit $ \ge T$.
In order to do this, we'll assume a "mean" value for $a$ then find the minimum value for $n$ that satisfies $(4.1)$
Because the odds for future matches are not known in advance, the mean odds after $i$ sessions, say $a_{\mu i}$, may not equal the mean odds after $n$ sessions, $a_{\mu n}$. To protect against this (and because I'm not a very big risk taker), I'll use a value lower than $a_{\mu}$, called $a_{det}$.
$a_{det} = a_{\mu} - k\sigma$
Where $a_{\mu}$ is the geometric mean (rather than the arithmetic mean) of the odds and $\sigma$ is the associated standard deviation.
Using Chebyshev's inequality, at least $\frac{k^2 - 1}{k^2}$ of the distribution of the odds lies above $a_{det}$.
Picking $k = 2.5$:
$$\frac{2.5^2 - 1}{2.5^2} = 0.84$$
So our $a_{det}$ is lower than at least $84\%$ of the distribution of the odds. This is safe enough for me.
$a_{det} = a_{\mu} - 2.5\sigma$
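Given a history of odds, $a_{det}$ can be estimated as below; pairing the sample standard deviation with the geometric mean is my assumption, since the text does not pin down which deviation is "associated":

```python
import statistics


def a_det(odds, k=2.5):
    """Conservative odds estimate: a_mu - k * sigma, where a_mu is the
    geometric mean of past odds and sigma their sample standard
    deviation (the pairing of the two is an assumption here)."""
    a_mu = statistics.geometric_mean(odds)
    sigma = statistics.stdev(odds)
    return a_mu - k * sigma


# Hypothetical odds history
print(a_det([1.8, 2.0, 2.2, 2.5, 1.9]))  # ≈ 1.372
```

Note that for very volatile odds this estimate can drop below $1$, at which point the model's assumption $a > 1$ no longer holds and a smaller $k$ would be needed.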
Using $a_{det}$, we'll calculate the minimum $n$ that satisfies $(4.1)$
Subbing $(5)$ and $a_{det}$ into $(4.1)$, we get:
$$\left(1-\left(p - \frac{1-p}{a_{det}-1} \right) \right)^{n(1-p)} \cdot \left(1+\left(p - \frac{1-p}{a_{det}-1} \right)(a_{det} - 1)\right)^{np} \ge T + 1 \tag{6.0}$$
The LHS can be simplified further:
$$\left(\frac{a_{det}-1-(pa_{det}-1)}{a_{det}-1}\right)^{n(1-p)}\cdot\left(pa_{det}-1+1\right)^{np}$$
$$\left(\frac{a_{det}-pa_{det}}{a_{det}-1}\right)^{n(1-p)}\cdot\left(pa_{det}\right)^{np}$$
$$\left(\frac{a_{det}(1-p)}{a_{det}-1}\right)^{n(1-p)}\cdot\left(pa_{det}\right)^{np} \tag{6.1}$$
P.S. Due to my particularly low $a_{det}$, we'll likely make much more profit than $T$, but that's loads better than choosing a higher $a_{det}$ and making less.
To find the minimum $n$ that satisfies $(6.1)$:
$$\left(\frac{a_{det}(1-p)}{a_{det}-1}\right)^{n(1-p)}\cdot\left(pa_{det}\right)^{np} \ge T+1 \tag{6.1}$$
On the LHS, $n$ is a common factor in both exponents. Taking the natural logarithm of both sides:
$$n(1-p)\ln\left(\frac{a_{det}(1-p)}{a_{det}-1}\right) + np\,\ln\left(pa_{det}\right) \ge \ln(T+1)$$
Factorise the LHS with $n$:
$$n\left((1-p)\ln\left(\frac{a_{det}(1-p)}{a_{det}-1}\right) + p\,\ln\left(pa_{det}\right)\right) \ge \ln(T+1) \tag{6.2}$$
Rewriting $(6.2)$ (the bracketed term is positive whenever $p \gt 1/a_{det}$, so dividing by it preserves the inequality):
$$n \ge \frac{\ln(T+1)}{(1-p)\ln\left(\frac{a_{det}(1-p)}{a_{det}-1}\right) + p\,\ln\left(pa_{det}\right)} \tag{6.3}$$
Presto.
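Putting $(6.3)$ into code (a sketch; the guard mirrors condition $(5.2)$, and the example numbers are hypothetical):

```python
import math


def min_sessions_kelly(T, p, a_det):
    """Minimum whole number of sessions n from Eq. (6.3),
    betting the Kelly fraction each session."""
    growth = (1 - p) * math.log(a_det * (1 - p) / (a_det - 1)) \
             + p * math.log(p * a_det)
    if growth <= 0:
        raise ValueError("need p > 1/a_det for positive log-growth")
    return math.ceil(math.log(T + 1) / growth)


# Hypothetical: 50% profit target, 55% accuracy, a_det of 2.0
print(min_sessions_kelly(T=0.5, p=0.55, a_det=2.0))  # 81
```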
I realised this works because Kelly's criterion maximises the expected logarithmic growth rate of the fund.