Number of times you have to make a bet on a uniform distribution to expect to achieve a minimal result


Edited for the sake of clarity:

If you have a random variable $Q$ distributed uniformly on some interval, say $[a,b]$, what is the function $f$ that describes how many times you have to draw from the distribution to expect to achieve an outcome of at least $c \in [a,b]$?


Best answer

"Throw the dice" $n$ times. Let the results be $X_1,X_2,\dots,X_n$. Let $Y=Y_n$ be the minimum. The probability that this is $\gt y$ is the probability that all the $X_i$ are greater than $y$.

Any particular $X_i$ is greater than $y$, where $a\le y\le b$, with probability $\frac{b-y}{b-a}$. So the probability they all are is $\left(\frac{b-y}{b-a}\right)^n$.

It follows that the cumulative distribution function of $Y$ is $$1-\left(\frac{b-y}{b-a}\right)^n.\tag{1}$$ Now you can evaluate any probability you want that concerns the random variable $Y=Y_n$.
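Formula (1) is easy to sanity-check numerically. The sketch below compares it to a Monte Carlo estimate; the endpoints $a,b$, the number of draws $n$, and the threshold $y$ are arbitrary illustrative values, not taken from the question.

```python
import random

# Illustrative parameters (not from the question).
a, b, n = 2.0, 5.0, 4
y = 3.0
trials = 200_000

random.seed(0)
# Empirical P(min of n uniform draws <= y).
hits = sum(
    min(random.uniform(a, b) for _ in range(n)) <= y
    for _ in range(trials)
)
empirical = hits / trials

# Formula (1): F_Y(y) = 1 - ((b - y)/(b - a))^n.
theoretical = 1 - ((b - y) / (b - a)) ** n
```

With these numbers the formula gives $1-(2/3)^4=65/81$, and the simulated frequency lands within Monte Carlo noise of it.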

The expectation: At an earlier stage of your post, you seemed to be asking for the mean of the minimum. We now proceed to calculate that.

Formula (1) shows that the density function of $Y$ is $n(b-y)^{n-1}(b-a)^{-n}$ on our interval and $0$ elsewhere.

Now we find the expectation of $Y$ in the usual way. So we want to integrate $y$ times the density from $a$ to $b$. Use $y=b-(b-y)$. So we want $$\int_a^b \left(bn(b-y)^{n-1}(b-a)^{-n}-n(b-y)^{n}(b-a)^{-n}\right)\,dy.$$ Both parts of the integral are easy to handle. We get $$b-\frac{n}{n+1}(b-a),$$ which simplifies to $$\frac{n}{n+1}a+\frac{1}{n+1}b.$$ Nice and simple! The mean of the minimum is $\frac{1}{n+1}$ of the way from $a$ to $b$.
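The mean $\frac{n}{n+1}a+\frac{1}{n+1}b$ can likewise be checked by simulation; again the parameter values below are arbitrary choices for illustration.

```python
import random

# Illustrative parameters (not from the question).
a, b, n = 0.0, 1.0, 3
trials = 200_000

random.seed(1)
# Empirical mean of the minimum of n uniform draws.
sample_mean = sum(
    min(random.uniform(a, b) for _ in range(n)) for _ in range(trials)
) / trials

# E[Y_n] = (n/(n+1)) a + (1/(n+1)) b, i.e. 1/(n+1) of the way from a to b.
expected = (n / (n + 1)) * a + (1 / (n + 1)) * b
```

Here $n=3$ on $[0,1]$ gives an expected minimum of $1/4$.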

The cdf found in Formula (1), and the mean of $Y$, will be I hope enough for you to solve your applied problem. If there are difficulties with that, just ask.

Remark: It is possible that you may need the maximum $Z$ of our $n$ random variables.

This has very nice expectation also. It is $\frac{1}{n+1}a +\frac{n}{n+1}b$.

The distribution of the maximum is marginally nicer than that of the minimum: the probability that the maximum is $\le z$ is $\left(\frac{z-a}{b-a}\right)^n$.

If you want to find the $n$ for which the expectation of the maximum is greater than $c$, then it is the distribution of the maximum that is relevant. The actual applied problem needs to be described in greater detail before we can see what the appropriate calculations are.
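If that is the calculation needed, it has a closed form: solving $E[Z_n]=b-\frac{b-a}{n+1}\ge c$ for the smallest integer $n$ gives $n=\lceil (c-a)/(b-c)\rceil$. A minimal sketch, assuming $a\le c<b$ (the function name is mine):

```python
import math

def n_for_expected_max(a, b, c):
    """Smallest n such that E[max of n draws] = b - (b - a)/(n + 1) >= c.
    Assumes a <= c < b; at least one draw is always made."""
    # E[Z_n] >= c  <=>  (b - a)/(n + 1) <= b - c  <=>  n >= (c - a)/(b - c)
    return max(1, math.ceil((c - a) / (b - c)))
```

For instance, on $[0,1]$ with $c=3/4$ this gives $n=3$, and indeed $E[Z_3]=\frac{3}{4}$.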

Another answer

"Having a maximum result in $[c,b]$" is one of those statements where the language clouds the issue a bit. This event is equivalent to having at least one result in $[c,b]$, since you couldn't have a result in $[c,b]$ unless one of them is the maximum.

The complement event is easily examined: on each pull you fall outside the interval with probability $\frac{c-a}{b-a}$, so the chance that all $n$ pulls fall outside the interval is $\left(\frac{c-a}{b-a}\right)^n$.

Therefore we can define a function $g_c(n)=1-\left(\frac{c-a}{b-a}\right)^n$ giving the probability that the maximum lies in $[c,b]$ after $n$ draws. Remember that you want to input a $c$ and find an $n$. To do this, we have to pin down what we mean by "expect", so I'll introduce a parameter $p$: the probability with which we want the event to have occurred.

The problem has been reduced to this: given $c$, find $f(c)$, the smallest $n$ such that $g_c(n)\geq p$. Some simple algebra ($g_c(n)\ge p \iff k^n \le 1-p \iff n \ge \log_k(1-p)$, since $0<k<1$ makes $\log_k$ decreasing) gives $f(c)=\lceil\log_k(1-p)\rceil$, where $k=\frac{c-a}{b-a}$.
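A direct transcription of this formula, assuming $a<c<b$ and $0<p<1$ so that $0<k<1$ (the function name is mine):

```python
import math

def draws_needed(a, b, c, p):
    """Smallest n with g_c(n) = 1 - k^n >= p, where k = (c - a)/(b - a).
    Assumes a < c < b and 0 < p < 1, so 0 < k < 1 and log(k) < 0."""
    k = (c - a) / (b - a)
    # ceil(log_k(1 - p)) = ceil(ln(1 - p) / ln(k)); always draw at least once.
    return max(1, math.ceil(math.log(1 - p) / math.log(k)))
```

For example, with $a=0$, $b=1$, $c=\tfrac12$, $p=0.95$ this gives $5$ draws: $g_c(5)=1-0.5^5\approx 0.969\ge 0.95$, while $g_c(4)=0.9375<0.95$.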