Find an approximation for the number of times you have to sample a random variable


Let $a < b < c$ be randomly chosen numbers such that $b > 0$.

Find an approximation for $k$, the number of times one must choose a random number in the interval $(a, b)$ so that the sum of the chosen numbers is greater than $c$.

I don't even know where to start. I wrote a simulation in R, but I don't know how to turn it into a mathematical approximation.

# Example values with 0 <= a < b < c; the original script never defined them
a <- 0
b <- 2
c <- 20

tries <- 10^4
thislist <- numeric(0)  # start empty: seeding the list with a 0 biases the mean
for (i in 1:tries){
  n <- 0
  k <- 0  # count every draw, including the first
  while (n < c){
    k <- k + 1
    n <- n + runif(1, a, b)
  }
  thislist <- c(thislist, k)
}

print(mean(thislist))

There is 1 best solution below

BEST ANSWER

I will assume $a \ge 0$.

The average of each selection is $\frac{a+b}{2}$. If we always get the average, then it will take $\frac{2c}{a+b}$ tries to exceed $c$.
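
This heuristic can be made precise with Wald's identity: $k$ is a stopping time for the i.i.d. draws $X_1, X_2, \dots \sim U(a, b)$, so (as a sketch that ignores the overshoot past $c$)

$$E\left[\sum_{i=1}^{k} X_i\right] = E[k]\,E[X_1] = E[k]\cdot\frac{a+b}{2} \approx c, \qquad \text{hence } E[k] \approx \frac{2c}{a+b}.$$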

If you modify the choices of $0 \le a < b < c$ below, you will find that the bulk of the distribution of $k$ is near $\frac{2c}{a+b}$, and that the mean of $k$ is also near $\frac{2c}{a+b}$.

a <- 0
b <- 2
c <- 20
tries <- 1000

thislist <- numeric(0)
for (i in 1:tries) {
  n <- 0
  k <- 0
  while (n <= c){
    k <- k + 1
    n <- n + runif(1, a, b)
  }
  thislist <- c(thislist, k)
}

hist(thislist)
mean(thislist)
2 * c / (a+b)
> mean(thislist)
[1] 20.679

> 2 * c / (a+b)
[1] 20
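
The simulated mean ($\approx 20.68$) sits slightly above $\frac{2c}{a+b} = 20$ because the running sum overshoots $c$ on its last draw. As a rough refinement (a standard renewal-theory asymptotic, stated here without proof), the mean overshoot is approximately $\frac{E[X^2]}{2E[X]}$, which you can fold into the estimate. Reusing the a, b, c from the script above:

mu <- (a + b) / 2             # mean of one U(a, b) draw
m2 <- (a^2 + a*b + b^2) / 3   # second moment of U(a, b)
(c + m2 / (2 * mu)) / mu      # refined estimate of E[k]

For $a = 0$, $b = 2$, $c = 20$ this gives $(20 + 2/3)/1 \approx 20.67$, close to the simulated mean of $20.679$ above.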