I was faced with the task of calculating the average value of numbers that satisfy the following rule.
We have an infinite sequence of numbers $num$. We also have a sequence $f$ with values $f(a) = b - a - 1$, where $b$ is the first index $i > a$ such that $num[i] < num[a]$. For example, for the first numbers of the sequence num = {3, 4, 1, 2, 5, 0, ...}, we get f = {1, 0, 2, 1, 0, #, ...}:
(1 = 3 - 1 - 1,
0 = 3 - 2 - 1,
2 = 6 - 3 - 1,
1 = 6 - 4 - 1,
0 = 6 - 5 - 1,
# = c - 6 - 1), where c is the index of the next number that is less than 0.
Because the sequence is infinite, it has no minimum value, so such an index $b$ eventually exists for every position. The task is to calculate the average of the values of the array $f$.
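To make the definition concrete, here is a small sketch that computes $f$ for a finite prefix of $num$ (using a monotonic stack so it runs in linear time; `None` marks positions like # whose smaller successor lies beyond the prefix):

```python
def f_values(num):
    """For each index a, return b - a - 1, where b is the first index
    after a with num[b] < num[a]; None if no such index exists within
    the finite prefix. Uses a monotonic stack, O(n) overall."""
    out = [None] * len(num)
    stack = []  # indices that have not yet seen a smaller element
    for i, x in enumerate(num):
        while stack and num[stack[-1]] > x:
            a = stack.pop()
            out[a] = i - a - 1
        stack.append(i)
    return out

# The example from the question (0-based indexing gives the same gaps):
print(f_values([3, 4, 1, 2, 5, 0]))  # [1, 0, 2, 1, 0, None]
```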
I calculated the expected value of a single term as follows: the probability that a number is less than a given one is 0.5, so the probability that $f(a) = 0$ is 0.5, the probability that $f(a) = 1$ is $0.5 \cdot 0.5 = 0.25$, and so on, which gives an expected value of 1. Then, by the law of large numbers, the average value that $f(a)$ takes should be 1, and hence the average of all the values of $f$ should be 1 as well.
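The first step of this reasoning can at least be checked empirically. Here is a small Monte Carlo sketch that estimates $P(f(a) = 0)$, i.e. the probability that the very next value is smaller; it assumes $num$ consists of iid uniform random values, which the question does not actually specify:

```python
import random

def sample_f0(trials=100_000, seed=0):
    """Estimate P(f(a) = 0) for iid uniform draws: the chance that the
    element right after num[a] is smaller than num[a]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        if b < a:
            hits += 1
    return hits / trials

print(sample_f0())  # close to 0.5
```

The same approach extends to estimating $P(f(a) = k)$ for larger $k$, which is a useful sanity check on the probabilities above.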
However, when I programmed a straightforward search for this average in Python, I found that the average value of the array grows as the number of terms increases, and it is much greater than 1 (for k = 1,000,000, avg(f) ≈ 12).
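For reference, a sketch of the kind of experiment I ran (not my exact code; it uses a monotonic stack for speed and again assumes iid uniform values for $num$, averaging only over the indices where $f$ is defined within the prefix):

```python
import random

def average_f(n, seed=1):
    """Average of the defined values of f over a random prefix of
    length n (iid uniform draws, an assumption about num)."""
    rng = random.Random(seed)
    num = [rng.random() for _ in range(n)]
    gaps, stack = [], []
    for i, x in enumerate(num):
        while stack and num[stack[-1]] > x:
            a = stack.pop()
            gaps.append(i - a - 1)  # f(a) = b - a - 1 with b = i
        stack.append(i)
    return sum(gaps) / len(gaps)

for n in (10_000, 100_000, 1_000_000):
    print(n, average_f(n))
```

Running this shows the average climbing well past 1 as n grows, matching what I observed.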
Which is the correct value for this task: average = 1, or the larger value that the program seems to approach? Should I look for an error in my code, or is there a mistake in the calculations above?
Thanks for your help.