Let's say you had an algorithm that finds the minimum and maximum values of an array of $n$ values by looking at each element in turn: it compares the element to the current maximum, and if the element is not greater, it then compares it to the current minimum (both the maximum and minimum are initially set to the first element of the array).
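For concreteness, here is a minimal sketch of that algorithm in Python (the question doesn't specify a language), with a counter added so the comparison counts behind the efficiency functions are visible:

```python
def min_max(arr):
    """Return (minimum, maximum, comparisons) for a non-empty array.

    Both min and max start at the first element. Each later element
    is compared to the current max first, and is compared to the
    current min only when it is not a new maximum.
    """
    lo = hi = arr[0]
    comparisons = 0
    for x in arr[1:]:
        comparisons += 1           # compare x to the current maximum
        if x > hi:
            hi = x                 # new maximum: only 1 comparison spent
        else:
            comparisons += 1       # compare x to the current minimum
            if x < lo:
                lo = x
    return lo, hi, comparisons
```

An increasing array hits the best case of $n-1$ comparisons (every element is a new maximum), while a decreasing array hits the worst case of $2(n-1)$.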
As a result, its best-case efficiency function is $n-1$, while its worst case is $2(n-1)$. Since both are $O(n)$, the average case must also be $O(n)$. The material I have says that its average efficiency is $2n-\ln(n)-1$.
I'm not certain why this is the case: if there were a fixed probability $p$ that each element is greater than the current maximum, wouldn't you get a binomial distribution centered around $(n-1)p$ "successes", and an average efficiency of about $(n-1)(p)+2(n-1)(1-p)$? Where does the $\ln(n)$ come in? Is it from assuming that you're less likely to find a new maximum as the current maximum grows, or something?
Please do what @LeeDavidChungLin said, even if this answer resolves your question. :)
Meanwhile, the answer to your last question is Yes...kinda.
There is no assumption that as you progress, Prob(current entry > current max) decreases.
Rather, the assumption is that the input array is randomly ordered to begin with, i.e. any of $n!$ permutations is equally likely.
Under this assumption, when you're examining the $k$th entry, Prob($k$th entry > current max) $= 1/k$: any permutation of the initial $k$-long subsequence is equally likely, so each of those $k$ entries, in particular the $k$th, is equally likely to be the largest so far.
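The $1/k$ claim can be checked by brute force over all orderings of a small array. The snippet below is a sketch under the same random-permutation assumption, using the distinct values $0, \dots, n-1$ as a stand-in for arbitrary distinct entries:

```python
from fractions import Fraction
from itertools import permutations

def prob_kth_is_running_max(n, k):
    """Exact Prob(kth entry is the largest of the first k entries),
    taken over all n! equally likely orderings of n distinct values."""
    hits = total = 0
    for perm in permutations(range(n)):
        total += 1
        if perm[k - 1] == max(perm[:k]):
            hits += 1
    return Fraction(hits, total)

# For every position k, the probability is exactly 1/k:
for k in range(1, 5):
    assert prob_kth_is_running_max(4, k) == Fraction(1, k)
```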
In other words, the decrease in probability is not an assumption by itself, but rather, a consequence of the (very reasonable) "random permutation" assumption.
Finally, $\ln n$ comes from the standard approximation of the harmonic number $H_n = \sum^n_{k=1} {1\over k} \approx \ln n$.
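Putting the pieces together under the random-permutation assumption: each of the $n-1$ elements after the first always costs one comparison (against the max), and costs a second one (against the min) exactly when it is not a new maximum, which for the $k$th entry happens with probability $1 - 1/k$. So the expected total is

$$(n-1) + \sum_{k=2}^n \left(1 - \frac{1}{k}\right) = (n-1) + (n-1) - (H_n - 1) = 2n - 1 - H_n \approx 2n - \ln n - 1,$$

which is exactly the figure from your material.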