This is a programming question, but I'm interested in the math behind it so I think it's better asked here.
Say I have a list of items and I'm looking for the one item in the list that satisfies a particular condition. The correct item is in a random spot in the list, and there's exactly one correct item. The only way to find out whether an item matches the condition is to run it through a one-way function. Hence, the naive approach is to iterate over the entire list until I hit the correct item.
But there must be a better way. Assume I have to perform this search with a large number of lists.
If I try the items in the list in random order, will I find the correct item sooner on average than I would if I went from zero to max?
Can I exploit some kind of statistical effect to choose a better starting point or to bias my random search toward a particular range? (I had heard that random events tend to cluster rather than distribute evenly. So I was thinking there may be a greater chance of finding item Y in list 2 near where you found item X in list 1, even if the placement is always random.)
The expected number of iterations for a sequential search and a random search is the same. Let $T$ = location of the item, $N$ = number of items, and $i \in \{0,\dots,N-1\}$.

For the sequential search, $P_{\text{sequential}}(T=i) = \frac{1}{N}$, since each location is equally likely. For the random search (without replacement), to find the item on try $i$ you must fail to find it on the first $i$ tries and then succeed:
$$P_{\text{random}}(T=i) = \frac{1}{N-i}\prod_{j=0}^{i-1} \frac{N-j-1}{N-j} = \frac{1}{N}.$$
In both cases the expected number of tries is $\frac{N+1}{2}$, since you are just adding up the consecutive integers $1,\dots,N$ and dividing by $N$.
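To make this concrete, here is a quick Monte Carlo sketch (names and parameters are mine, not from the question) that searches a list of $N$ items both ways and averages the number of "one-way function" calls. Both averages come out near $\frac{N+1}{2}$:

```python
import random

def sequential_search(items, target):
    """Check items in order 0..N-1; return the number of checks made."""
    for tries, item in enumerate(items, start=1):
        if item == target:
            return tries

def random_search(items, target):
    """Check items in a uniformly random order, without replacement."""
    order = random.sample(range(len(items)), len(items))
    for tries, idx in enumerate(order, start=1):
        if items[idx] == target:
            return tries

N, trials = 100, 20000
seq_total = rand_total = 0
for _ in range(trials):
    pos = random.randrange(N)       # target placed uniformly at random
    items = [0] * N
    items[pos] = 1                  # mark the one "correct" item
    seq_total += sequential_search(items, 1)
    rand_total += random_search(items, 1)

# Both averages hover around (N + 1) / 2 = 50.5
print(seq_total / trials, rand_total / trials, (N + 1) / 2)
```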
You might want to look through this link for a bigger perspective as well.
As for the idea of conditioning: even if you know you haven't found the item yet, the expected-value calculation above already takes that into account. To do better than $\frac{N+1}{2}$, you need more information than a uniform distribution over locations.
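To illustrate the last point: if the target's location were *not* uniform, you could beat $\frac{N+1}{2}$ by checking the high-probability positions first. The sketch below assumes a hypothetical prior (my own invention, not from the question) where the target is twice as likely to be in the first half of the list:

```python
import random

# Hypothetical non-uniform prior: positions in the first half are twice
# as likely to hold the target as positions in the second half.
N, trials = 100, 20000
weights = [2 if i < N // 2 else 1 for i in range(N)]

# Informed order: check positions in descending prior probability
# (here, simply the first half before the second half).
order = sorted(range(N), key=lambda i: -weights[i])

informed_total = 0
for _ in range(trials):
    pos = random.choices(range(N), weights=weights)[0]
    informed_total += next(t for t, idx in enumerate(order, 1) if idx == pos)

# Noticeably below the uniform-case expectation (N + 1) / 2 = 50.5
print(informed_total / trials)
```

With this prior the expected number of tries works out to $\frac{2}{3}\cdot 25.5 + \frac{1}{3}\cdot 75.5 \approx 42.2$, so the informed ordering genuinely helps; with a uniform prior, no ordering helps.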