For example, let's say the problem is: What is the square root of 3 (to x bits of precision)?
One way to solve this is to repeatedly choose a random real number less than 3, square it, and check whether the result is 3 to the desired precision.
1.40245^2 = 1.9668660025
2.69362^2 = 7.2555887044
...
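The guessing procedure above can be sketched as follows; the function name and the convention of turning "x bits of precision" into a tolerance of 2^-x are my own:

```python
import random

def sqrt_by_guessing(target, bits, max_tries=10_000_000):
    """Guess-and-check: draw random reals in [0, target) until one
    squares to the target within the requested precision."""
    tol = 2.0 ** -bits
    for _ in range(max_tries):
        guess = random.uniform(0, target)
        if abs(guess * guess - target) < tol:
            return guess
    return None  # gave up before a lucky guess landed

# e.g. sqrt_by_guessing(3, 10) returns some r with |r^2 - 3| < 2^-10
```

Note how the expected number of tries blows up as the precision grows: each extra bit roughly halves the width of the "winning" interval.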
Of course, this is a very slow process. Newton-Raphson gives the solution much more quickly. My question is: Is there a problem for which this process is the optimal way to arrive at its solution?
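For contrast, a minimal Newton-Raphson sketch for the same problem (names and the stopping rule are illustrative, using the same 2^-bits tolerance as above):

```python
def sqrt_newton(target, bits, x0=1.0):
    """Newton-Raphson on f(x) = x^2 - target: iterate x <- (x + target/x)/2."""
    tol = 2.0 ** -bits
    x = x0
    while abs(x * x - target) >= tol:
        x = (x + target / x) / 2
    return x

# sqrt_newton(3, 40) converges in a handful of iterations,
# doubling the number of correct bits each step
```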
I should point out that information gained from one guess cannot be used in future guesses. In the square root example, this rules out biasing the next guess by the knowledge of whether the previous guess's square was less than or greater than 3.
There are certainly problems where a brute force search is quicker than trying to remember (or figure out) a smarter approach. Example: Does 5 have a cube root modulo 11?
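The brute-force search here is tiny — just test every residue class (the function name is mine):

```python
def cube_roots(a, m):
    """Brute force: return every x in 0..m-1 with x^3 ≡ a (mod m)."""
    return [x for x in range(m) if pow(x, 3, m) == a % m]

print(cube_roots(5, 11))  # → [3], since 3^3 = 27 ≡ 5 (mod 11)
```

Eleven cubings settle the question faster than recalling that cubing is a bijection mod 11 because gcd(3, 10) = 1.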
An example of a slightly different nature is this recent question where an exhaustive search of the (very small) solution space saves a lot of grief and uncertainty compared to attempting to perfect a "forward" argument.
A third example: NIST is currently running a competition to design a next-generation cryptographic hash function. One of several requirements for such a function is that it be practically impossible to find two inputs that map to the same output (a "collision"), so a collision found by any method automatically disqualifies a proposal. One of the entries was built on cellular automata, and its submitter no doubt thought that a good idea because there is no nice known way to run a general cellular automaton backwards.

The submission, however, fell within days to (what I think must have been) a simple guess-and-check attack: it turned out that there were two different one-byte inputs that hashed to the same value. Attempting to construct a complete theory that would allow one to derive a collision in an understanding-based way would have been much more difficult than just seeing where some aimless initial guessing takes you.
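A guess-and-check collision search of that flavor can be sketched as below. The hash here is a deliberately weak 4-bit toy of my own invention, standing in for the broken submission, which I do not have:

```python
def find_collision(h, inputs):
    """Guess-and-check: hash each input, remember digests seen so far,
    and stop at the first repeated digest."""
    seen = {}
    for x in inputs:
        d = h(x)
        if d in seen:
            return seen[d], x  # two distinct inputs, same digest
        seen[d] = x
    return None

def toy_hash(data):
    """Hypothetical, deliberately weak 4-bit hash (only 16 possible digests)."""
    state = 0
    for byte in data:
        state = (state * 7 + byte) % 16
    return state

# Aimlessly try all 256 one-byte inputs -- a collision must appear
# within the first 17 by pigeonhole, no theory of the hash required.
pair = find_collision(toy_hash, (bytes([i]) for i in range(256)))
print(pair)
```

With only 16 possible digests the search cannot run past 17 guesses; a real proposal with an n-bit output resists this attack only up to roughly 2^(n/2) guesses (the birthday bound), which is exactly why a collision among one-byte inputs was so damning.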