Let's say you have some unknown quantity $$X\in [0,1]$$. We have N tries to guess the value of X: if you guess $$g_{i}\le X$$, then you capture value $$V_{i} = g_{i}$$, while if your guess is over the value of X, you get value 0. At the end of your N guesses you keep the value of your best guess, $$V = \max_{i} V_{i}$$. What strategy do we use to maximize $$E[V]$$?
If we assume the distribution of X is uniform, we can use binary search to get the best possible expected value. This can be shown via induction (a lot of boring computation). However, this won't be the best algorithm for every probability distribution - for example, consider a Bernoulli distribution.
What are the best methods for arbitrary distributions (assuming you know the distribution)? Is there a general method for determining the best algorithm to use? Monte Carlo techniques could point to an approximate solution - are there necessarily analytic solutions for the best algorithm? What classes of probability distributions admit an analytic solution?
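As a quick sanity check on the uniform/binary-search case, here is a small Monte Carlo sketch (the `binary_search_value` helper and the parameter choices are my own, purely for illustration): it plays the midpoint strategy against a uniform X and estimates E[V].

```python
import random

def binary_search_value(x, n):
    """Value captured by the midpoint (binary-search) strategy with n guesses
    against a fixed realisation x of X."""
    lo, hi, best = 0.0, 1.0, 0.0
    for _ in range(n):
        g = (lo + hi) / 2
        if g <= x:
            best = g      # guess succeeds: capture g and raise the floor
            lo = g
        else:
            hi = g        # guess overshoots: worth 0, lower the ceiling
    return best

random.seed(0)
N, trials = 4, 200_000
est = sum(binary_search_value(random.random(), N) for _ in range(trials)) / trials
# For uniform X, a short induction suggests E[V] = 1/2 - 2**-(N+1),
# so est should land near 0.46875 for N = 4.
```

The closed form $\tfrac12 - 2^{-(N+1)}$ quoted in the comment is my own induction claim for the midpoint strategy, not something asserted in the question.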
Your question is not so clear to me. I interpret it as meaning that after each guess you are told whether your guess is bigger or smaller than $X$. (Otherwise the problem is significantly easier, since the order of guesses does not matter if you are not told.)
Here is a method that avoids Monte Carlo, though it may still require some intensive computation, namely a maximisation at each step.
First I convert this problem to a Markovian one. I need some notation.
Let $t$ denote the number of guesses remaining, $x$ the biggest guess so far underestimating $X$, and $y$ the smallest guess so far exceeding $X$. Let $u(t,x,y)$ denote the expected value under the optimal strategy with $x$, $y$ being our best bounds on $X$ so far. I hope you can see that any guess which is smaller than $x$ or bigger than $y$ does not help us make a better prediction.
This means
$$E[V] = u(N,0,1)$$
Then I hope it is not difficult to see that
$$u(0,x,y)=x$$
Then what is $u(1,x,y)$? I need to make a guess $z$ between $x$ and $y$ such that if $z\leq X$ I get $z$, else I keep $x$. I now need to maximise the resulting expected value with respect to $z$. Explicitly, this is
$$u(0,z,y)\frac{P(z\leq X<y)}{P(x\leq X<y)}+u(0,x,z)\frac{P(x\leq X<z)}{P(x\leq X<y)}.$$
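For the uniform distribution, for instance, this one-step maximisation can be done in closed form (a quick check of my own, not something you need for the general method). Since $u(0,z,y)=z$ and $u(0,x,z)=x$, we maximise

$$f(z)=\frac{z(y-z)+x(z-x)}{y-x}$$

over $x<z<y$. Setting the derivative of the numerator, $y-2z+x$, to zero gives $z=\frac{x+y}{2}$, the midpoint, with value

$$u(1,x,y)=\frac{3x+y}{4}.$$

In particular $u(1,0,1)=\frac14$, matching the single-guess strategy of guessing $1/2$, and the midpoint rule is exactly the binary search from the question.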
If we repeat this argument, we see that
$$u(t,x,y)=\sup\limits_{x<z<y}\left[u(t-1,z,y)\frac{P(z\leq X<y)}{P(x\leq X<y)}+u(t-1,x,z)\frac{P(x\leq X<z)}{P(x\leq X<y)}\right]$$
So here is the method:
Calculate $u(1,x,y)$ for general $x,y$; from this calculate $u(2,x,y)$ for general $x,y$; from this calculate $u(3,x,y)$ for general $x,y$... You get the idea. The key is working backward.
This does not seem so bad, especially if you can find explicit expressions for the probabilities or if $X$ is a discrete random variable. You can verify the formula with your uniform distribution case... For the general case, you can discretise: calculate $u$ for $x,y$ on a fixed grid. This will only give you an approximation, but I am sure you can bound the errors if you work hard enough...
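A minimal sketch of that grid approach in Python, assuming $X$ is uniform on $[0,1]$ (the `prob` helper, the grid size, and the horizon are my own choices; swap in any distribution's probabilities in `prob`):

```python
# Backward induction for u(t, x, y) on a fixed grid.
# u_prev[i][j] approximates u(t, g[i], g[j]) after t rounds of the recursion.

M = 100                                  # grid resolution (arbitrary choice)
g = [i / M for i in range(M + 1)]

def prob(i, j):
    """P(g[i] <= X < g[j]) for uniform X; replace for other distributions."""
    return g[j] - g[i]

T = 2                                    # number of guesses
# Base case: u(0, x, y) = x, i.e. with no guesses left we keep the floor x.
u_prev = [[g[i]] * (M + 1) for i in range(M + 1)]

for t in range(1, T + 1):
    u_cur = [[g[i]] * (M + 1) for i in range(M + 1)]
    for i in range(M + 1):
        for j in range(i + 1, M + 1):
            best = g[i]                  # fall back to never guessing again
            for k in range(i + 1, j):    # candidate guesses z = g[k] in (x, y)
                val = (u_prev[k][j] * prob(k, j)
                       + u_prev[i][k] * prob(i, k)) / prob(i, j)
                best = max(best, val)
            u_cur[i][j] = best
    u_prev = u_cur

print(u_prev[0][M])                      # approximates E[V] = u(T, 0, 1)
```

For $T=2$ in the uniform case this comes out to $3/8$, consistent with guessing the midpoint at every step; the maximising $z$ on the grid is where you read off the optimal strategy, not just the value.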