This is the Guess-The-Number game with a twist!
Variant 1
Take any positive integer $n$.
The game-master chooses an $n$-bit integer $x$.
The player makes queries one by one, each of the form "Is $x$ (strictly) less than $k$?".
The game-master answers each query immediately, always truthfully except at most once.
At the end the player must guess $x$ correctly to win.
What is the best strategy for the player to minimize the number of queries used in the worst case? Note that the player can choose each query based on the answers to all the preceding queries. I have an algorithm that takes $n + O(\sqrt{n})$ queries, but I don't know whether this is optimal. Note that we can assume that the game-master does not have to choose $x$ at the start, but only has to keep his answers consistent with at least one possible $x$.
Variant 2
Also, what is the best strategy if the game-master can answer falsely on a fraction $r$ of the queries, meaning that after the first $k$ queries the game-master has answered at most $rk$ of them falsely? Clearly if $r \ge \frac12$ then the player has no definite winning strategy iff $n>2$: the game-master can answer the first query truthfully by choosing $x$ to be in the larger half, and any winning strategy must then work even if the game-master tells the player two values and guarantees that $x$ is among them. In that case there is only one useful question, and the game-master simply answers such questions "yes" and "no" alternately (and answers all other questions truthfully), so the player can never tell which of the two values $x$ is. What if $r < \frac12$?
[Edit: I don't see why this should be downvoted, because I provided my own algorithm in an answer below, even though I don't know its optimality. If anyone can prove its optimality or can provide a better algorithm, please post an answer!]
Variant 1
We deal with the general case where $x \in \{1..m\}$ for any positive integer $m$, instead of just $x \in \{1..2^n\}$. Let $k$ be the minimum number of queries needed. If $m = 1$ then $k = 0$, so from now on we assume that $m \ge 2$, in which case it is easy to see that $k \ge 3$. $\def\floor#1{\left\lfloor #1\right\rfloor }$ $\def\ceil#1{\left\lceil #1\right\rceil }$
Lower bound
Consider the following strategy by the game-master: for each $i \in \{1..k\}$, let $S_i$ be the set of values of $x$ consistent with all the answers given so far under the scenario that the $i$-th answer is the (unique) false one, and let $S_0$ be the set of values consistent with all answers being true; on each query, give whichever answer leaves $\sum_{i=0}^k \#(S_i)$ at least half of its current value.
At the start, $\sum_{i=0}^k \#(S_i) = (k+1)m$. On each query, for each $i \in \{0..k\}$ let $T_i$ and $U_i$ be the resulting $S_i$ on answering "yes" and "no" respectively. Then $S_i$ is the disjoint union of $T_i$ and $U_i$, because each possibility remains under exactly one of the two answers. Thus $\sum_{i=0}^k \#(S_i)$ $= \sum_{i=0}^k \#(T_i) + \sum_{i=0}^k \#(U_i)$, and hence the game-master can always answer so that the resulting $\sum_{i=0}^k \#(S_i)$ is at least half its previous value.
At the end, the player is supposed to know $x$, and hence also know which query was falsely answered, if at all (any answer inconsistent with $x$ must be the false one). Thus only one of $S_{0..k}$ is non-empty, and it contains $x$ alone, hence $\sum_{i=0}^k \#(S_i) = 1$.
Therefore $2^k \ge (k+1)m$. Since $m \ge 2$ gives $k \ge 3$ and hence $k+1 \ge 4$, we get $2^k \ge 4m$, so $k \ge \log_2(4m) = \log_2(m)+2$ and hence: $$k \ge \ceil{\log_2(m)} + 2.$$
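For concreteness, the game-master's bookkeeping can be simulated directly. The following is a minimal sketch under my own conventions (values $x \in \{0..m-1\}$, queries "is $x < y$?", scenario index $i = 0$ meaning "no lie", and `makeGameMaster` is a name I made up):

```javascript
// Game-master for the lower-bound argument: S[i] is the set of values
// of x consistent with all answers so far under the scenario "answer
// number i is the (unique) false one"; S[0] is the all-truthful scenario.
function makeGameMaster(m, k) {
  const S = Array.from({ length: k + 1 },
    () => new Set(Array.from({ length: m }, (_, x) => x)));
  const total = () => S.reduce((n, s) => n + s.size, 0);
  let t = 0; // number of answers given so far
  function answer(y) { // player asks: "is x < y?"
    t++;
    // In scenario i the answer is the truth, negated exactly when this
    // very answer (number t) is the scheduled lie.
    const saysYes = (x, i) => (x < y) !== (t === i);
    let yes = 0;
    for (let i = 0; i <= k; i++)
      for (const x of S[i]) if (saysYes(x, i)) yes++;
    const ans = 2 * yes >= total(); // keep the larger half of scenarios
    for (let i = 0; i <= k; i++)
      for (const x of [...S[i]])
        if (saysYes(x, i) !== ans) S[i].delete(x);
    return ans;
  }
  return { answer, total };
}
```

Whatever the player asks, each answer keeps at least half of the $(k+1)m$ initial scenarios, so after $k$ queries at least $(k+1)m/2^k$ scenarios survive; since winning requires exactly one survivor, this forces $2^k \ge (k+1)m$.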
Note that the reasoning by Martin is invalid because he claims that the player needs to distinguish $(k+1)m$ possibilities, which is never true at any point during the game; from the player's point of view the number of possible scenarios is always at most $m$. However, the above shows that Martin's idea of how the player can find $x$ efficiently can be turned into a strategy for how the game-master can delay the player from finding $x$.
Note also that this strategy for the game-master may not be optimal! It is conceivable that choosing the answer that leaves the lower $\sum_{i=0}^k \#(S_i)$ might in some rare cases be better for the game-master, because the subsequent 'cuts' might be sufficiently uneven that the larger-sum branch ends up collapsing faster.
Optimal strategy
Note that in the optimal strategy, given any two preceding queries "$<a$" and "$<b$", if the answers are both "yes" then $x < \max(a,b)$, because at most one answer can be false; similarly, if the answers are both "no" then $x \ge \min(a,b)$. Since it is useless to make a query that 'cuts' outside the possible interval for $x$, the optimal strategy always maintains three adjacent intervals of lengths $p,q,r$ in that order, such that $x$ is in one of them, and is in the middle interval iff the queries bounding it were truthfully answered. Let $f(p,q,r)$ be the minimum number of queries needed to determine $x$ from such a state.
Then we immediately get the following recursive program (in Javascript with memoization):
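A sketch of such a program (my reconstruction of the recursion from the description that follows; the state is the triple $(p,q,r)$ of interval lengths, with "yes" on a cut $q = q_1 + q_2$ leading to $(p, q_1, q_2)$ and "no" to $(q_1, q_2, r)$):

```javascript
// f(p, q, r): minimum number of further queries needed, where the three
// adjacent intervals have lengths p (left), q (middle), r (right), and
// x is in the middle interval iff no lie has occurred yet.
const memo = new Map();

// ceil(log2(n)) for n >= 1, computed exactly via bit operations
const ceilLog2 = n => (n <= 1 ? 0 : 32 - Math.clz32(n - 1));

function f(p, q, r) {
  if (p + q + r <= 1) return 0;           // x is already determined
  if (q === 0) return ceilLog2(p + r);    // lie used up: plain binary search
  const key = p + "," + q + "," + r;
  if (memo.has(key)) return memo.get(key);
  let best = Infinity;
  // Only cuts touching the middle interval are useful; cutting the middle
  // into q1 + q2 gives state (p, q1, q2) on "yes" and (q1, q2, r) on "no".
  for (let q1 = 0; q1 <= q; q1++) {
    if (q1 === 0 && p === 0) continue; // would repeat a settled query uselessly
    if (q1 === q && r === 0) continue; // likewise on the other side
    const q2 = q - q1;
    best = Math.min(best, 1 + Math.max(f(p, q1, q2), f(q1, q2, r)));
  }
  memo.set(key, best);
  return best;
}
```

For instance, this gives $f(0,1,0) = 0$, $f(0,2,0) = 3$ (matching the claim that $k \ge 3$ for $m \ge 2$), and $f(0,3,0) = 5$.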
Briefly, the base cases are obvious: either there is only one possible value for $x$, or the middle interval is empty, meaning that the lie has been used up and we can perform ordinary binary search on the remaining values. The transition cases reflect that it is useless to make a query that 'cuts' outside the middle interval (this requires some thought!), and also useless to make the same query a third time if the first two were answered consistently.
Of course, $k = f(0,m,0)$, and it takes $O(M^4)$ time to compute $k$ for all $m \in \{1..M\}$, so I ran it for $M = 100$:
For all these, $k$ is at most $1$ more than the lower bound, so we conjecture that $k$ never exceeds the lower bound by more than $1$, for every $m$.
But how to prove it? I tried but couldn't figure out a way. Can anyone prove or disprove it?
A simpler strategy that might be optimal
The player can (given $k$) follow the following strategy: keep track of the sets $S_i$ from the lower-bound argument, and on each query choose the 'cut' $y$ that makes the total count for the answer "yes" as close as possible to half of $\sum_{i=0}^k \#(S_i)$.
It is not clear whether this is the optimal strategy, though it is likely to be so. Martin claims that it is the best, but this is a classic mistake of assuming that the greedy solution is globally optimal: how do we know that a slightly less even 'cut' at one step cannot guarantee easier 'cuts' later? But if we knew that the game-master's strategy above is optimal, then the optimality of this strategy would follow, because the resulting possibilities are monotonic in the 'cut' location when the answer to the query is fixed.
Let $c_t$ be $\sum_{i=0}^k \#(S_i)$ after step $t$, so that $c_0 = (k+1)m$ is the initial value at the start of the game.
At step $t \in \{1..k\}$, choosing $y = 0$ will make $\sum_{i=0}^k \#(T_i) = 0$, while choosing $y = m$ will make $\sum_{i=0}^k \#(T_i) = \sum_{i=0}^k \#(S_i)$. Also, increasing $y$ by $1$ will make $\sum_{i=0}^k \#(T_i)$ increase by some number in the range $[0,k-t+2]$, because $S_i,S_j$ are disjoint for any $i,j \in \{0..t-1\}$, while $S_t,\dots,S_k$ all still equal the set of values consistent with every answer so far being true. Thus we can always choose $y$ such that $\sum_{i=0}^k \#(T_i)$ is within $\frac{k-t+2}{2}$ of $\frac12 c_{t-1}$, which implies that $c_t \le \floor{ \frac12 c_{t-1} + \frac{k-t+2}{2} }$.
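This bookkeeping is easy to simulate. The sketch below (my own names and conventions: $x \in \{0..m-1\}$, cuts $y \in \{0..m\}$) plays the greedy-halving player against the counting game-master from the lower-bound argument and records $c_0, \dots, c_k$:

```javascript
// Greedy-halving player vs. the scenario-counting game-master.
function simulate(m, k) {
  // S[i]: values of x consistent with all answers so far under the
  // scenario "answer number i is the lie" (S[0]: no lie at all).
  const S = Array.from({ length: k + 1 },
    () => new Set(Array.from({ length: m }, (_, x) => x)));
  const total = () => S.reduce((n, s) => n + s.size, 0);
  const counts = [total()]; // c_0 = (k + 1) * m
  for (let t = 1; t <= k; t++) {
    // Scenario i's answer to "is x < y?" at step t: the truth, negated
    // exactly when this answer is the scheduled lie (t === i).
    const saysYes = (x, i, y) => (x < y) !== (t === i);
    const yesTotal = y => {
      let n = 0;
      for (let i = 0; i <= k; i++)
        for (const x of S[i]) if (saysYes(x, i, y)) n++;
      return n;
    };
    // Player: pick the cut whose "yes" total is closest to half of c_{t-1}.
    let y = 0, bestDist = Infinity;
    for (let cut = 0; cut <= m; cut++) {
      const d = Math.abs(2 * yesTotal(cut) - total());
      if (d < bestDist) { bestDist = d; y = cut; }
    }
    // Game-master: answer so as to keep the larger half of the scenarios.
    const ans = 2 * yesTotal(y) >= total();
    for (let i = 0; i <= k; i++)
      for (const x of [...S[i]])
        if (saysYes(x, i, y) !== ans) S[i].delete(x);
    counts.push(total());
  }
  return counts; // [c_0, c_1, ..., c_k]
}
```

In such runs the recorded $c_t$ do respect the recurrence $c_t \le \floor{ \frac12 c_{t-1} + \frac{k-t+2}{2} }$, but, as explained next, the recurrence alone does not push $c_k$ down to $1$.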
This is the tightest recurrence I can get, but it is still not good enough! If I didn't make a mistake, solving the recurrence gives an upper bound on $c_k$ that is always greater than $3$, which means that we cannot guarantee analytically that $k$ queries following this strategy are enough, no matter how big $k$ is! I know this seems ridiculous, but I don't see a way to tighten the bounds.
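One can also see this numerically with a quick sketch (`boundAfterK` is a name I made up): iterating the recurrence from $c_0 = (k+1)m$, the guaranteed bound on $c_k$ settles around $3$ or $4$ instead of approaching the value $1$ that winning requires.

```javascript
// Iterate the bound c_t <= floor(c_{t-1}/2 + (k-t+2)/2), starting from
// c_0 = (k+1)*m, to see where the guaranteed upper bound on c_k ends up.
function boundAfterK(m, k) {
  let c = (k + 1) * m;
  for (let t = 1; t <= k; t++) c = Math.floor(c / 2 + (k - t + 2) / 2);
  return c;
}
```

For example, $m = 2, k = 3$ gives a final bound of $3$, and $m = 100, k = 10$ gives $4$; increasing $k$ does not bring the bound down to $1$, because the additive terms $\frac{k-t+2}{2}$ sum to about $3$ after the halvings.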
Therefore, since we cannot prove the optimality of the game-master's strategy or the success of this player's strategy, even implementing this algorithm as it is is useless, because we do not know whether the answer with the lower $c_t$ will be better for the game-master or not, and hence we end up having to perform an exponential search (whether pruned or not) to ensure correctness.
Moreover, we don't know $k$ in advance, so we can't even run this algorithm unless we first find $k$ by some other means (such as the optimal strategy above).
Variant 2
The lower bound extends to this variant by a similar argument. However the optimal strategy does not. The simpler strategy does, but it is even less clear whether it is optimal, and again we still face the problem of finding $k$.