Suppose we have a random (not pseudo-random) binary sequence. For example, $Seq = \{0010110111010000101...\}$.
What are the approaches to:

1. Generate unique numbers from $0$ to $64$ based on this sequence. I am aware of the Fisher-Yates algorithm, but it is probably not efficient in a computer implementation. Also, its modern version, the Durstenfeld variant, probably disturbs the random binary sequence $Seq$, introducing a bias.
2. Generate arbitrary random numbers, without restrictions, based on this sequence $Seq$. I have heard that it is possible to take a hash of a part of the random sequence, $Seq_{part}$, but I have never seen such a method in scientific papers, nor recommendations on how many elements should be hashed, which hashing algorithm to use, or what the complexity of such an algorithm is.
I would like to see methods that solve these two problems, with a brief description and their computational complexity. It would also be good to see their mathematical justification, and perhaps links to useful articles.
By random sequence, I assume that the bits of the sequence are generated by independent fair coin flips or something equivalent.
Let's address your second point first: how to generate uniformly random integers between $0$ and $n-1$. If $n = 2^k$ is a power of two, it's easy: just take the first $k$ bits of your random sequence and interpret them as a binary number. If $n$ is not a power of two, pick the smallest $k$ with $2^k \ge n$, read $k$ bits to get a uniformly random integer between $0$ and $2^k-1$, and if the result is not less than $n$, discard those $k$ bits and try again (rejection sampling).
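As a sketch of this rejection method in Python (assuming a hypothetical `next_bit` callable that returns the next fair bit of $Seq$):

```python
def rand_below_rejection(n, next_bit):
    """Uniform integer in [0, n) via rejection sampling.

    next_bit is assumed to be a callable returning one fair random
    bit (0 or 1) from the sequence Seq.
    """
    k = (n - 1).bit_length()      # smallest k with 2**k >= n
    while True:
        v = 0
        for _ in range(k):        # read k bits as a binary number
            v = 2 * v + next_bit()
        if v < n:                 # accept;
            return v              # otherwise discard the k bits and retry
```

Each attempt consumes $k = \lceil \log_2 n \rceil$ bits and succeeds with probability $n/2^k > 1/2$, so the expected number of bits used is less than $2k$, i.e. $O(\log n)$ in expectation.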
There is also a slightly more complex approach that minimizes the expected number of bits used:
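One way to realize this (essentially Lumbroso's "Fast Dice Roller"; the variable names match the discussion below) can be sketched in Python, again assuming a hypothetical `next_bit` bit source:

```python
def rand_below(n, next_bit):
    """Uniform integer in [0, n), reusing the randomness of rejected draws.

    Invariant: v is uniformly distributed on {0, ..., m-1}.
    next_bit is assumed to return one fair random bit (0 or 1).
    """
    m, v = 1, 0
    while True:
        m, v = 2 * m, 2 * v + next_bit()  # v stays uniform on {0, ..., m-1}
        if m >= n:
            if v < n:
                return v                  # accept
            m, v = m - n, v - n           # reject, but keep the leftover:
                                          # v - n is uniform on {0, ..., m-n-1}
```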
To understand this algorithm, observe that $v$ is always a uniformly random value between $0$ and $m-1$. Essentially this algorithm reuses the "trim" from the last oversize random integer to generate a new sufficiently large random integer faster.
To prove this is optimal, it suffices to show that the algorithm terminates after reading at most $k$ random bits with probability $n/2^k \cdot \left\lfloor 2^k/n \right\rfloor$, and that no other algorithm can exceed this bound.
With the ability to generate uniformly random integers in any range, you should have no trouble constructing any other random objects, including Fisher-Yates or variants.
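For instance, a Fisher-Yates shuffle needs nothing beyond such a uniform sampler. A minimal sketch, assuming a `rand_below(n)` function built on top of the bit sequence:

```python
def fisher_yates(items, rand_below):
    """In-place Fisher-Yates shuffle.

    rand_below is assumed to be any function mapping n to a uniform
    integer in [0, n), e.g. one built from the random bit sequence.
    """
    for i in range(len(items) - 1, 0, -1):
        j = rand_below(i + 1)                    # uniform position in [0, i]
        items[i], items[j] = items[j], items[i]  # swap into final place
    return items
```

Its running time is $n-1$ calls to `rand_below`, so $O(n \log n)$ expected bit reads for a list of length $n$.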
Side note: In case your bit sequence is biased and only generates $1$s with some constant probability $0<p<1$, but the bits are still independent, here's a neat trick to construct an unbiased independent random bit sequence from it: from your biased sequence, cut off the first two bits. If they're $01$, write down a $0$. If they're $10$, write down a $1$. If they're $00$ or $11$, write down nothing. Repeat forever.
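This is the classic von Neumann extractor; a minimal generator sketch in Python:

```python
def unbias(bits):
    """Von Neumann extractor: independent biased bits -> fair bits.

    bits is assumed to be an iterable of 0/1 values, each independently
    equal to 1 with some fixed probability 0 < p < 1.
    """
    it = iter(bits)
    for a, b in zip(it, it):   # consume the stream two bits at a time
        if a != b:             # 01 -> 0, 10 -> 1; 00 and 11 are dropped
            yield a
```

It works because the accepted pairs $01$ and $10$ each occur with probability $p(1-p)$, so the output bits are unbiased; on average $1/(p(1-p))$ input bits are consumed per output bit.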