I am interested in (roughly estimating) how many random integers do I have to check before I find a prime of size say $100$ digits.
I was thinking like so:
Let the number of primes up to $X$ be denoted by $\pi(X)$. The famous Prime Number Theorem says that $$ \lim_{X\to\infty}\frac{\pi(X)}{X/\ln(X)}=1, $$ so, heuristically, a randomly chosen number $N$ has probability $$ P(N \text{ is prime})\approx\frac{1}{\ln(N)} $$ of being prime.
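As a quick numerical sanity check of the $1/\ln(N)$ density (a sketch, not part of the argument above; the window $[900000, 1000000)$ is an arbitrary choice of mine), a simple sieve of Eratosthenes gives:

```python
import math

def count_primes(lo, hi):
    """Count primes in the half-open interval [lo, hi) with a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * hi
    is_prime[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, math.isqrt(hi) + 1):
        if is_prime[p]:
            # Cross out all multiples of p starting at p*p.
            is_prime[p * p :: p] = bytearray(len(range(p * p, hi, p)))
    return sum(is_prime[lo:hi])

# Empirical prime density just below 10^6 vs the PNT prediction 1/ln(10^6).
count = count_primes(900_000, 1_000_000)
density = count / 100_000
predicted = 1 / math.log(10 ** 6)  # about 0.0724
```

The measured density comes out very close to the predicted $1/\ln(10^6)\approx 0.072$, which is exactly the heuristic used next.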
Now I need numbers of size $10^{100}$, which gives $$ \frac{1}{\ln(10^{100})}=\frac{1}{100}\cdot \frac{1}{\ln(10)}\approx 0,02305, $$
so the chance of picking a prime is roughly $2,3\%$, and therefore I expect (in theory) to check at least $50$ numbers before I hit a prime.
Is my reasoning correct?
Thank you!
Edit:
I made a silly calculator mistake; the correct value is $$ \frac{1}{230}\approx 0,00434, $$ so I need to check a bit over $200$ numbers, say $250$ (the expected number of trials is the reciprocal, $\ln(10^{100})\approx 230$).
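The $\approx 230$ estimate can be checked empirically. Here is a sketch in Python (the function names, the choice of $20$ Miller–Rabin rounds, and the $15$ repetitions are my own choices, not from the discussion above):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test with random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def trials_to_find_prime(digits=100):
    """Count how many uniform random `digits`-digit integers are tested before one is prime."""
    trials = 0
    while True:
        trials += 1
        n = random.randrange(10 ** (digits - 1), 10 ** digits)
        if is_probable_prime(n):
            return trials

# Average over several independent searches; the expected value is ln(10^100) ~ 230.
runs = [trials_to_find_prime() for _ in range(15)]
avg = sum(runs) / len(runs)
```

Restricting the search to odd candidates would roughly halve the expected count, to about $115$.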
There was a Polymath Project (see link here) to come up with an efficient deterministic algorithm for finding a prime larger than a given number $N$ (equivalently, a prime of at least $\lceil \log_2 N \rceil$ bits). Terence Tao led this effort, and a paper (see abstract here) was published at the end of the project.
See also Terence Tao's blog here for a discussion of the results of the project.
I quote from the introduction of the paper, which confirms your estimate that the randomized complexity is essentially $O(\log^{O(1)}N)$:
Note: The notation $\log^c N$ is used for $(\log N)^c$ here. The AKS (Agrawal–Kayal–Saxena) algorithm used in the paper is a deterministic primality test whose running time is polynomial in the number of digits of $N$: the original analysis gave roughly $\tilde{O}(\log^{12} N)$, the Lenstra–Pomerance variant improves this to $\tilde{O}(\log^{6} N)$, and conjecturally it runs in $\tilde{O}(\log^{3} N)$. If this algorithm states that a number is prime, the number is indeed prime; the test is not subject to error.
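AKS itself is nontrivial to implement well. As an illustrative contrast (this is not AKS), the Miller–Rabin test run with the fixed witness set $\{2,3,5,\dots,37\}$ is known to be deterministic and error-free for all $n < 3{,}317{,}044{,}064{,}679{,}887{,}385{,}961{,}981 \approx 3.3\cdot 10^{24}$, which covers every $64$-bit integer:

```python
def is_prime_det(n):
    """Deterministic primality test for n below ~3.3e24, via Miller-Rabin
    with the fixed witness set {2, 3, 5, ..., 37}.
    Note: this is NOT the AKS algorithm; it illustrates an error-free test
    for bounded inputs using a proven set of witnesses."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:  # fixed witnesses instead of random bases
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True
```

For example, `is_prime_det(2**61 - 1)` returns `True` (the Mersenne prime $M_{61}$), with no probability of error in that range.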