To preface, this question is coming from a software developer so it's written from that perspective.
If I need to generate a random number with $n$ digits, I could do it in one of two ways.
a. Ask a random number generator for a digit between $0$ and $9$, and concatenate the digits retrieved until I have the required $n$ digits
b. Ask a random number generator for a single number in the range $10^{n-1}$ to $10^n-1$, i.e.: for a $5$-digit number the range would be $10000-99999$
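For concreteness, here is a minimal Python sketch of the two options (the function names are mine, and `random.randint` stands in for whatever RNG is actually available):

```python
import random

def random_n_digit_concat(n, rng=random):
    # Option (a): draw n digits 0-9 independently and concatenate.
    # Leading zeros are possible, so results span 0 .. 10**n - 1.
    digits = [str(rng.randint(0, 9)) for _ in range(n)]
    return int("".join(digits))

def random_n_digit_range(n, rng=random):
    # Option (b): draw one integer from 10**(n-1) .. 10**n - 1,
    # so the first digit is never zero.
    return rng.randint(10**(n - 1), 10**n - 1)
```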
The obvious difference is that with option (a) I have a larger number space, because the first digit can be $0$. Barring that difference, is there anything statistically different between the randomness or the distribution of the numbers produced by these two methods? Is one preferred over the other?
Mathematically, there is no difference. For example, suppose that I have an RNG1 that gives me a number uniformly distributed on $00,01,02,\ldots,98,99$. Alternatively, I can design an RNG2 that gives me a random digit uniformly distributed on $0,1,\ldots,9$; I call RNG2 twice and concatenate the digits. Since the two calls are independent, every pair of digits is equally likely, so the resulting distribution is the same as with RNG1. In practice, however, RNGs on computers are pseudo-random, so the two methods might end up differing. This website may not be the right place to ask about that, though.
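The two-digit case can be checked exhaustively: enumerating every pair of digits RNG2 could produce shows each value $00$ through $99$ arises from exactly one pair, hence with equal probability $1/100$.

```python
from itertools import product
from collections import Counter

# Every (d1, d2) pair is equally likely (probability 1/100).
# Concatenating d1 and d2 gives the value d1*10 + d2, and each
# of 0..99 is hit by exactly one pair, so the result is uniform.
counts = Counter(d1 * 10 + d2 for d1, d2 in product(range(10), repeat=2))

assert sorted(counts) == list(range(100))   # all 100 values occur
assert set(counts.values()) == {1}          # each exactly once
```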