Given two sets of random numbers, is it possible to say that one set has a greater degree of randomness than the other? In other words, can one set of numbers be more random than the other?
EDIT:
Consider this situation: a hacker needs to know the target address where the heap/library/base of the executable is located. Once he knows that address, he can take advantage of it and compromise the system.
Previously, the location was fixed across all computers, so it was easy for hackers to attack them.
There are two pieces of software, S1 and S2, each of which generates a random number that determines where the heap/library/base of the executable is located. So now it is difficult for the hacker to predict the location.
Between S1 and S2, which one is better? Can we compare them based on the random numbers each one generates?
There are statistical tests for randomness. Some are powerful enough to distinguish, with high probability, a human-generated sequence of 100 heads and tails from 100 actual tosses of a coin. For example, the distribution of streak lengths tends to change radically if you reverse every other coin in the human-generated sequence, while it stays the same for the genuine coin flips. Some default random number generators shipped with compilers will pass simple tests for randomness while failing more subtle ones.
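To make the streak argument concrete, here is a minimal sketch in Python. The choice of language, the 70% alternation rate used to imitate a human-generated sequence, and the use of the longest streak as the test statistic are all my illustrative assumptions, not something stated above: it generates 100 fair coin tosses and an over-alternating "human-like" sequence, reverses every other symbol in each, and prints how the longest streak changes.

```python
import random

def streak_lengths(bits):
    """Return the lengths of all maximal runs of identical symbols."""
    lengths = []
    run = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            run += 1
        else:
            lengths.append(run)
            run = 1
    lengths.append(run)
    return lengths

def flip_every_other(bits):
    """Invert every second symbol (0 <-> 1), i.e. 'reverse every other coin'."""
    return [b ^ 1 if i % 2 else b for i, b in enumerate(bits)]

# 100 tosses of a fair coin (1 = heads, 0 = tails).
coin = [random.randint(0, 1) for _ in range(100)]

# Toy stand-in for a human-generated sequence: assume the human switches
# sides about 70% of the time, i.e. alternates more often than chance.
human = [random.randint(0, 1)]
for _ in range(99):
    human.append(human[-1] ^ 1 if random.random() < 0.7 else human[-1])

for name, seq in [("coin flips", coin), ("human-like", human)]:
    before = max(streak_lengths(seq))
    after = max(streak_lengths(flip_every_other(seq)))
    print(f"{name:11s} longest streak: {before} before, {after} after flipping every other symbol")
```

On a typical run the fair-coin sequence's longest streak stays in the same range before and after the flip (its distribution is unchanged), while the over-alternating sequence's longest streak jumps from roughly 2-3 to something much larger, which is exactly the signature a streak-based randomness test exploits.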
There are other possible (but I think less likely) interpretations of your question. If you meant something else, such as the deviations of a random variable from the mean, please clarify.