Why do we even attempt to predict random numbers when they are by definition, "random"?


I've been given the task of writing an algorithm that predicts the next value of the Mackey-Glass time series from its four past values using a neural network, so I want to understand how that makes sense.

I know random number generators are pseudo-random. But the Mackey-Glass time series and this answer on StackOverflow made me curious.

I know there are ways to predict data from the variation of related variables; that's sensible. But why do people try to predict a future value of something that's obviously random from previous values that were just as random? The way I see it, on a 2D graph there's a 50% chance the value goes up or down, so half the time you'd get a correct prediction even without using the previous data. I wouldn't even call it a prediction. It seems pointless to pretend to predict something like this, yet people actually theorize about it. Why?
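It may help to see that the Mackey-Glass series is deterministic chaos, not noise: past values really do constrain the next one. Below is a minimal sketch (the parameters beta=0.2, gamma=0.1, n=10, tau=17 and the unit-step Euler discretization are my assumptions, not from the question) showing that even a plain least-squares fit on 4 past values beats a "repeat the last value" baseline:

```python
import numpy as np

# Generate a discrete Mackey-Glass series via Euler integration
# (assumed standard parameters: beta=0.2, gamma=0.1, n=10, tau=17).
beta, gamma, n, tau = 0.2, 0.1, 10, 17
N = 2000
x = np.full(N, 1.2)  # constant history before the delay kicks in
for t in range(tau, N - 1):
    x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau] ** n) - gamma * x[t]
x = x[500:]  # drop the initial transient

# Build (4 past values) -> next value training pairs.
X = np.column_stack([x[i:len(x) - 4 + i] for i in range(4)])
y = x[4:]
A = np.column_stack([X, np.ones(len(X))])  # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w

mse_model = np.mean((pred - y) ** 2)
mse_naive = np.mean((X[:, -1] - y) ** 2)  # baseline: repeat the last value
# The fitted model beats the naive baseline in-sample, i.e. the four
# past values carry genuine predictive information.
print(mse_model, mse_naive)
```

A neural network plays the same role as the linear fit here, just with a richer function class to capture the nonlinearity.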



Best answer

There are many notions of "random." We all agree that random numbers should be equally distributed over some range, unless you ask for some other distribution. That does not preclude correlations between successive numbers from a generator. I might create a generator that produces values in the range $[0,1)$ by $a_{n+1}=a_n+\pi \pmod 1$. Over the long run they will be nicely distributed, but they would fail a test for correlation between two successive numbers. The question implies that using the last four values you can predict the next value better than chance. You are being asked to demonstrate this.
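The generator in this answer can be checked directly: its outputs look uniform, yet each one is exactly determined by its predecessor. A quick sketch (the starting seed 0.5 is an arbitrary choice):

```python
import math

# The answer's generator: a_{n+1} = a_n + pi (mod 1).
a = 0.5
vals = []
for _ in range(10000):
    a = (a + math.pi) % 1.0
    vals.append(a)

# Distribution test: the sample mean should be near 0.5 for uniform [0,1).
mean = sum(vals) / len(vals)

# Prediction test: knowing one value pins down the next almost exactly
# (up to floating-point rounding and mod-1 wraparound).
frac = math.pi % 1.0
def circ_dist(u, v):
    d = abs(u - v)
    return min(d, 1.0 - d)
pred_err = max(circ_dist((v + frac) % 1.0, w) for v, w in zip(vals, vals[1:]))

print(mean, pred_err)  # mean near 0.5, prediction error essentially zero
```

So the sequence passes a crude distribution test while being perfectly predictable, which is exactly the distinction the answer is drawing.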

Another answer

If you can reduce the uncertainty about the next number by a factor of 8, you have effectively predicted its first three bits. So someone sending speech over the net doesn't have to transmit those three bits, which reduces the bandwidth needed.
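The arithmetic behind this answer is just base-2 logarithms: an 8-fold reduction in the number of equally likely candidates saves log2(8) = 3 bits per sample. A tiny sketch (the 8-bit sample size is my illustrative assumption):

```python
import math

# An 8-bit sample has 256 equally likely values = 8 bits of uncertainty.
K = 256
# A predictor that narrows it by a factor of 8 leaves 32 candidates.
reduced = K // 8
# Bits saved per sample = log2(K) - log2(K/8) = log2(8).
saved = math.log2(K) - math.log2(reduced)
print(saved)  # 3.0 bits per sample no longer need to be sent
```

This is the usual information-theoretic accounting: predictability is exactly what compression exploits.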