When measuring real (physical) events, a random variable can assume only discrete values.
Is the use of continuous random variables a mathematical trick, because it's easier to work with real numbers?
If so, could you explain this in more detail?
I agree, broadly, with the statement, but I also think it rather epically misses the point. It is arguably true that all data we measure in the real world is discrete, and that using models with, say, continuous space and time variables is a choice made out of convenience. The reason I say the statement misses the point is that this has nothing specifically to do with probability.
We simply don’t know whether space and time are “really” continuous, or discrete but so fine that they appear continuous from a macroscopic perspective. There are good reasons from physics to believe that they aren’t continuous microscopically, but I don’t think that matters for the sake of this discussion. All that matters is that continuous space and time are very good models for the macroscopic world.
Then measurement is another layer of abstraction that can introduce discreteness. Again, the operative question is simply how well the discreteness introduced can be approximated as continuous.
You asked about advantages of using continuous approaches, and yes, there are some. For one, you don’t have to worry about the details of the microscopic elements (the shape of the lattice, etc.), which often cuts down on complexity. Also, quite importantly, calculus becomes available in the continuous case; compare the ease of computing an integral to that of computing a sum. Finally, in the case of probability, taking a continuum view often simplifies the model. For instance, sometimes the central limit theorem will help you.
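To make the last point concrete, here is a minimal sketch (not from the original answer; the function names `binom_cdf` and `normal_cdf` and the chosen parameters are mine) of the central limit theorem replacing a discrete sum with a continuous formula: the exact binomial probability P(X ≤ k) requires summing many terms, while the normal approximation is a single evaluation of the error function.

```python
import math

def binom_cdf(n, p, k):
    """Exact P(X <= k) for X ~ Binomial(n, p), via a discrete sum of k+1 terms."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    """P(Z <= x) for Z ~ Normal(mu, sigma^2), via one call to the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

n, p, k = 1000, 0.5, 520
exact = binom_cdf(n, p, k)
# Continuity correction: the discrete value k covers the interval up to k + 0.5.
approx = normal_cdf(k + 0.5, n * p, math.sqrt(n * p * (1 - p)))
print(f"exact binomial sum:   {exact:.6f}")
print(f"normal approximation: {approx:.6f}")
```

The two numbers agree to about three decimal places here, which illustrates why one often swaps the discrete model for the continuous one: the approximation error is negligible, and the continuous formula is far easier to manipulate analytically.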