Questions
I was wondering what the probability of rolling a consecutive $1, 2, 3, 4, 5, 6$ on a die is?
For realism, is there any way to calculate an 'extra' factor, such as someone kicking the table (as in reality this does happen and does affect where the die will land)? Can this 'extra' probability also be calculated using math?
I'm assuming you mean a fair, six-sided die, although this can be generalized.
In general, the probability of multiple independent (read: unrelated) events happening is the product of the probabilities of each individual event. For example, the question "what is the probability of rolling a consecutive 1, 2, 3, 4, 5, and 6?" boils down to "what is the probability we roll a 1, and then what is the probability we roll a 2, and then what is the probability we roll a 3... and so on?"
Here, this means that this boils down to $P(1) \cdot P(2) \cdot P(3) \cdot P(4) \cdot P(5) \cdot P(6)$
On a six-sided die, the probability of rolling any individual number is 1/6. Thus, this is
$\frac{1}{6} \cdot \frac{1}{6} \cdot \frac{1}{6} \cdot \frac{1}{6} \cdot \frac{1}{6} \cdot \frac{1}{6} = \frac{1}{6^6} = \frac{1}{46656}$
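As a sanity check on the product rule, here is a small sketch that computes the exact product and then estimates the same probability by simulating many sequences of six fair rolls (the trial count and seed are arbitrary choices):

```python
import random
from fractions import Fraction

# Exact probability: six independent rolls, each with probability 1/6.
exact = Fraction(1, 6) ** 6
print(exact)  # 1/46656

# Monte Carlo check: simulate many sequences of six fair rolls and count
# how often we see exactly 1, 2, 3, 4, 5, 6 in order.
random.seed(0)
trials = 1_000_000
hits = sum(
    1 for _ in range(trials)
    if [random.randint(1, 6) for _ in range(6)] == [1, 2, 3, 4, 5, 6]
)
print(hits / trials)  # close to 1/46656
```

With a million trials the empirical frequency should land near $\frac{1}{46656} \approx 0.0000214$, though any single run will wobble around that value.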
If your extra event is kicking the table, however, there are some complications. If the kicking is designed to skew the probabilities of each die roll (say, someone "accidentally" bumps the table sometimes when a certain number is rolled), you have to deal with conditional probabilities (the probability of an outcome occurring given that some other outcome has happened).
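To make the conditional-probability idea concrete, here is a sketch using entirely made-up numbers: suppose the table is kicked 10% of the time, and a kick skews the die so a 6 comes up half the time instead of $\frac{1}{6}$. The law of total probability then gives the overall chance of a 6 by conditioning on whether the kick happened:

```python
from fractions import Fraction

# Hypothetical numbers for illustration only:
p_kick = Fraction(1, 10)              # P(table is kicked)
p_six_given_kick = Fraction(1, 2)     # P(roll a 6 | kicked)
p_six_given_no_kick = Fraction(1, 6)  # P(roll a 6 | not kicked)

# Law of total probability: weight each conditional probability by the
# probability of its condition and sum.
p_six = p_six_given_kick * p_kick + p_six_given_no_kick * (1 - p_kick)
print(p_six)  # 1/5
```

So in this hypothetical the "extra" factor raises $P(6)$ from $\frac{1}{6}$ to $\frac{1}{5}$; the hard part in reality is measuring those conditional probabilities, not the arithmetic.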
If you have a large number of measurements, you could (in most cases) appeal to the Law of Large Numbers, which says that as you increase your number of measurements, the observed average converges to the expected value (i.e., if you repeat this experiment tons of times, the proportion of times the outcome occurs approaches the "true" probability of the event). This is known as a frequentist approach, which assumes you know nothing ahead of time.

In practice, you would probably use what's known as a Bayesian approach, which assumes you have some prior expectation about the outcome (possibly a $\frac{1}{6}$ chance of a given face here) and combines that information with the results you actually observe to produce an estimate of the probability. Both approaches converge the more measurements you take, though.
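The two estimates above can be sketched side by side. This toy example (the prior parameters, roll count, and seed are all illustrative choices, not anything canonical) estimates the probability of rolling a 6 from simulated data: the frequentist estimate is just the observed proportion, while a simple Beta-Binomial Bayesian estimate blends a prior with mean $\frac{1}{6}$ into the observed counts:

```python
import random

# Simulate n rolls of a fair die and count the sixes.
random.seed(1)
n = 500
k = sum(1 for _ in range(n) if random.randint(1, 6) == 6)

# Frequentist estimate: the raw observed proportion (Law of Large Numbers
# says this converges to 1/6 as n grows).
freq_estimate = k / n

# Bayesian estimate: a Beta(1, 5) prior has mean 1/6, encoding the "fair
# die" expectation; the posterior mean shifts the raw proportion toward it.
a, b = 1, 5
bayes_estimate = (a + k) / (a + b + n)

print(freq_estimate, bayes_estimate)  # both near 1/6 for large n
```

For small $n$ the Bayesian estimate is pulled toward the prior's $\frac{1}{6}$; as $n$ grows, the data dominate and the two numbers agree, which is the convergence mentioned above.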
For a more comprehensive explanation of the Bayesian approach (including some sections aimed at readers with a background in discrete math), I go into more depth at https://dem1995.github.io/machine-learning/curriculum/probability/probability.html#independence-and-the-product-rule