I am given a starting value of 100. Each period, a percentage change is applied to the current value, drawn with a mean of 9% and a standard deviation of 16%.
I need to create a probability table for all outcomes after 1, 2, ..., 10 periods.
To build the table for the first period, I used a normal distribution to assign a probability to each outcome in increments of 1%.
For the second period, the only approach I could come up with was: for each outcome in the first table, build a new probability table of all potential outcomes, multiply each probability in that new table by the probability of the first-period outcome it came from, group the results by final outcome, and sum the probabilities within each group.
I believe this could work (please correct me if I am wrong; I am new to statistics), but it seems like there should be a much more efficient method.
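To make the grouping approach concrete, here is a minimal sketch of the two-period calculation as I understand it. The truncation range, the rounding of outcomes to cents, and the use of numpy/scipy are my own assumptions, not part of the original setup.

```python
import numpy as np
from scipy.stats import norm
from collections import defaultdict

mean, sd = 0.09, 0.16
start = 100.0

# Discretize the one-period % change in 1% steps.
# The -60%..+90% range is an assumption (several sd either side of the mean).
changes = np.arange(-0.60, 0.905, 0.01)
probs = norm.pdf(changes, mean, sd)
probs /= probs.sum()  # renormalize after truncating the tails

# Period 1 table: outcome value -> probability (rounded to cents)
table1 = {round(start * (1 + c), 2): p for c, p in zip(changes, probs)}

# Period 2: apply every possible change to every period-1 outcome,
# weight by both probabilities, and group identical final outcomes.
table2 = defaultdict(float)
for v1, p1 in table1.items():
    for c, p in zip(changes, probs):
        table2[round(v1 * (1 + c), 2)] += p1 * p
```

Repeating the same convolution step on `table2` would give period 3, and so on, though the number of distinct outcomes grows with each period, which is the inefficiency being asked about.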
Since I assume the distribution at the start of the second period would also be normal, centered around initial_value*(1+mean_change), I wondered whether I could simply take initial_value*(1+mean_change) as the starting point for period 2 and modify the standard deviation to account for two periods, but I can't seem to figure out a method that makes sense.
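As a quick empirical check of this idea, here is a Monte Carlo sketch (my own, assuming independent normal percentage changes each period) of the two-period distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
mean, sd = 0.09, 0.16
start = 100.0
n = 1_000_000

# Two independent normal % changes, compounded
r1 = rng.normal(mean, sd, n)
r2 = rng.normal(mean, sd, n)
values = start * (1 + r1) * (1 + r2)

print(values.mean())  # close to 100 * 1.09**2 = 118.81
print(values.std())   # close to 24.8
```

The empirical standard deviation comes out near 24.8, which does not match a naive scaling of 16% by sqrt(2) (about 22.6 on a base of 100), so the "just adjust the sd" idea seems to need more than the usual square-root-of-time rule when the changes compound multiplicatively.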
If anyone knows the best way to calculate this, please let me know, and if there are any problems with the logic in my first attempts, please correct my thinking.