Right now I am taking a course on financial math and we want to model the stock price.
For continuous compound interest we have the formula $B(t) = b_0 e^{rt}$ where $r$ denotes the interest rate and $b_0$ the initial principal.
To model the stock price, we assume it arises from a random error $\epsilon$ around a bond price with a different interest rate $\tilde{r}$.
This leads to the following plain model for the stock price: $S(t) = s_0 e^{\tilde{r}t + \epsilon}$
This model gets refined more and more over the course of the lecture (especially the properties of our error $\epsilon$), but there are two things I don't get:
- Where does this assumption come from that the stock price roughly behaves like a bond price?
- Why is it $S(t) = s_0 e^{\tilde{r}t + \epsilon}$ and not for example $S(t) = s_0 e^{\tilde{r}t} + \epsilon$?
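To see the difference between the two candidates concretely, I wrote a quick sketch (my own experiment, not from the lecture; the value of $\epsilon$ is just an illustrative large negative shock) comparing the error inside versus outside the exponent:

```python
import math

s0, r_tilde, t = 100.0, 0.05, 1.0

# A large negative shock (illustrative value, not from the lecture)
eps = -200.0

# Error inside the exponent: S(t) = s0 * exp(r~*t + eps)
s_mult = s0 * math.exp(r_tilde * t + eps)

# Error added outside: S(t) = s0 * exp(r~*t) + eps
s_add = s0 * math.exp(r_tilde * t) + eps

print(s_mult)  # tiny but still positive
print(s_add)   # negative: an impossible stock price
```

So at least one difference I can see myself: with the error inside the exponent the price stays positive no matter what $\epsilon$ is, while the additive version can go negative. But I'd still like to understand whether that is the actual reason for the choice.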