Updating a Bayesian distribution after each observation


Imagine that the number of points scored by basketball player $i$ is normally distributed with mean $\mu_i$ and standard deviation $\sigma_i$. Now I am particularly interested in following a new player and, given that I have no other information about him, my prior distribution for the expected number of points he will score in his first game is $N(\mu_0, \sigma_0^2)$, where $\mu_0$ is the average points scored by players in their first game and $\sigma_0$ is the standard deviation of points scored by players in their first game. To make the example more concrete, let us assume $\mu_0=8$ and $\sigma_0=4$.

I observe his first game and he scores 12 points. Now what is my best estimate of his expected points in his next game?
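Here is my attempt at this one-game update using the conjugate normal-normal formulas, working in precisions (inverse variances). Note that I have to assume a known game-to-game standard deviation $\sigma$ for the likelihood, which the setup above doesn't pin down; I use $\sigma = 4$ purely for illustration:

```python
# Conjugate normal-normal update after observing one game.
mu0, sd0 = 8.0, 4.0   # prior from the question: N(8, 4^2)
sigma = 4.0           # ASSUMED known game-to-game sd (not given in the question)
x = 12.0              # points observed in the first game

prec0, prec_lik = 1 / sd0**2, 1 / sigma**2   # prior and likelihood precisions
prec_post = prec0 + prec_lik                 # precisions add
mu_post = (prec0 * mu0 + prec_lik * x) / prec_post  # precision-weighted mean
sd_post = prec_post ** -0.5

print(mu_post, sd_post)  # 10.0, ~2.83
```

With these numbers the posterior mean lands exactly halfway between prior and observation (10.0) because the assumed $\sigma$ equals $\sigma_0$, so the two sources of information get equal weight.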

I understand the basics of calculating the Bayesian posterior distribution given a sample of observations, but what if I want to continuously update my prior by adjusting for the player's most recent performance? Is there a robust way to add single observations to the sample and update my Bayesian expectation?
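To illustrate what I mean by adding single observations: with a conjugate normal prior (and a known observation sd, which I again assume to be $\sigma = 4$), each posterior can serve as the prior for the next game, and updating one game at a time should give the same answer as a single batch update over all the games. The game scores below are made up:

```python
def update(mu, sd, x, sigma):
    """One normal-normal conjugate update; returns the new (mean, sd)."""
    prior_prec, lik_prec = 1 / sd**2, 1 / sigma**2
    post_prec = prior_prec + lik_prec
    post_mu = (prior_prec * mu + lik_prec * x) / post_prec
    return post_mu, post_prec ** -0.5

sigma = 4.0                # ASSUMED known game-to-game sd
mu, sd = 8.0, 4.0          # prior from the question
xs = [12, 9, 15]           # hypothetical sequence of game scores
for x in xs:               # sequential: posterior becomes the next prior
    mu, sd = update(mu, sd, x, sigma)

# Batch update over all three games at once gives the identical posterior.
batch_prec = 1 / 4.0**2 + len(xs) / sigma**2
batch_mu = (8.0 / 4.0**2 + sum(xs) / sigma**2) / batch_prec

print(mu, batch_mu)        # both 11.0
```

The sequential and batch results agreeing is exactly the "robustness" I am hoping for, if I understand the conjugacy argument correctly.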

Also, I expect that as I observe more games, each subsequent performance should have a smaller effect on the updated estimate of his expected points. Is this true? Is there a way of mathematically representing this?
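My intuition for why this should hold: in the conjugate setup the posterior mean after $n$ games is $\left(\mu_0/\sigma_0^2 + \sum_i x_i/\sigma^2\right) / \left(1/\sigma_0^2 + n/\sigma^2\right)$, so the posterior precision grows by $1/\sigma^2$ per game and each new score should move the mean by less. A quick numerical check (same assumed $\sigma = 4$, feeding in the same score of 12 repeatedly so only the sample size changes):

```python
sigma = 4.0            # ASSUMED known game-to-game sd
mu, sd = 8.0, 4.0      # prior from the question
shifts = []            # how far each new observation moves the mean
for game in range(4):
    prior_prec, lik_prec = 1 / sd**2, 1 / sigma**2
    post_prec = prior_prec + lik_prec
    new_mu = (prior_prec * mu + lik_prec * 12.0) / post_prec
    shifts.append(abs(new_mu - mu))
    mu, sd = new_mu, post_prec ** -0.5

print(shifts)  # [2.0, 0.666..., 0.333..., 0.2] -- strictly shrinking
```

If this is right, the shrinking shifts confirm that later games carry less weight, but I would like the general mathematical statement confirmed.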