I'm taking a master's course on linear regression right now, and the prof wrote: "A property of OLS estimators is that they are unbiased: $$\mathbb{E}(\hat{\beta})=\mathbb{E}[(X^TX)^{-1}X^Ty]$$ $$=\mathbb{E}[(X^TX)^{-1}X^T(X\beta+\epsilon)]$$ $$=\beta$$
Here $X$ is the matrix of inputs to the regression (it is a matrix because the inputs can be multidimensional), $y$ is the vector of observed response values, $\beta$ is the vector of parameters of the linear function, and $\epsilon$ is the error (residual) between $X\beta$ and $y$.
I feel like there was a bit of a jump there: how did they get from the second line to the last line? How did the expectation of all that suddenly vanish, leaving just $\beta$?
\begin{align*} \mathbb{E}[(X^TX)^{-1}X^T(X\beta + \epsilon)] &= (X^TX)^{-1}(X^TX)\beta+(X^TX)^{-1}X^T\mathbb{E}[\epsilon] \\ &= \beta + 0 \\ &= \beta \end{align*} The first equality follows from linearity of expectation, together with two facts: $X$ is treated as fixed (non-random), so everything involving only $X$ can be pulled out of the expectation, and $\beta$ is a constant vector, so $\mathbb{E}[\beta] = \beta$. In the second equality, $(X^TX)^{-1}(X^TX) = I$ and we use the model assumption that the expected value of $\epsilon$ is $0$.
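You can also see unbiasedness empirically. Here is a minimal simulation sketch (the design matrix, true $\beta$, and replication count are all arbitrary choices for illustration): draw many noise vectors $\epsilon$ with mean zero, compute $\hat{\beta} = (X^TX)^{-1}X^Ty$ each time, and check that the average of the estimates lands near the true $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))        # fixed (non-random) design matrix
beta = np.array([1.0, -2.0, 0.5])  # true parameter vector (chosen arbitrarily)

estimates = []
for _ in range(2000):
    eps = rng.normal(size=n)       # noise with E[eps] = 0
    y = X @ beta + eps
    # OLS estimate: solve (X^T X) beta_hat = X^T y
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    estimates.append(beta_hat)

mean_hat = np.mean(estimates, axis=0)
print(mean_hat)  # averages close to the true beta
```

Any single $\hat{\beta}$ will differ from $\beta$ because of the noise; it is the *average over repeated samples* that matches $\beta$, which is exactly what unbiasedness claims.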