I am trying to teach myself linear regression and I have run into a math problem.
Here is the question:
Linear regression in vector form is
$y = x\beta + \varepsilon$
Where
\begin{equation*} y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \end{equation*} \begin{equation*} x = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_n^T \end{pmatrix} \end{equation*} \begin{equation*} \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix} \end{equation*} \begin{equation*} \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix} \end{equation*}
Spread:
\begin{align*} \varepsilon & = y - \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix} \\ \\ & = p^{KO} - \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix} \begin{bmatrix} 1 \\ p^{PEP} \end{bmatrix} \\ \\ & = p^{KO} - \beta_0 - \beta_1 p^{PEP} \end{align*}
This is from one of my courses.
I know $\beta$ has only two components here because there are only two parameters. But why does it become $\begin{bmatrix} 1 \\ x \end{bmatrix}$, and where does the '1' come from?
Many thanks!
The $1$ is there for the intercept term: appending a constant $1$ to the feature vector lets the intercept $\beta_0$ be absorbed into the same matrix product as the slope.
$$\begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix} =\beta_0(1)+\beta_1x=\beta_0+\beta_1x$$
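To see this concretely, here is a minimal sketch in NumPy using made-up price data (the numbers and variable names `pep`/`ko` are purely illustrative, standing in for the $p^{PEP}$ and $p^{KO}$ series in the question). Prepending a column of ones to the design matrix is exactly the "where the 1 comes from" step: it makes $\beta_0$ multiply a constant $1$ in every row.

```python
import numpy as np

# Hypothetical prices (illustrative only): regress KO on PEP
pep = np.array([170.0, 172.5, 171.0, 173.5, 175.0])
ko = np.array([60.0, 61.0, 60.5, 61.5, 62.0])

# Design matrix X: a column of ones (for the intercept beta_0)
# next to the PEP prices (for the slope beta_1)
X = np.column_stack([np.ones_like(pep), pep])

# Ordinary least squares: minimize ||ko - X @ beta||^2
beta, *_ = np.linalg.lstsq(X, ko, rcond=None)
beta0, beta1 = beta

# The residual (the "spread") written two equivalent ways:
# as a matrix product, and expanded term by term as in the course notes
spread = ko - X @ beta
spread_expanded = ko - beta0 - beta1 * pep
print(np.allclose(spread, spread_expanded))
```

Running this prints `True`: the matrix form $y - x\beta$ and the expanded form $p^{KO} - \beta_0 - \beta_1 p^{PEP}$ give the same residuals, which is the whole point of packing the $1$ into the feature vector.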