Population OLS coefficient in simple regression?

The population OLS coefficient for $X_i \in \mathbb{R}^d$, $Y_i \in \mathbb{R}$ in the model $Y_i = \beta'X_i + e_i$ is defined as $$ \beta = \mathbb{E}[X_iX_i']^{-1}\mathbb{E}[X_iY_i], $$ and if $X_i$ is a scalar random variable, the commonly shown formula for simple regression is $$ \beta = Var(X_i)^{-1}Cov(X_i,Y_i). $$ But from the more general first equation, shouldn't this be $$ \beta = \mathbb{E}[X_i^2]^{-1}\mathbb{E}[X_iY_i] $$ instead? Where is the variance/covariance coming from? (I.e., how do you derive that popular variance-covariance formula from the more general vectorized version?) I am clearly missing something, but I can't seem to see what is happening to the extra terms (since $Var(X_i) = \mathbb{E}[X_i^2] - \mathbb{E}[X_i]^2$, not just $\mathbb{E}[X_i^2]$). Is there something conceptually obvious that I am overlooking?
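For concreteness, here is a quick numerical check that the two formulas really do disagree in general (a minimal sketch using numpy; the data-generating process, seed, and coefficient values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Made-up example: X has a nonzero mean and Y includes an intercept,
# so E[X^2] != Var(X) and the two candidate formulas disagree.
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = 1.0 + 3.0 * x + rng.normal(size=n)

beta_moment = np.mean(x * y) / np.mean(x**2)                     # E[XY] / E[X^2]
beta_cov = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)  # Cov(X,Y) / Var(X)

print(beta_moment, beta_cov)  # approx 3.4 vs 3.0
```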
I think you cannot recover the variance/covariance formula here because you did not include an intercept term in your model.
The common formula for simple regression is derived from the model $Y = \beta_0 + \beta_1X + \varepsilon$. This can be rewritten as $Y = \beta^\top U + \varepsilon$, where $\beta = (\beta_0, \beta_1)^\top$ and $U = (1, X)^\top$. For simplicity, let's assume we observe $n$ pairs $(x_i,y_i)$ and collect them in the $n \times 2$ design matrix $\mathbf{U} = (1_n, X)$. The OLS estimation of $\beta$ yields $\hat{\beta} = (\mathbf{U}^\top \mathbf{U})^{-1}\mathbf{U}^\top Y$ and
\begin{align*}
\begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{pmatrix}
&= \begin{pmatrix} n & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2 \end{pmatrix}^{-1}
\begin{pmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{pmatrix} \\
&= \dfrac{1}{n\sum_{i=1}^n (x_i-\bar{x})^2}
\begin{pmatrix} \sum_{i=1}^n x_i^2 & -\sum_{i=1}^n x_i \\ -\sum_{i=1}^n x_i & n \end{pmatrix}
\begin{pmatrix} \sum_{i=1}^n y_i \\ \sum_{i=1}^n x_i y_i \end{pmatrix} \\
&= \begin{pmatrix} \bar{y} - \hat{\beta}_1\bar{x} \\ \dfrac{\sum_{i=1}^n (x_i-\bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2} \end{pmatrix}
= \begin{pmatrix} \bar{y} - \hat{\beta}_1\bar{x} \\ \dfrac{Cov(X,Y)}{Var(X)} \end{pmatrix} \ ,
\end{align*}
where $Cov(X,Y)$ and $Var(X)$ are the sample covariance and sample variance, respectively. We can also see that this holds in a population sense (assuming both $X, Y$ are random variables and $Cov(X,\varepsilon) = 0$):
\begin{align*}
Cov(X,Y) &= Cov(X,\beta_0 + \beta_1 X + \varepsilon) \\
&= 0 + \beta_1 Var(X) + 0 \\
\implies \beta_1 &= \dfrac{Cov(X,Y)}{Var(X)}.
\end{align*}
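To see the algebra above in action, here is a small numerical sketch (using numpy; the simulated data, seed, and true coefficients are my own invention) confirming that OLS with an intercept reproduces $Cov(X,Y)/Var(X)$ for the slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(loc=2.0, scale=1.5, size=n)
y = 1.0 + 3.0 * x + rng.normal(size=n)  # beta0 = 1, beta1 = 3

# OLS on the design matrix U = (1_n, X)
U = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(U, y, rcond=None)

# Closed-form slope: sample Cov(X,Y) / sample Var(X)
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)

print(beta_hat)                        # approx [1.0, 3.0]
print(np.isclose(beta_hat[1], slope))  # True: lstsq slope == Cov/Var
```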
In your case, you assume that $X$ is a scalar and your model is $Y = \beta_1 X + \varepsilon$, which differs from the classic simple regression model in that the intercept term is missing. In this case, the OLS estimation is
\begin{align*}
\hat{\beta}_1 &= (X^\top X)^{-1} X^\top Y = \dfrac{\sum_{i=1}^n x_iy_i}{\sum_{i=1}^n x_i^2},
\end{align*}
which is exactly the sample analogue of your $\mathbb{E}[X^2]^{-1}\mathbb{E}[XY]$ formula. In a population sense, if this no-intercept model is correctly specified with $Cov(X,\varepsilon) = 0$, we still have
\begin{align*}
Cov(X,Y) &= Cov(X,\beta_1 X + \varepsilon) = \beta_1 Var(X) \\
\implies \beta_1 &= \dfrac{Cov(X,Y)}{Var(X)},
\end{align*}
so the two expressions coincide under the model assumptions, even though they differ for an arbitrary pair $(X,Y)$.
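And the same kind of check for the no-intercept case (again a hedged sketch with made-up simulation parameters): when the data really come from $Y = \beta_1 X + \varepsilon$ with $\varepsilon$ independent of $X$, both formulas recover the same slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(loc=2.0, scale=1.0, size=n)

# No-intercept model Y = 3 X + eps with eps independent of X,
# so Cov(X, eps) = 0 and both formulas should recover beta1 = 3.
y = 3.0 * x + rng.normal(size=n)

b_moment = np.sum(x * y) / np.sum(x**2)                       # sum(x_i y_i) / sum(x_i^2)
b_cov = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)  # Cov(X,Y) / Var(X)

print(b_moment, b_cov)  # both approx 3.0: the formulas coincide under the model
```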