Why do dependent regression errors in a Multiple Linear Regression model (a violation of the assumptions of the MLR model) lead to underestimated standard errors and artificially small p-values? What is the connection between these quantities?
The p-value of a regression coefficient is computed from its t-statistic, the ratio of the estimated coefficient to its standard error (the uncertainty of the estimate).
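Concretely, for coefficient $j$ in a model with $n$ observations and $k$ estimated parameters, \begin{equation} t_j = \frac{b_j}{\widehat{\operatorname{se}}\left(b_j\right)}, \qquad p_j = 2\,P\left(\left|T_{n-k}\right| > \left|t_j\right|\right), \end{equation} where $T_{n-k}$ follows a $t$-distribution with $n-k$ degrees of freedom. Anything that shrinks the standard error inflates $|t_j|$ and therefore shrinks the p-value; that is the connection between the two quantities.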
If the errors are dependent, the OLS coefficient estimates are still unbiased, because the unbiasedness derivation does not use the covariance structure of the errors at all. However, the variance-covariance matrix of the errors is no longer diagonal, while the standard OLS formulas assume it is.
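To see the unbiasedness (and to set up the variance derivation below), write the estimator in terms of the errors: \begin{equation} b_{\text{OLS}} = \left(X^{\prime} X\right)^{-1} X^{\prime} y = \beta + \left(X^{\prime} X\right)^{-1} X^{\prime} \varepsilon, \end{equation} so $E\left[b_{\text{OLS}}\right] = \beta$ as long as $E[\varepsilon] = 0$, regardless of $E\left[\varepsilon \varepsilon^{\prime}\right]$.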
The sampling variance, by contrast, does depend on the error covariance $\Omega = E\left[\varepsilon \varepsilon^{\prime}\right]$. With fixed $X$ it should be calculated as: \begin{equation} \begin{aligned} V\left[b_{\text{OLS}}\right] & =E\left[\left(X^{\prime} X\right)^{-1} X^{\prime} \varepsilon \varepsilon^{\prime} X\left(X^{\prime} X\right)^{-1}\right] \\ & =\left(X^{\prime} X\right)^{-1} X^{\prime} E\left[\varepsilon \varepsilon^{\prime}\right] X\left(X^{\prime} X\right)^{-1} \\ & =\left(X^{\prime} X\right)^{-1} X^{\prime} \Omega X\left(X^{\prime} X\right)^{-1} \end{aligned} \end{equation}
OLS, however, applies the standard formula, which assumes \begin{equation} \Omega=\sigma^2 I, \end{equation} so that the reported variance collapses to \begin{equation} V\left[b_{\text{OLS}}\right] = \sigma^2\left(X^{\prime} X\right)^{-1}. \end{equation}
When the errors are positively correlated (the typical case, e.g. autocorrelated time-series errors combined with a slowly varying regressor), this reported quantity is smaller than the actual $V\left[b_{\text{OLS}}\right]$. The standard errors of the coefficients are thus artificially small, the t-statistics artificially large, and the p-values artificially small.
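Here is a minimal simulation sketch of the effect, assuming numpy and scipy are available (the trend regressor, $\rho = 0.8$, and the sample sizes are illustrative choices, not part of the original question): generate stationary AR(1) errors under a true null slope of zero, fit OLS, and compare the naive standard error with the empirical spread of the slope estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, rho = 100, 2000, 0.8            # sample size, replications, AR(1) coefficient
x = np.arange(n, dtype=float)            # slowly varying (trend) regressor
X = np.column_stack([np.ones(n), x])     # design matrix with intercept
XtX_inv = np.linalg.inv(X.T @ X)

slopes, naive_ses, pvals = [], [], []
for _ in range(reps):
    # Stationary AR(1) errors: eps_t = rho*eps_{t-1} + u_t, so E[eps eps'] is not diagonal.
    u = rng.normal(size=n)
    eps = np.empty(n)
    eps[0] = u[0] / np.sqrt(1 - rho**2)
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + u[t]
    y = eps                              # true model: y = 0*x + eps, so the null is true

    b = XtX_inv @ X.T @ y                # OLS estimates
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)         # naive sigma^2 estimate
    se = np.sqrt(s2 * XtX_inv[1, 1])     # naive standard error of the slope
    t_stat = b[1] / se
    pvals.append(2 * stats.t.sf(abs(t_stat), df=n - 2))
    slopes.append(b[1])
    naive_ses.append(se)

print("mean naive SE of slope:", np.mean(naive_ses))
print("empirical SD of slope: ", np.std(slopes))                   # noticeably larger
print("rejection rate at 5%:  ", np.mean(np.array(pvals) < 0.05))  # well above 0.05
```

With these settings the empirical spread of the slope estimates should exceed the average naive standard error by a large factor, and the nominal 5% test should reject the true null far more often than 5% of the time: exactly the artificially small p-values described above.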