This is one of the exercises in "Introduction to Multiple Linear Regression". Consider the usual linear regression model \begin{equation} y = \beta_0+ \beta_1 x +\epsilon \end{equation} where $\beta_0$ is known.
It can be shown that the length of the confidence interval for $\beta_1$ with known intercept is $$L_1 = 2t_{\alpha/2,n-1}\sqrt{\frac{MS_{Res1}}{\sum_{i=1}^nx_i^2}}$$ and the length of the confidence interval for $\beta_1$ with unknown intercept is $$L_2 = 2t_{\alpha/2,n-2}\sqrt{\frac{MS_{Res2}}{\sum_{i=1}^n(x_i-\bar{x})^2}}$$ where $MS_{Res1}$ and $MS_{Res2}$ are the residual mean squares of the models with known and unknown $\beta_0$, respectively.
Is the $100(1-\alpha)\%$ confidence interval for $\beta_1$ with known intercept narrower than the corresponding interval for the case where both slope and intercept are unknown?
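To get intuition, the two interval lengths can be compared numerically. The sketch below (my own illustration, not part of the exercise; data, seed, and parameter values are arbitrary) simulates one data set, computes $L_1$ treating $\beta_0$ as known, and computes $L_2$ from the ordinary least-squares fit with both coefficients unknown. Since $\sum x_i^2 = \sum(x_i-\bar{x})^2 + n\bar{x}^2 \ge \sum(x_i-\bar{x})^2$, one typically observes $L_1 \le L_2$ when the known $\beta_0$ is correct:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha = 30, 0.05
beta0, beta1 = 2.0, 1.5              # true coefficients; beta0 is treated as known
x = rng.uniform(1, 10, n)
y = beta0 + beta1 * x + rng.normal(0, 1.0, n)

# Known intercept: regress (y - beta0) on x through the origin
z = y - beta0
b1_known = np.sum(x * z) / np.sum(x**2)
ms_res1 = np.sum((z - b1_known * x) ** 2) / (n - 1)   # n-1 residual df
L1 = 2 * stats.t.ppf(1 - alpha / 2, n - 1) * np.sqrt(ms_res1 / np.sum(x**2))

# Unknown intercept: ordinary least squares with a fitted intercept
xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)
b1_ols = np.sum((x - xbar) * (y - ybar)) / sxx
b0_hat = ybar - b1_ols * xbar
ms_res2 = np.sum((y - b0_hat - b1_ols * x) ** 2) / (n - 2)   # n-2 residual df
L2 = 2 * stats.t.ppf(1 - alpha / 2, n - 2) * np.sqrt(ms_res2 / sxx)

print(L1, L2)
```

In this setup $L_1 < L_2$ because the denominator $\sum x_i^2$ dominates $\sum(x_i-\bar{x})^2$ whenever $\bar{x}$ is far from zero; the residual mean squares are both close to $\sigma^2$, so the denominators drive the comparison.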