I was told that, in linear regression, the $95\%$ CI for $\beta$ is $$\beta \in \left(\hat\beta-2 \sigma,\hat\beta+2\sigma\right), \text{ where } \sigma = \text{standard error}(\hat\beta).$$
My question: Where does the $2$ come from?
Also, would that $2$ change if, say, we were looking for a $90\%$ CI, or $40\%$ CI?
I recall that the level of confidence affects the $t$-value cutoff for hypothesis testing, but I can't recall how it changes in the above context.
It's because $$2\approx 1.96 = \Phi^{-1}(0.975),$$
where $\Phi^{-1}$ is the quantile function (inverse CDF) of the standard normal distribution $Z\sim N(0,1)$; that is,
$$\mathbb{P}(Z<1.96)\approx 0.975.$$
The number $1.96$ even has its own Wikipedia article. Note: always draw the picture.
If you want a $100(1-\alpha)\%$ confidence interval, use the quantile $\Phi^{-1}\!\left(1-\frac{\alpha}{2}\right)$ in place of $1.96$, since
$$\mathbb{P}\left(-\Phi^{-1}\left(1-\frac{\alpha}{2}\right)<Z<\Phi^{-1}\left(1-\frac{\alpha}{2}\right)\right)=1-\alpha.$$
So yes, the $2$ changes with the confidence level: for a $90\%$ CI you would use $\Phi^{-1}(0.95)\approx 1.645$ instead.
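As a quick check of the formula above, here is a small sketch using SciPy's standard normal quantile function (`norm.ppf` is the inverse CDF $\Phi^{-1}$) to compute the critical value for the confidence levels you mentioned:

```python
# Compute z = Phi^{-1}(1 - alpha/2) for several confidence levels.
from scipy.stats import norm

for conf in (0.95, 0.90, 0.40):
    alpha = 1 - conf
    z = norm.ppf(1 - alpha / 2)  # quantile function (inverse CDF) of N(0, 1)
    print(f"{conf:.0%} CI: z = {z:.4f}")
```

This prints approximately $1.9600$ for $95\%$ (the "$2$" in your formula), $1.6449$ for $90\%$, and $0.5244$ for $40\%$.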