For a linear regression problem, $$ y = x_1\beta_1 + x_2\beta_2 + x_3\beta_3 + x_4\beta_4 + b + \epsilon $$ If I have method 1, that estimate the coefficients as: $$ \beta_{1,1}, \beta_{2,1}, \beta_{3,1}, \beta_{4,1} $$ with corresponding p-values: $$ p_{1,1}, p_{2,1}, p_{3,1}, p_{4,1} $$
With another method, I estimated: $$ \beta_{1,2}, \beta_{2,2}, \beta_{3,2}, \beta_{4,2} $$ with corresponding p-values: $$ p_{1,2}, p_{2,2}, p_{3,2}, p_{4,2} $$
Suppose $|\beta_{1,1}-\beta_1|<|\beta_{1,2}-\beta_1|$, i.e. method 1 estimates $\beta_1$ with smaller error. Does that tell me anything about $p_{1,1}$ and $p_{1,2}$?
What if we know for sure that $\beta_1\neq 0$ (but the null hypothesis is still $\beta_1=0$)?
Is there any general relationship between the estimate of $\beta$ and the p-value? Is it simply that the larger the estimated $|\beta|$ is, the lower the p-value is (in general)?
It also depends on their (estimated) variances: if $|\beta_{1,1}| \le |\beta_{1,2}|$ and the two estimated variances are equal, then $p_{1,1} \ge p_{1,2}$. Without knowing anything about the variances, however, it is impossible to say anything about this relationship. Generally, recall that the p-value is defined as $$ p.v = P_{H_0}\left(\left|T_{(n-p)}\right|\ge\left|\frac{\hat{\beta}}{\hat{\sigma}_{\hat{\beta}} } \right| \right), $$ where $p$ is the number of $\beta$s, $T_{(n-p)}$ is a random variable following a Student's $t$ distribution with $n-p$ degrees of freedom, and the original data $\{y_i, x_{i1},\dots, x_{ip}\}_{i=1}^n$ are assumed to be generated by $Y=X\beta+\varepsilon$, where $$ \varepsilon\sim N_{n}(0,\sigma^2I). $$
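To illustrate the point numerically, here is a small sketch (the concrete numbers below are made up for illustration) that computes the two-sided p-value from $t = \hat\beta/\hat\sigma_{\hat\beta}$ with $n-p$ degrees of freedom. It shows that with equal standard errors the larger $|\hat\beta|$ has the smaller p-value, but a larger standard error can reverse the ordering:

```python
# Sketch with illustrative numbers: two-sided p-value for H0: beta = 0,
# computed from the t statistic t = beta_hat / se with n - p degrees of freedom.
from scipy import stats

def p_value(beta_hat, se, df):
    """Two-sided p-value P(|T_df| >= |beta_hat / se|) under H0: beta = 0."""
    t = abs(beta_hat / se)
    return 2 * stats.t.sf(t, df)  # sf = survival function = 1 - cdf

df = 95  # e.g. n = 100 observations, p = 5 coefficients (assumed values)

# Same standard error: the larger |beta_hat| gives the smaller p-value.
p_small = p_value(0.5, 0.2, df)
p_large = p_value(1.0, 0.2, df)
assert p_large < p_small

# A larger estimate with a larger standard error can have a LARGER p-value,
# so |beta_hat| alone determines nothing about the p-value.
p_noisy = p_value(1.0, 0.6, df)
assert p_noisy > p_small
```

The only quantity that drives the p-value is the ratio $\hat\beta/\hat\sigma_{\hat\beta}$, not $\hat\beta$ by itself.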