How do I test if an observed point is expected from a linear regression?


I have a set of $N$ observations of $x$ and $y$, where each observation is characterized by a pair of independent Gaussian distributions: $x\sim N(x_i, {\epsilon_{x,i}}^2)$ and $y\sim N(y_i, {\epsilon_{y,i}}^2)$.

To find the best-fit linear model, $y=A+Bx$, I minimized the following quantity: $$ \sum_{i=1}^{N} \frac{(y_i-A-Bx_i)^2}{{\epsilon_{y,i}}^2+B^2 {\epsilon_{x,i}}^2} $$ which is the appropriate chi-square when both variables carry measurement errors. So I now have best-fit parameters $A, B$ and their corresponding errors $\delta_A, \delta_B$.
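For concreteness, this is roughly how I carried out the minimization (a sketch with simulated data; the variable names `x`, `y`, `ex`, `ey` and the true parameters are placeholders, not my real data):

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data for illustration only (true line: y = 1 + 2x)
rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 30)
ex = np.full(30, 0.2)   # per-point sigma on x
ey = np.full(30, 0.5)   # per-point sigma on y
x = x_true + rng.normal(0.0, ex)
y = 1.0 + 2.0 * x_true + rng.normal(0.0, ey)

def chi2(params):
    """Chi-square with measurement errors in both x and y."""
    A, B = params
    return np.sum((y - A - B * x) ** 2 / (ey ** 2 + B ** 2 * ex ** 2))

res = minimize(chi2, x0=[0.0, 1.0])
A_hat, B_hat = res.x
```

Note that $B$ appears in the denominator, so unlike ordinary least squares this has no closed-form solution and needs a numerical minimizer.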

Then I made another observation, $x\sim N(X, {\epsilon_{X}}^2), y\sim N(Y, {\epsilon_{Y}}^2)$. I want to test whether it deviates from the fitted line $y=A+Bx$, and to quantify by how much, preferably as a standard $z$-score so that I can compute a $p$-value, or as the probability of observing a point at such a distance from the line. How can I calculate this?
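Here is a sketch of the kind of calculation I have in mind. The variance combination is my own guess: I sum the measurement variances (slope-projected, as in the fit) with the parameter variances, and I neglect any covariance between $A$ and $B$, which may well be wrong:

```python
import numpy as np
from scipy.stats import norm

def z_score(X, eX, Y, eY, A, dA, B, dB):
    """z-score of a new point (X, Y) against the line y = A + B x.

    ASSUMPTION: the residual variance is taken as
    eY^2 + B^2 eX^2 + dA^2 + X^2 dB^2, i.e. cov(A, B) is ignored.
    """
    resid = Y - A - B * X
    var = eY**2 + B**2 * eX**2 + dA**2 + X**2 * dB**2
    return resid / np.sqrt(var)

# Illustrative numbers (not real data):
z = z_score(X=5.0, eX=0.2, Y=12.5, eY=0.5, A=1.0, dA=0.1, B=2.0, dB=0.05)
p = 2 * norm.sf(abs(z))   # two-sided p-value
```

Is this the right way to combine the measurement and fit uncertainties, or should the $A$–$B$ covariance enter here?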