Hypothesis testing with error


Assume two univariate normal distributions, e.g. $A \sim N(0,1)$ and $B \sim N(1,1)$ (means $0$ and $1$, unit variance).

Assume that we receive a value, e.g. $t=0.2$ (e.g. via a measurement). We want to perform hypothesis testing and decide whether $t$ belongs to distribution $A$ or distribution $B$. We can evaluate the densities $N(0.2;\,0,1)$ and $N(0.2;\,1,1)$ and decide by maximum likelihood, i.e. pick the distribution under which $t$ has the larger density.
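The maximum-likelihood decision above can be sketched as follows (a minimal illustration, using only the standard library; the densities and the value $t=0.2$ are those from the question):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

t = 0.2
like_A = normal_pdf(t, 0.0, 1.0)  # likelihood under A ~ N(0, 1)
like_B = normal_pdf(t, 1.0, 1.0)  # likelihood under B ~ N(1, 1)

# Decide for whichever hypothesis gives the larger density at t.
decision = "A" if like_A > like_B else "B"
```

For $t=0.2$ the density under $A$ ($\approx 0.391$) exceeds that under $B$ ($\approx 0.290$), so the rule decides for $A$.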

Reduced-resolution case: Now assume that the value we receive is $t=0.2$ with a known error $e$, i.e. the true value lies in the interval $[0.2-e,\ 0.2+e]$. The error $e$ arises from the limited resolution of the measurement device. How can I proceed to compute the effect of the error $e$ on the type I and type II error rates?
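One natural way to handle the interval observation is to compare, under each hypothesis, the probability that the true value falls in $[t-e,\ t+e]$, i.e. to replace point densities with CDF differences. A minimal sketch (the value of $e=0.1$ is an assumed example, not from the question):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

t, e = 0.2, 0.1  # measured value and (assumed) resolution error

# Probability mass of the interval [t - e, t + e] under each hypothesis.
p_A = normal_cdf(t + e, 0.0, 1.0) - normal_cdf(t - e, 0.0, 1.0)
p_B = normal_cdf(t + e, 1.0, 1.0) - normal_cdf(t - e, 1.0, 1.0)

decision = "A" if p_A > p_B else "B"
```

For small $e$ each interval probability is approximately $2e$ times the density at $t$, so this rule agrees with the point-likelihood decision in the limit $e \to 0$.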

Questions:

- Is this the correct way to model the error due to imprecise measurements?
- Is the measurement uniformly distributed over the interval $[0.2-e,\ 0.2+e]$?
- Are there any concepts/ideas/models that can be used for such situations?
- Can the error $e$ be integrated directly into the variances of distributions $A$ and $B$ (to avoid the extra layer of complexity)?
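On the last question: if the measurement error is modeled as uniform on $[-e, e]$, its variance is $e^2/3$. The observed value is then the true value plus this error, so its distribution is the convolution of the normal with the uniform; that convolution is not exactly normal, but a common moment-matching approximation simply inflates each hypothesis's variance by $e^2/3$ (exact only if the error were itself Gaussian). A sketch of this approximation, with $e=0.1$ assumed:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

e = 0.1                                # assumed resolution error
var_err = e ** 2 / 3.0                 # variance of Uniform(-e, e)
sigma_inf = math.sqrt(1.0 + var_err)   # inflated std. dev. for both hypotheses

t = 0.2
# Likelihoods under the variance-inflated hypotheses.
like_A = normal_pdf(t, 0.0, sigma_inf)
like_B = normal_pdf(t, 1.0, sigma_inf)
decision = "A" if like_A > like_B else "B"
```

Since both hypotheses have the same variance, inflating it equally leaves the decision boundary at the midpoint $t = 0.5$ unchanged; the inflation matters for computing the type I/II error rates, which depend on the spread.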