Consider a random variable $Y$ and the pair of simple hypotheses:
$H_0 : Y \sim N(0,10)$
$H_1 : Y \sim N(m,10)$
Here $m$ is an unknown mean that is sampled once from $N(0,1)$ and then held fixed thereafter.
Suppose you observe $100$ samples of $Y$. For each observed sample $y_i$, you must decide whether $y_i$ was generated under $H_0$ or under $H_1$.
There are two settings here. In the batch case, you observe all $100$ samples first and then make a decision on each $y_i$, $1\leq i \leq 100$. Is this a well-known detection theory problem? I have not been able to find any reference on how to handle it.
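For what it's worth, one natural baseline here is to marginalize $m$ out using its $N(0,1)$ prior: if $N(0,10)$ denotes variance $10$, then under $H_1$ each sample is marginally $N(0, 10+1) = N(0,11)$, so a per-sample likelihood-ratio test compares two zero-mean Gaussians. A minimal sketch (the threshold of $1$, i.e. equal priors on $H_0$/$H_1$, and the variance convention are my assumptions):

```python
import math

def normal_pdf(y, mean, var):
    """Density of N(mean, var) at y (var is the variance)."""
    return math.exp(-(y - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def marginal_lr(y, noise_var=10.0, prior_var=1.0):
    """p(y | H1) / p(y | H0), with m ~ N(0, prior_var) integrated out.

    Under H1, marginally y ~ N(0, noise_var + prior_var).
    """
    return normal_pdf(y, 0.0, noise_var + prior_var) / normal_pdf(y, 0.0, noise_var)

# Decide H1 when the likelihood ratio exceeds the (assumed) threshold 1.0.
decisions = [marginal_lr(y) > 1.0 for y in (0.1, -5.0, 7.5)]
```

Since both densities are zero-mean and the $H_1$ variance is larger, the ratio is monotone increasing in $|y|$, so this reduces to a simple threshold on $|y|$.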
Alternatively, consider a sequential version of the test: a decision on each $y_i$ is made as soon as it is sampled. Shouldn't I be able to use the decisions made on, say, the first $10$ samples to make a better decision on the $11^{\text{th}}$ sample? Yet I have not been able to find a good reference for such a problem.
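One way to make that sequential intuition concrete is the standard conjugate Gaussian update: if the samples decided as $H_1$ so far are treated as draws from $N(m,10)$ (variance $10$), the posterior of $m$ is again Gaussian and can replace the $N(0,1)$ prior when testing the next sample. This is only a sketch of that bookkeeping; treating past decisions as correct is itself an approximation:

```python
def posterior_of_m(h1_samples, noise_var=10.0, prior_var=1.0):
    """Posterior mean and variance of m ~ N(0, prior_var), given samples
    assumed to be drawn from N(m, noise_var) (conjugate Gaussian update)."""
    n = len(h1_samples)
    post_prec = 1.0 / prior_var + n / noise_var      # precisions add
    post_mean = (sum(h1_samples) / noise_var) / post_prec
    return post_mean, 1.0 / post_prec
```

The next sample can then be tested against the predictive density under $H_1$, which is $N(\hat m,\; 10 + \hat v)$ where $(\hat m, \hat v)$ is the current posterior.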
Note that although the mean $m$ is fixed, its value is unknown; the only prior information available is that it was drawn from a Gaussian.