Normality through time (2)


Here is my problem (I have already spoken about it in another question, but this is a different question):

We observe data $X(t)$ that depend on the time $t$. We make these observations at given times $t_0,\dots,t_n$ for some $n\in\mathbb{N}$. At each of these times, we check whether the data are normal using a normality test (Anderson-Darling, Shapiro-Wilk, D'Agostino) in order to run a capability study, i.e. to compute a Ppk (process performance index). The problem is that we observe normality for $X(t_0),\dots,X(t_{k-1})$, no normality for $X(t_k)$, and again normality for $X(t_{k+1}),\dots,X(t_n)$, for some $0<k<n$. Let us call $\mu_i$ and $\sigma_i$ the mean and standard deviation, respectively, of $X(t_i)$ for $i\in\{0,\dots,n\}$. I observed $\mu_{k+1}<\mu_k<\mu_{k-1}$ and $\sigma_{k+1}<\sigma_k<\sigma_{k-1}$, which suggests that the Ppk at time $t_k$ should lie somewhere between the one at $t_{k-1}$ and the one at $t_{k+1}$.
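To make the setting concrete, here is a minimal sketch of the per-time analysis in Python. The specification limits `LSL`/`USL`, the sample size, and the simulated downward drift in mean and standard deviation are all assumptions for illustration, not values from the question:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical spec limits for the measured quantity (assumed, not from the post)
LSL, USL = 0.90, 1.10

# Simulated samples X(t_i) at a few times, drifting downward in mean and sd,
# mimicking the observed mu_{k+1} < mu_k < mu_{k-1} and sigma_{k+1} < sigma_k < sigma_{k-1}
samples = [rng.normal(loc=1.00 - 0.01 * i, scale=0.02 - 0.002 * i, size=50)
           for i in range(4)]

for i, x in enumerate(samples):
    mu, sigma = x.mean(), x.std(ddof=1)
    # Ppk uses the overall standard deviation: min of the two one-sided capabilities
    ppk = min((USL - mu) / (3 * sigma), (mu - LSL) / (3 * sigma))
    w, p = stats.shapiro(x)  # Shapiro-Wilk normality test at time t_i
    print(f"t{i}: mu={mu:.4f} sd={sigma:.4f} Ppk={ppk:.2f} Shapiro p={p:.3f}")
```

With a monotone drift in both $\mu_i$ and $\sigma_i$, the Ppk indeed tends to move monotonically as well, which is what motivates interpolating the value at $t_k$.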

My question is: would it make sense to use a Benjamini-Hochberg approach to control the FDR (false discovery rate) for the family of hypotheses $(X(t_0) \text{ is normal}, \dots, X(t_n) \text{ is normal})$? Or should I instead control the family-wise error rate (FWER) for this family of hypotheses?
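For reference, the Benjamini-Hochberg step-up procedure is straightforward to apply to the collection of normality p-values. A minimal sketch, where the p-values are made up for illustration (one small p-value playing the role of $X(t_k)$), with a Bonferroni (FWER) correction shown for comparison:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array: which hypotheses are rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # BH condition: p_(i) <= alpha * i / m for the sorted p-values
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting the BH condition
        reject[order[:k + 1]] = True
    return reject

# Hypothetical normality p-values at times t_0..t_5; only one looks non-normal
pvals = [0.40, 0.55, 0.21, 0.003, 0.33, 0.62]
reject_fdr = benjamini_hochberg(pvals, alpha=0.05)
# Bonferroni (FWER) correction for comparison: reject where p < alpha / m
reject_fwer = np.asarray(pvals) < 0.05 / len(pvals)
print(reject_fdr)
print(reject_fwer)
```

With a single small p-value among otherwise large ones, both procedures flag only that one test; they differ when several p-values are moderately small, where BH rejects more often than the stricter Bonferroni bound.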

What makes the decision difficult is that $X(t_k)$ represents a volume expelled from a syringe, which decreases with time, and perhaps some properties of the syringe alter the observed values (though they should still be normal).

From my point of view, we are still testing a property of the same device at each time, so this amounts to a simultaneous multiple-hypothesis test with different parameters.