Consider two sample means $\bar{X}_1$ and $\bar{X}_2$ from the same normal population with mean $\mu$ and standard deviation $\sigma$. The first sample mean is based on $n_1 = 10^k$ observations, while the second is based on $n_2 = 10^{k+2}$ observations, for some positive integer $k$. Take any percentile, say the $100\alpha$-th, from the distribution of the sample means for each sample size, excluding the median. What is the ratio of the distances of the two percentiles from $\mu$, dividing distance 1 by distance 2?
I need help breaking this question down. I understand that we start with two sample means from the same population, and that the two samples have different sizes: $10^k$ observations for the first and $10^{k+2}$ for the second.
But I start to lose the thread after this. Why does the instruction take pains to "exclude the median"? Is this important, or is it just a red herring? Are we now considering two collections of sample means, one built from samples of size $10^k$ and the other from samples of size $10^{k+2}$? And how would I even begin to measure a percentile's distance from $\mu$? Does this involve standardizing the percentile to a z-score?
To "help breaking this question down" as you asked, here are some general steps:
Editing to add some sub-steps for step 2, since that is where your z-score question comes in:

- Standardize: if $P_n$ is the $100\alpha$-th percentile of the sampling distribution for sample size $n$, then $P_n = \mu + z_\alpha \, \sigma/\sqrt{n}$, where $z_\alpha$ is the $100\alpha$-th percentile of the standard normal. So yes, the z-score is exactly the link you suspected.
- The distance from $\mu$ is then $|P_n - \mu| = |z_\alpha| \, \sigma/\sqrt{n}$.
- This is why the median is excluded: at $\alpha = 0.5$ we have $z_\alpha = 0$, both distances are zero, and the ratio would be the undefined $0/0$. For any other percentile, $|z_\alpha| > 0$ and it cancels.
- The ratio is therefore
  $$\frac{|z_\alpha|\,\sigma/\sqrt{n_1}}{|z_\alpha|\,\sigma/\sqrt{n_2}} = \sqrt{\frac{n_2}{n_1}} = \sqrt{\frac{10^{k+2}}{10^k}} = \sqrt{100} = 10.$$
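And a direct check of that algebra, again as a sketch under the same assumed illustrative values (SciPy is assumed to be available so that `norm.ppf` can supply $z_\alpha$):

```python
# Closed-form check of the sub-steps above. mu, sigma, k, and alpha are
# arbitrary illustrative values; only the sample sizes matter for the ratio.
from math import sqrt
from scipy.stats import norm

mu, sigma, k, alpha = 50.0, 8.0, 1, 0.90      # hypothetical illustrative values
n1, n2 = 10**k, 10**(k + 2)

z = norm.ppf(alpha)                           # standard normal 100*alpha-th percentile
p1 = mu + z * sigma / sqrt(n1)                # percentile of the n1 sampling distribution
p2 = mu + z * sigma / sqrt(n2)                # percentile of the n2 sampling distribution

ratio = abs(p1 - mu) / abs(p2 - mu)
print(ratio, sqrt(n2 / n1))                   # both print 10 (up to floating-point rounding)
```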