Suppose we have data $(X_1, X_2, X_3)$ (I'll refer to the categories as 1, 2, and 3) following a multinomial distribution with parameters $n$ and $(p_1, p_2, p_3)$, and we want to test the hypothesis that $p_1>p_2>p_3$. I am trying to figure out an exact testing procedure for this.
One idea I had is to first condition $X_3$ on $X_1$, i.e., look at how many times 3 was realized when the options were 2 and 3 only. According to the hypothesis, the conditional probability of 3 should be less than 1/2, and we could construct a simple binomial test for this since $X_3|X_1$ has a binomial distribution with parameters $n-X_1$ and $\frac{p_3}{p_2+p_3}$; we would reject if the count of 3's exceeded some critical value $k$.
Secondly, we could perform a similar test for $X_2|X_3$: if the hypothesis is true, we should see 2 chosen less than half the time when the options are only 1 and 2, and an analogous binomial test would work here. Overall, we would reject the original hypothesis if either of these tests rejects. However, controlling the size seems difficult to me in this case, since the tests are not independent (at least they do not obviously appear to be).
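To make the two-test procedure concrete, here is a sketch in Python (function names are mine, and the Bonferroni split of $\alpha$ across the two tests is an assumption about how one would combine them). Each conditional test is evaluated at the boundary value $1/2$, which is the least favorable point of the null within each comparison:

```python
from math import comb

def binom_sf(k, n, p):
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def ordered_multinomial_test(x1, x2, x3, alpha=0.05):
    """Test H0: p1 > p2 > p3 via two conditional binomial tests.

    Test A: among the n - x1 trials that fell in {2, 3}, category 3 is
    binomial with success probability p3/(p2+p3), which is < 1/2 under H0,
    so a large x3 is evidence against p2 > p3.  Test B is the analogous
    test of p1 > p2 on the n - x3 trials that fell in {1, 2}.
    Each test is run at level alpha/2 (Bonferroni).
    """
    p_a = binom_sf(x3, x2 + x3, 0.5)  # evidence against p2 > p3
    p_b = binom_sf(x2, x1 + x2, 0.5)  # evidence against p1 > p2
    reject = (p_a < alpha / 2) or (p_b < alpha / 2)
    return reject, (p_a, p_b)
```

For example, `ordered_multinomial_test(10, 30, 60)` rejects (the counts run opposite to the hypothesized ordering), while `ordered_multinomial_test(60, 30, 10)` does not.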
Is there a better approach to testing this hypothesis? I have considered confidence intervals for the $p_i$'s, but I am not sure whether they should be two-sided or one-sided. I have seen procedures for confidence intervals for $p_i-p_j$ with $i\neq j$, but these rely on asymptotic approximations.
Anyway, any help or suggestions or references would be greatly appreciated.
It's not a problem that your two tests are somewhat dependent: with the total number of observations fixed, seeing evidence that $p_1 \leq p_2$ DECREASES the chance of also seeing evidence that $p_2 \leq p_3$, so the two rejection events are negatively dependent. You'll therefore actually be conservative if you reject when either of the tests rejects. Since you only need one test to reject, you do have a multiple-testing issue, so you should divide your $p$-value cutoff for each test by 2 (a Bonferroni correction).
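A quick Monte Carlo check supports this claim. The sketch below (a stdlib-only illustration, not part of the original answer) simulates the boundary configuration $p_1=p_2=p_3=1/3$ and estimates the rejection rate of the union test with each conditional binomial test run at $\alpha/2$; the result should stay at or below $\alpha$:

```python
import random
from math import comb

def binom_sf(k, n, p):
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def simulate_size(n=60, alpha=0.05, reps=2000, seed=0):
    """Estimate the size of the union test at p1 = p2 = p3 = 1/3,
    a boundary point of H0: p1 > p2 > p3, with Bonferroni-corrected
    conditional binomial tests (each at level alpha/2)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        draws = [rng.random() for _ in range(n)]
        x1 = sum(d < 1 / 3 for d in draws)
        x2 = sum(1 / 3 <= d < 2 / 3 for d in draws)
        x3 = n - x1 - x2
        p_a = binom_sf(x3, x2 + x3, 0.5)  # tests p2 > p3
        p_b = binom_sf(x2, x1 + x2, 0.5)  # tests p1 > p2
        rejections += (p_a < alpha / 2) or (p_b < alpha / 2)
    return rejections / reps
```

In runs I would expect the estimated size to land below the nominal $\alpha = 0.05$, reflecting both the discreteness of the binomial tests and the negative dependence argued above.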