Suppose $Z$ is an $n$-dimensional multivariate normal random variable and $x$ is a fixed vector in $\mathbb{R}^n$. Let $i$ be an index between $1$ and $n$ and let $k$ be an integer between $1$ and $n-1$. Is the following relationship between these two probabilities true?
\begin{align} & P(Z_i>x_i \mid Z_j>x_j \text{ for exactly $k$ components $j$}) \\[10pt] < {} & P(Z_i>x_i \mid Z_j>x_j \text{ for exactly $k+1$ components $j$}) \end{align}
This seems logical to me: the fact that $k+1$ components cross their thresholds, as opposed to just $k$, should increase the probability that a specific one crossed its threshold. However, I have no idea why this has to be true, only why I think it should be true. I would really appreciate a formal proof or a counterexample. Thanks!
Here's an incomplete answer: in some special circumstances the guess is true, but in others it is not.
Assuming the components $Z_j$, $j=1,\ldots,n$ are uncorrelated (and thus independent, since they're jointly normal) this reduces to a problem on Bernoulli random variables: Let $$ Y_j = \begin{cases} 1 &\text{if }Z_j>x_j, \\ 0 & \text{otherwise}. \end{cases} $$ Then $Y_j\sim\mathrm{Bernoulli}(p_j)$ for $j=1,\ldots,n$, where $p_j = \Pr(Z_j>x_j)$, and $Y_j$, $j=1,\ldots,n$ are independent. Then the question is whether $\Pr(Y_1=1\mid Y_1+\cdots+Y_n=y)$ is an increasing function of $y$.
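To support that claim numerically, here is a short Python sketch that computes $\Pr(Y_1=1\mid Y_1+\cdots+Y_n=y)$ exactly by enumerating all $2^n$ outcomes and checks that it increases in $y$. The particular values of $n$ and the $p_j$ are illustrative choices, not from the original:

```python
from itertools import product

def cond_prob(p, y):
    """P(Y_1 = 1 | Y_1 + ... + Y_n = y) for independent
    Bernoulli(p_j) variables, by exhaustive enumeration."""
    num = den = 0.0
    for bits in product((0, 1), repeat=len(p)):
        if sum(bits) != y:
            continue
        prob = 1.0
        for pj, b in zip(p, bits):
            prob *= pj if b else 1 - pj
        den += prob          # P(sum = y)
        if bits[0] == 1:
            num += prob      # P(Y_1 = 1 and sum = y)
    return num / den

# unequal success probabilities, as would arise from unequal thresholds x_j
p = [0.3, 0.5, 0.7, 0.2]
probs = [cond_prob(p, y) for y in range(1, len(p) + 1)]
assert all(a < b for a, b in zip(probs, probs[1:]))  # increasing in y
```

Note that this covers the heterogeneous case ($p_j$ not all equal), not just the symmetric case treated below.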
The answer to that is "yes". One way to see it: the conditional odds satisfy $$ \frac{\Pr(Y_1=1\mid S=y)}{\Pr(Y_1=0\mid S=y)} = \frac{p_1}{1-p_1}\cdot\frac{\Pr(T=y-1)}{\Pr(T=y)}, $$ where $S=Y_1+\cdots+Y_n$ and $T=Y_2+\cdots+Y_n$, and the ratio $\Pr(T=y-1)/\Pr(T=y)$ is increasing in $y$ because the distribution of a sum of independent Bernoulli random variables is log-concave. I've commented out some preliminary scratchwork below and may return to finish it later.
Here's one narrow special case in which it is easy to show the answer is "yes": suppose $Z_1,\ldots,Z_n$ are independent (which is not generally true of components of a multivariate normal random variable), have expected value $0$ (also not generally true), have variance $1$ (also not generally true), and $x_1=\cdots=x_n$, so that $p_1=\cdots=p_n=p$. In that case we can let $$ Y_j = \begin{cases} 1 & \text{if }Z_j>x_j, \\ 0 & \text{otherwise}. \end{cases} $$ Then \begin{align} & \Pr(Y_1=1 \mid Y_1+\cdots+Y_n=y) = \frac{\Pr(Y_1=1)\Pr(Y_2+\cdots+Y_n=y-1)}{\Pr(Y_1+\cdots+Y_n=y)} \\[10pt] = {} & \frac{p \cdot \dbinom{n-1}{y-1} p^{y-1}(1-p)^{n-y} }{\dbinom n y p^y (1-p)^{n-y}} = \frac y n, \end{align} and that certainly increases as $y$ increases.
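The $y/n$ formula can be checked numerically from the binomial expression in the display above; the values $n=7$ and $p=0.42$ below are arbitrary:

```python
from math import comb

def cond_prob_equal(n, p, y):
    """P(Y_1=1 | Y_1+...+Y_n=y) when Y_1,...,Y_n are i.i.d.
    Bernoulli(p), via the binomial formula from the answer."""
    num = p * comb(n - 1, y - 1) * p**(y - 1) * (1 - p)**(n - y)
    den = comb(n, y) * p**y * (1 - p)**(n - y)
    return num / den

n, p = 7, 0.42
for y in range(1, n + 1):
    # the binomial coefficients and powers of p cancel down to y/n
    assert abs(cond_prob_equal(n, p, y) - y / n) < 1e-12
```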
However, one must consider negative correlations. Suppose, for example, that $W_1,\ldots,W_n\sim\mathrm{i.i.d.}\,N(0,1)$, let $\bar W=(W_1+\cdots + W_n)/n$, and set $Z_j= W_j-\bar W$ for $j=1,\ldots,n$. Then the vector $(Z_1,\ldots,Z_n)$ satisfies the constraint $Z_1+\cdots+Z_n=0$ and has a multivariate normal distribution whose covariance matrix is singular, with every off-diagonal entry negative (each equals $-1/n$). So the $Z$s are negatively correlated with each other. Now suppose $x_1=\cdots=x_n=0$. Then $\Pr(Z_n>x_n) = 1/2$, but $\Pr(Z_n>x_n \mid Z_i>x_i \text{ for all } i\le n-1) =0$, since the $Z_j$ sum to zero.
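This counterexample is easy to verify by simulation; the dimension $n=4$, the seed, and the trial count below are arbitrary choices. Because $Z_1+\cdots+Z_n=0$ exactly, the event $Z_n>0$ never occurs once the other $n-1$ components are all positive:

```python
import random

random.seed(0)
n, trials = 4, 200_000
count_rest_positive = 0   # Z_1, ..., Z_{n-1} all positive
count_last_positive = 0   # ... and Z_n positive as well
for _ in range(trials):
    w = [random.gauss(0, 1) for _ in range(n)]
    wbar = sum(w) / n
    z = [wj - wbar for wj in w]          # centered: the z sum to 0
    if all(zj > 0 for zj in z[:-1]):
        count_rest_positive += 1
        if z[-1] > 0:
            count_last_positive += 1

# Z_n = -(Z_1 + ... + Z_{n-1}) < 0 whenever the others are positive.
assert count_rest_positive > 0
assert count_last_positive == 0
```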
So in some cases the answer is "no".