I have a Gaussian random variable $X \sim N(\mu,\sigma^2)$. I then define $n$ new variables $Y_j$ by $$ Y_j = a_j + (X - b_j)^2, $$ where the $a_j$ and $b_j$ are given positive constants, so the $Y_j$ are clearly dependent. For a positive constant $C$ (assume $C > a_j$ for every $j$, so the square roots below are real), I have to compute the probability $P(\min_j Y_j > C)$.
I did the following steps, but I was told this is not correct.
\begin{align*} P(\min_j Y_j > C) &= P\big( (Y_1,\dots,Y_n) > (C,\dots,C)\big)\\ &= P\big( ((X-b_1)^2,\dots,(X-b_n)^2) > (C-a_1,\dots,C-a_n)\big)\\ &= P\big( (|X-b_1|,\dots,|X-b_n|) > (\sqrt{C-a_1},\dots,\sqrt{C-a_n})\big)\\ &= P\big( (X,\dots,X) > (\sqrt{C-a_1} + b_1,\dots,\sqrt{C-a_n} + b_n)\big) + P\big( (X,\dots,X) < (-\sqrt{C-a_1} + b_1,\dots,-\sqrt{C-a_n} + b_n)\big)\\ &= P\Big(X > \max_j \big(\sqrt{C-a_j} + b_j\big)\Big) + P\Big(X < \min_j \big(-\sqrt{C-a_j} + b_j\big)\Big) \end{align*}
So, which equality is false, and why? My guess is that it is the absolute value that cannot be expanded this way for a joint event, but why not?
What would be the right way to go?
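For what it's worth, a quick Monte Carlo sanity check (with arbitrary illustrative constants of my own choosing) shows the final expression is indeed off:

```python
import math
import random

# Illustrative constants (hypothetical -- chosen only to test the formula numerically)
mu, sigma = 3.0, 1.0
a = [0.5, 0.5]
b = [1.0, 5.0]
C = 1.5

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Monte Carlo estimate of P(min_j Y_j > C)
random.seed(0)
n_samples = 200_000
hits = sum(
    min(aj + (x - bj) ** 2 for aj, bj in zip(a, b)) > C
    for x in (random.gauss(mu, sigma) for _ in range(n_samples))
)
mc = hits / n_samples

# The final expression from the derivation above
upper = max(math.sqrt(C - aj) + bj for aj, bj in zip(a, b))  # max_j (sqrt(C-a_j) + b_j) = 6
lower = min(bj - math.sqrt(C - aj) for aj, bj in zip(a, b))  # min_j (b_j - sqrt(C-a_j)) = 0
claimed = (1.0 - phi((upper - mu) / sigma)) + phi((lower - mu) / sigma)

print(f"Monte Carlo: {mc:.4f}")  # ≈ 0.685
print(f"Derivation:  {claimed:.4f}")  # ≈ 0.0027, far too small
```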
You are assuming that either $X>\sqrt {C-a_j} +b_j$ for all $j$, or $X<-\sqrt {C-a_j} +b_j$ for all $j$. But the first inequality can hold for some $j$ and the second for other $j$'s. The correct event is the intersection $\bigcap_j \{|X-b_j|>\sqrt{C-a_j}\}$: $X$ must avoid every interval $[b_j-\sqrt{C-a_j},\; b_j+\sqrt{C-a_j}]$, so $$P(\min_j Y_j > C) = 1 - P\Big(X \in \bigcup_j \big[b_j-\sqrt{C-a_j},\; b_j+\sqrt{C-a_j}\big]\Big).$$
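As a sketch (function and variable names are mine, not from the problem): merge the excluded intervals $[b_j-\sqrt{C-a_j},\, b_j+\sqrt{C-a_j}]$ and subtract their total Gaussian mass from $1$:

```python
import math

def phi(x, mu, sigma):
    """CDF of N(mu, sigma^2) at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def p_min_exceeds(a, b, C, mu, sigma):
    """P(min_j Y_j > C) where Y_j = a_j + (X - b_j)^2 and X ~ N(mu, sigma^2).

    The event is the intersection over j of {|X - b_j| > sqrt(C - a_j)},
    i.e. X must avoid every interval [b_j - s_j, b_j + s_j] with
    s_j = sqrt(C - a_j).  Merge those intervals, then subtract their
    total Gaussian probability mass from 1.
    """
    intervals = []
    for aj, bj in zip(a, b):
        if C - aj <= 0:  # then Y_j > C automatically: no constraint from this j
            continue
        s = math.sqrt(C - aj)
        intervals.append((bj - s, bj + s))
    if not intervals:
        return 1.0
    # Merge overlapping intervals
    intervals.sort()
    merged = [list(intervals[0])]
    for lo, hi in intervals[1:]:
        if lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    mass = sum(phi(hi, mu, sigma) - phi(lo, mu, sigma) for lo, hi in merged)
    return 1.0 - mass
```

For a single $j$ with $a_1=0$, $b_1=0$, $C=1$, $\mu=0$, $\sigma=1$, this reduces to $P(X^2>1)=2\,\Phi(-1)\approx 0.3173$, which the function reproduces.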