Let $U \in \mathbb{R}^k$ and $V\in \mathbb{R}^k$ be two independent standard normal vectors (i.e., $U \sim \mathcal{N}(0,I)$ and $V \sim \mathcal{N}(0,I)$). Define the set $S$ as \begin{align} S=\{ x \in \mathbb{R}^k: x_1 \le x_2 \le \cdots \le x_k \}. \end{align}
We are interested in computing the following conditional expectation \begin{align} E\left[ \|U\|^2 \mid U+V \in S , V\in S \right]. \end{align}
My guess is that, most likely, there is no closed-form expression, so an upper bound would also be fine.
One upper bound that I tried is via Cauchy-Schwarz: \begin{align} E\left[ \|U\|^2 \mid U+V \in S , V\in S \right]&= \frac{E\left[ \|U\|^2 1_{ \{ U+V \in S , V\in S \}} \right] }{P [ U+V \in S , V\in S ]}\\ &\le \frac{ \sqrt{E\left[ \|U\|^4 \right]} \sqrt{ P [ U+V \in S , V\in S ]} }{P [ U+V \in S , V\in S ]}\\ &= \frac{ \sqrt{E\left[ \|U\|^4 \right]} }{\sqrt{ P [ U+V \in S , V\in S ]}}. \end{align}
Now computing $E\left[ \|U\|^4 \right]$ is simple: $\|U\|^2\sim\chi^2_k$, so $E\left[\|U\|^4\right]=k(k+2)$. However, $P [ U+V \in S , V\in S ]$ is not so easy. I tried using the inclusion-exclusion principle \begin{align} P [ U+V \in S , V\in S ]&= P [ U+V \in S ]+ P [ V\in S ]- P [ U+V \in S \text{ or } V\in S ]\\ &= \frac{2}{k!}-P [ U+V \in S \text{ or } V\in S ], \end{align} where we used that $P [ U+V \in S ]= P [ V\in S ]=\frac{1}{k!}$, since both $U+V$ and $V$ are exchangeable Gaussian vectors, so each of the $k!$ orderings of the coordinates is equally likely.
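To get a feel for the numbers, here is a minimal Monte Carlo sketch that estimates $P[U+V\in S, V\in S]$ directly and plugs it into the Cauchy-Schwarz bound $\sqrt{k(k+2)/P}$; the choices $k=3$ and the sample size are arbitrary, and this is only an illustration, not part of any derivation.

```python
import numpy as np

# Monte Carlo estimate of P[U+V in S, V in S], where
# S = {x : x_1 <= x_2 <= ... <= x_k}, and of the resulting
# Cauchy-Schwarz bound sqrt(E||U||^4 / P) = sqrt(k(k+2)/P).
# k and n are arbitrary illustrative choices.
rng = np.random.default_rng(0)
k, n = 3, 1_000_000

U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

def in_S(x):
    # True for rows with nondecreasing entries
    return np.all(np.diff(x, axis=1) >= 0, axis=1)

p_hat = np.mean(in_S(U + V) & in_S(V))
cs_bound = np.sqrt(k * (k + 2) / p_hat)  # E||U||^4 = k(k+2) since ||U||^2 ~ chi^2_k
print(f"P estimate: {p_hat:.4f}, Cauchy-Schwarz bound: {cs_bound:.2f}")
```

Note that the joint probability is at most $P[V\in S]=1/k!$, so the estimate should land below $1/6\approx 0.167$ for $k=3$.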
This answer is just writing up the idea in @antkam's comment - I hope that's OK. I'll show that $$\mathbb E[\|U\|^2 \mid U+V\in S,\ V\in S]\le k.$$
The crucial point is that if we fix $V\in S$ and the direction $\widehat U:=U/\|U\|,$ then $\|U\|^2$ is increasing in $\|U\|,$ but the indicator function $1_{U+V\in S}$ is non-increasing in $\|U\|,$ because $S$ is convex:
$$U+V,V\in S \implies \lambda U + V = \lambda(U+V)+(1-\lambda)V\in S\text{ for $0\leq \lambda\leq 1$}$$
So we can use the result that the covariance between an increasing function and a decreasing function of the same real random variable is non-positive. (Quick proof: if $R'$ is an independent copy of $R$, with $f$ increasing and $g$ decreasing, then $(f(R)-f(R'))(g(R)-g(R'))\le 0$ pointwise, and taking expectations gives $2\operatorname{Cov}(f(R),g(R))\le 0.$) You can find proofs on this site, for example at Covariance of increasing functions of random variables. It is important that the direction $\widehat{U}$ and the magnitude $\|U\|$ are independent: the pdf of $U$ factorizes as a (constant) function of the direction multiplied by a function of the magnitude. We get
$$V\in S\implies\operatorname{Cov}(\|U\|^2,1_{U+V\in S}\mid \widehat U, V)\leq 0\text{ a.e.}$$
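As a sanity check, the conditional covariance statement above can be probed numerically for one fixed direction and one fixed $V\in S$. The particular $V$, direction, and $k=3$ below are arbitrary illustrative choices (picked so that the indicator is not almost surely constant), not part of the proof.

```python
import numpy as np

# Numerical illustration: for a fixed V in S and a fixed unit direction
# u_hat, the indicator 1_{||U|| u_hat + V in S} is non-increasing in ||U||,
# so its sample covariance with ||U||^2 should come out non-positive.
rng = np.random.default_rng(0)
k, n = 3, 500_000

V = np.array([0.0, 1.0, 2.0])      # an arbitrary point of S
u_hat = np.array([1.0, 0.0, -1.0])
u_hat /= np.linalg.norm(u_hat)     # fixed unit direction

R = np.linalg.norm(rng.standard_normal((n, k)), axis=1)  # ||U|| ~ chi_k
points = R[:, None] * u_hat + V                          # U + V with U = R * u_hat
ind = np.all(np.diff(points, axis=1) >= 0, axis=1).astype(float)

cov = np.cov(R**2, ind)[0, 1]
print(f"sample Cov(||U||^2, 1_(U+V in S)) = {cov:.4f}")
```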
More explicitly, since $\mathbb E[\|U\|^2]=k$ and $\|U\|^2$ is independent of $(\widehat U, V)$, $$V\in S\implies\mathbb E[\|U\|^21_{U+V\in S}\mid \widehat U, V]\leq k\,\mathbb P[U+V\in S\mid \widehat U, V]\text{ a.e.}$$ Both sides can then be integrated over the event $V\in S$ and divided by $\mathbb P[V\in S]$ to give $$\mathbb E[\|U\|^21_{U+V\in S}\mid V\in S]\leq k\,\mathbb P[U+V\in S\mid V\in S].$$
(Alternatively, use the law of total covariance conditioned on $V\in S,$ which happens to reduce here to the law of total expectation because $\|U\|^2,\widehat U,V$ are independent. This gives $\operatorname{Cov}(\|U\|^2,1_{U+V\in S}\mid V\in S)\leq 0,$ which is the same thing.)
This means $$\mathbb E[\|U\|^2 \mid U+V\in S, V\in S]=\frac{\mathbb E[\|U\|^21_{U+V\in S}\mid V\in S]}{\mathbb P[U+V\in S\mid V\in S]}\leq k.$$
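Finally, the bound itself can be checked by direct Monte Carlo on the original conditional expectation. Again $k=3$ and the sample size are arbitrary choices, and this sketch only illustrates the inequality, it does not prove it.

```python
import numpy as np

# Monte Carlo check of the final bound E[||U||^2 | U+V in S, V in S] <= k,
# with S = {x : x_1 <= x_2 <= ... <= x_k}. k and n are arbitrary choices.
rng = np.random.default_rng(0)
k, n = 3, 2_000_000

U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

def in_S(x):
    # True for rows with nondecreasing entries
    return np.all(np.diff(x, axis=1) >= 0, axis=1)

mask = in_S(U + V) & in_S(V)
cond_mean = np.mean(np.sum(U[mask] ** 2, axis=1))
print(f"estimate of E[||U||^2 | conditions]: {cond_mean:.3f}, bound k = {k}")
```

Since the covariance in the argument above is strictly negative (the indicator genuinely decreases in $\|U\|$ on a set of positive measure), one should expect the estimate to land strictly below $k$.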