I'm afraid I may be overlooking an obvious answer to this question, but perhaps someone can provide some assistance, as probability is not my area of expertise.
Suppose we have two independent random variables, $X$ and $Y$, with $X$ finite almost surely, but $\mathbb{E}[X]=\infty$ and $\mathbb{E}[Y]<\infty$. I'm trying to understand the quantity $\mathbb{E}[X\mid X<Y]$. In particular, I would like to know if this conditional expectation is finite.
I feel like I should be able to say $\mathbb{E}[X\mid X<Y] < \mathbb{E}[Y]$, which gives the result, but then I got a bit caught up in the details.
Any help, even just a nudge in the right direction, would be greatly appreciated.
Edit: so sorry to have left this out, but $X$ and $Y$ are non-negative random variables.
Since we're not supposed to answer in comments, I'll write this out. Note that this is an example where the measure-theoretic definition of conditional probability is too restrictive and we need to work with the classical definition. Since there seems to be a bit of confusion here, I'll write out more details than I normally would.
On our original probability space $(\Omega,\mathcal{F},\mathbb{P})$, define the indicator function of a measurable set $B \in \mathcal{F}$, for $\omega \in \Omega$, to be
$$ 1_B(\omega) = \begin{cases} 1 & \omega \in B \\ 0 & \omega \notin B \end{cases}. $$
We have $\mathbb{P}(X\geq0) = 1$, $\mathbb{P}(Y\geq0) = 1$, $\mathbb{P}(X<Y)>0$, and $\mathbb{E}[Y]<\infty$. We want to show that under $\tilde{\mathbb{P}}(\cdot) = \mathbb{P}(\cdot \mid X<Y)$ (i.e. on the new space $(\Omega,\mathcal{F},\tilde{\mathbb{P}})$), $X$ has a finite mean. I wrote it this way because it is helpful to think of a classical conditional probability as a completely different probability measure on the original space. It should not be surprising that changing the reference measure can change integrability properties.
We have
$$ \tilde{\mathbb{P}}(A) = \mathbb{P}(A \mid X<Y) = \frac{\mathbb{P}(A \cap \{X<Y\})}{\mathbb{P}(X<Y)} = \frac{\mathbb{E}[1_A1_{\{X<Y\}}]}{\mathbb{P}(X<Y)}. $$

The conditional expectation is similarly given by
$$ \tilde{\mathbb{E}}[X] = \mathbb{E}[X \mid X<Y] = \frac{\mathbb{E}[X1_{\{X<Y\}}]}{\mathbb{P}(X<Y)}. $$

Since $X \geq 0$ and $Y \geq 0$, we have almost surely
$$ 0 \leq X1_{\{X<Y\}} \leq Y 1_{\{X < Y\}} \leq Y. $$

Taking expectations, we see that
$$ 0 \leq \mathbb{E}[X1_{\{X<Y\}}] \leq \mathbb{E}[Y1_{\{X<Y\}}] \leq \mathbb{E}[Y]. $$

Dividing through by $\mathbb{P}(X<Y) > 0$, we conclude
$$ 0 \leq \mathbb{E}[X \mid X<Y] \leq \mathbb{E}[Y \mid X<Y] \leq \frac{\mathbb{E}[Y]}{\mathbb{P}(X<Y)}. $$

Now, you might wonder why we picked up that extra factor of $\mathbb{P}(X<Y)^{-1}$ at the end. Intuitively, we are working on the event where $Y$ exceeds something that can be quite large (since $\mathbb{E}[X] = \infty$ in your original statement). We should not be surprised that $Y$ is typically quite large relative to its usual size if it is larger than something which has an unconditionally infinite mean.
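To see the bound in action, here is a quick Monte Carlo sketch. The distributions are my own illustrative choices, not from the question: I take $X$ to be a Pareto variable with tail index $1$ (so $\mathbb{E}[X]=\infty$ but $X<\infty$ a.s.) and $Y$ exponential with mean $1$, independent of $X$. Note that sampling under $\tilde{\mathbb{P}}$ is just rejection sampling: draw $(X,Y)$ under $\mathbb{P}$ and keep only the draws with $X<Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical choice of distributions (not from the question):
# X = 1 + Lomax(1), i.e. classical Pareto with tail index 1, so
# E[X] = infinity but X < infinity a.s.; Y ~ Exponential(1), so E[Y] = 1.
x = 1.0 + rng.pareto(1.0, size=n)
y = rng.exponential(1.0, size=n)

ind = x < y                  # the indicator 1_{X < Y}
p = ind.mean()               # estimates P(X < Y)

# Sampling under P~ = P(. | X < Y) is rejection sampling: keep exactly
# those draws that land in the conditioning event.
e_x_cond = x[ind].mean()     # estimates E[X | X < Y]
e_y_cond = y[ind].mean()     # estimates E[Y | X < Y]
bound = y.mean() / p         # estimates E[Y] / P(X < Y)

print(f"P(X < Y)      ~ {p:.4f}")
print(f"E[X | X < Y]  ~ {e_x_cond:.4f}")
print(f"E[Y | X < Y]  ~ {e_y_cond:.4f}")
print(f"E[Y]/P(X < Y) ~ {bound:.4f}")
```

Since the pointwise inequality $0 \leq X1_{\{X<Y\}} \leq Y1_{\{X<Y\}} \leq Y$ holds for every draw, the estimated chain $0 \leq \mathbb{E}[X \mid X<Y] \leq \mathbb{E}[Y \mid X<Y] \leq \mathbb{E}[Y]/\mathbb{P}(X<Y)$ holds exactly in the sample, not just in the limit.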