Given a random variable $X$ with CDF $F$, mean $E(X)=0$, and variance $\operatorname{Var}(X)=\sigma^2$, I would like to bound the tail conditional expectation, where $X$ lies in the tail with probability $1-p$:
$$E(X|X\geq F^{-1}(p)) = \frac{1}{1-p}\int_{F^{-1}(p)}^\infty x\,dF(x)$$
A nice way to get an upper bound is via a Chebyshev-type inequality (https://projecteuclid.org/download/pdf_1/euclid.aoms/1177697276), which gives:
$E(X|X\geq F^{-1}(p)) \leq \sigma \sqrt{\frac{p}{1-p}}$
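As a sanity check on this bound, here is a minimal sketch (my own, not from the linked paper) that evaluates both sides for a standard normal $X$ (so $\sigma=1$), using the closed form $E(X|X\ge q_p)=\varphi(q_p)/(1-p)$ for the normal tail conditional expectation:

```python
from math import sqrt
from statistics import NormalDist

# Numerically check the Chebyshev-type upper bound
#   E(X | X >= F^{-1}(p)) <= sigma * sqrt(p / (1 - p))
# for a standard normal X (mean 0, sigma = 1), where the tail
# conditional expectation is phi(q_p) / (1 - p) (Mills ratio).
Z = NormalDist()  # standard normal

for p in (0.5, 0.9, 0.99):
    q = Z.inv_cdf(p)              # the p-quantile F^{-1}(p)
    tce = Z.pdf(q) / (1 - p)      # E(X | X >= q) in closed form
    bound = sqrt(p / (1 - p))     # the Chebyshev-type bound (sigma = 1)
    print(f"p={p}: E(X|X>=q)={tce:.4f} <= bound={bound:.4f}")
```

For $p=0.99$ the bound ($\approx 9.95$) is far from the normal's actual tail mean ($\approx 2.67$), which is expected: the bound holds over all mean-zero, variance-$\sigma^2$ distributions, not just the normal.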
But how can we get a lower bound? Intuitively, since $E(X) = 0$, we must have:
$E(X|X\geq F^{-1}(p)) \geq 0$
But that doesn't use the dispersion of the RV at all. Intuitively, if $\sigma$ is large and $E(X)=0$, the tail conditional expectation should also be large, so a good lower bound should grow with $\sigma$. But I have no idea how to formalize this. Any ideas would be greatly appreciated.
For any $x\in\mathbb R$, we clearly have $E(X|X\ge x)\ge x$. We also have $E(X|X\ge x)\ge0$: this is a consequence of the previous fact if $x\ge0$, and if $x<0$ it follows since $X<x$ on $\{X<x\}$ gives $E(X\mathbf1_{\{X<x\}})\le x\,P(X<x)$, hence $$E(X\mathbf1_{\{X\ge x\}})=E(X)-E(X\mathbf1_{\{X<x\}})\ge-x\,P(X<x)\ge0.$$

In fact, the bound $E\big(X|X\ge F^{-1}(p)\big)\ge F^{-1}(p)\vee0$ is optimal. If $F^{-1}(p)<0$, we can see this easily by considering a random variable $X$ with $P(X=x)=p$ and $P\big(X=\frac{p|x|}{1-p}\big)=1-p$ for some $x<0$. It is straightforward to check that $E(X)=0$ and $x=F^{-1}(p)$, and since $X\ge x$ almost surely, $E(X|X\ge x)=E(X)=0$.
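To make the two-point example concrete, here is a small sketch with the assumed values $p=4/5$ and $x=-1$ (exact rational arithmetic, so the checks are not subject to rounding):

```python
from fractions import Fraction as Fr

# Two-point distribution from the F^{-1}(p) < 0 case, with assumed
# values p = 4/5 and x = -1:
#   P(X = x) = p,   P(X = p|x|/(1-p)) = 1 - p.
p, x = Fr(4, 5), Fr(-1)
y = p * abs(x) / (1 - p)        # the positive atom; here y = 4

mean = p * x + (1 - p) * y      # E(X) = p*x + (1-p)*y
assert mean == 0

# X >= x almost surely, so conditioning on {X >= x} changes nothing:
cond_mean = (p * x + (1 - p) * y) / (p + (1 - p))   # E(X | X >= x)
assert cond_mean == 0
print("E(X) =", mean, " E(X | X >= x) =", cond_mean)
```

So with $F^{-1}(p)=x=-1<0$, the conditional expectation equals the claimed optimal lower bound $F^{-1}(p)\vee0=0$ exactly.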
If $F^{-1}(p)\ge0$, fix $x\ge0,\epsilon>0$ and construct a random variable $X$ satisfying $$P(X=x+\epsilon)=1-p,\quad P(X=x)=\frac p2,\quad P(X=-q)=\frac p2$$ where $q$ is chosen so that $(1-p)(x+\epsilon)+px/2=pq/2$. Again, it is clear that $F^{-1}(p)=x$ and $E(X)=0$, and while you may calculate $E(X|X\ge x)$ explicitly, the key point is that $E(X|X\ge x)\in[x,x+\epsilon]$. Letting $\epsilon\to0$ shows the inequality is optimal.
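The three-point construction can likewise be verified exactly; the sketch below uses the assumed values $p=1/2$, $x=1$, $\epsilon=1/10$ and solves the stated equation for $q$:

```python
from fractions import Fraction as Fr

# Three-point construction from the F^{-1}(p) >= 0 case, with assumed
# values p = 1/2, x = 1, eps = 1/10:
#   P(X = x+eps) = 1-p,  P(X = x) = p/2,  P(X = -q) = p/2.
p, x, eps = Fr(1, 2), Fr(1), Fr(1, 10)

# q solves (1-p)(x+eps) + p*x/2 = p*q/2, so E(X) = 0:
q = (2 * (1 - p) * (x + eps) + p * x) / p

mean = (1 - p) * (x + eps) + (p / 2) * x - (p / 2) * q
assert mean == 0                           # E(X) = 0

# Only the atoms at x and x+eps lie in {X >= x}:
cond = ((p / 2) * x + (1 - p) * (x + eps)) / (p / 2 + (1 - p))
assert x <= cond <= x + eps                # E(X | X >= x) in [x, x+eps]
print("q =", q, " E(X | X >= x) =", cond)
```

Here $E(X|X\ge x)=16/15$, which indeed lies in $[1,\,11/10]$; shrinking $\epsilon$ pushes it down toward $x=F^{-1}(p)$.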