I'm looking for a general lower bound on $\mathbb{P}(X \geq \mathbb{E}(X))$ (or, equivalently, on $\mathbb{P}(X \leq \mathbb{E}(X))$).
My informal (and naive) reasoning so far:
I can think of two ways to make this probability arbitrarily small:
1. Stacking all the probability mass on a single point -- consider, e.g., $X_\epsilon = \begin{cases} 0 &\text{ w.p. }& 1-\epsilon, \\ 1 &\text{ w.p. }& \epsilon. \end{cases}$
2. Taking a small amount of probability mass to infinity -- consider, e.g., for a given random variable $Y$, the random variable $\tilde{Y}_\epsilon$ obtained from $Y$ according to $\tilde{Y}_\epsilon = \begin{cases} Y &\text{ w.p. }& 1-\epsilon, \\ Y + 1/ \epsilon^2 &\text{ w.p. }& \epsilon. \end{cases}$
These examples suggest that the desired lower bound should depend positively on the variance of $X$ (example 1) and negatively on the "diameter" of $X$ (example 2), that is, $$ \operatorname{diam}(X) = \sup \, (\operatorname{supp}(X)) - \inf \, (\operatorname{supp}(X)).$$ (To put it simply, assume that $X \in [a,b]$ almost surely and make the lower bound a function of $b-a$.)
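The first construction is easy to verify numerically. The sketch below (the helper name `two_point_stats` is mine, and exact rational arithmetic via `fractions` avoids rounding issues) computes $\mathbb{E}(X_\epsilon)$, $\text{Var}(X_\epsilon)$ and $\mathbb{P}(X_\epsilon \geq \mathbb{E}(X_\epsilon))$ for shrinking $\epsilon$:

```python
from fractions import Fraction

def two_point_stats(x0, p0, x1, p1):
    """Mean, variance and P(X >= E[X]) for a two-point distribution."""
    mean = p0 * x0 + p1 * x1
    var = p0 * (x0 - mean) ** 2 + p1 * (x1 - mean) ** 2
    p_ge_mean = (p0 if x0 >= mean else 0) + (p1 if x1 >= mean else 0)
    return mean, var, p_ge_mean

# X_eps: 0 w.p. 1-eps, 1 w.p. eps -- the upper tail P(X >= E[X]) equals eps
for k in (10, 100, 1000):
    eps = Fraction(1, k)
    mean, var, p = two_point_stats(0, 1 - eps, 1, eps)
    print(f"eps=1/{k}: mean={mean}, var={var}, P(X>=E[X])={p}")
```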
Are you aware of such a result? If not, do you think that my hopes are justified, or did I miss a counter-example?
Edit: A natural candidate would be $$ \mathbb{P}(X \geq \mathbb{E}(X)) \geq \frac{\text{Var}(X)}{(b-a)^2}. $$ Do you have any specific counter-example for this inequality?
We can adjust your $X_\varepsilon$ example ever so slightly to get a counter-example to your variance suggestion: $$ Z_\varepsilon = \begin{cases} 0 & \text{wp} \ 1 - \varepsilon, \\ 1 / \sqrt \varepsilon & \text{wp} \ \varepsilon. \end{cases} $$ This has $$ \mathbb E(Z_\varepsilon) = \sqrt \varepsilon \quad\text{and}\quad \mathbb V\text{ar}(Z_\varepsilon) = \mathbb E(Z_\varepsilon^2) - \mathbb E(Z_\varepsilon)^2 = 1 - \varepsilon. $$ So, $\mathbb P(Z_\varepsilon > \mathbb E(Z_\varepsilon)) = \varepsilon$ but its variance is approximately $1$. Of course, higher moments are divergently large (as $\varepsilon \to 0$). If you want the $k$-th moment to be bounded in $\varepsilon$, just replace $1/\sqrt\varepsilon$ with $\varepsilon^{-1/k}$.
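These formulas can be sanity-checked numerically; the helper name `z_eps_stats` below is mine, purely for illustration:

```python
import math

def z_eps_stats(eps):
    """Z_eps = 0 w.p. 1-eps, 1/sqrt(eps) w.p. eps."""
    z1 = 1 / math.sqrt(eps)
    mean = eps * z1              # = sqrt(eps)
    var = eps * z1 ** 2 - mean ** 2  # = 1 - eps
    p_gt_mean = eps              # only the atom at z1 exceeds the mean
    return mean, var, p_gt_mean

# the tail probability vanishes while the variance tends to 1
for eps in (1e-2, 1e-4, 1e-6):
    mean, var, p = z_eps_stats(eps)
    print(f"eps={eps:g}: mean={mean:.4f}, var={var:.6f}, P(Z>mean)={p:g}")
```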
This has the property that the only way of being larger than the mean is to be enormously larger than the mean. In particular, it does not obey your second condition of having bounded support---well, for fixed $\varepsilon$ it does, but this support as a function of $\varepsilon$ is not bounded.
Highly related, but not exactly the same, is the Paley–Zygmund inequality: for $Z \ge 0$ and $\theta \in [0,1]$, $$ \mathbb P(Z \ge (1 - \theta) \mathbb E Z) \ge \theta^2 \, \mathbb E(Z)^2 / \mathbb E(Z^2). $$ Obviously, this is useless when $\theta = 0$.
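The inequality $\mathbb P(Z \ge (1-\theta)\mathbb E Z) \ge \theta^2 \, \mathbb E(Z)^2/\mathbb E(Z^2)$ is easy to spot-check on a small non-negative discrete example (helper name mine):

```python
def paley_zygmund_check(atoms, probs, theta):
    """LHS and RHS of P(Z >= (1-theta) E Z) >= theta^2 E(Z)^2 / E(Z^2), Z >= 0."""
    ez = sum(p * z for z, p in zip(atoms, probs))
    ez2 = sum(p * z * z for z, p in zip(atoms, probs))
    lhs = sum(p for z, p in zip(atoms, probs) if z >= (1 - theta) * ez)
    rhs = theta ** 2 * ez ** 2 / ez2
    return lhs, rhs

atoms, probs = [0.0, 0.5, 2.0, 5.0], [0.4, 0.3, 0.2, 0.1]
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs, rhs = paley_zygmund_check(atoms, probs, theta)
    assert lhs >= rhs - 1e-12  # the bound holds for every theta in [0, 1]
```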
Regarding your candidate, which was added after my answer above, I think it can be proved for finitely supported discrete distributions. I expect this can be extended to continuous ones with a density, maybe beyond. Of course, you can rescale to be in $[0,1]$. Also, assume that $Z$ is non-constant, otherwise the variance is $0$ and the inequality trivially holds.
Let $Z$ have law $\sum_{i=1}^k p_i \delta_{z_i}$ with $0 \le z_1 < \cdots < z_k \le 1$ and $\sum_i p_i = 1$. Now, let $\mu = \mathbb E(Z)$. Choose $I$ such that $z_{I-1} < \mu \le z_I$ (such an $I$ exists since $z_1 < \mu \le z_k$ for non-constant $Z$). Then, $\mathbb P(Z \ge \mu) = \sum_{i \ge I} p_i$.
Consider the following adaptation of $Z$, which cannot decrease the variance but does not change the probability of being larger than its mean: move every atom $z_i$ with $i < I$ to $0$ and every atom $z_i$ with $i \ge I$ to $1$, keeping the weights $p_i$. Call the resulting random variable $Z'$ and its mean $\mu'$.
Then, the probability of being larger than the mean is unchanged: $$\textstyle \mathbb P(Z' \ge \mu') = \sum_{i \ge I} p_i = \mathbb P(Z \ge \mu).$$ It shouldn't be so hard to check that the variance cannot decrease, as the mass is pushed further apart: $$\mathbb V\text{ar}(Z') \ge \mathbb V\text{ar}(Z).$$
Now, $Z'$ is just a $\{0,1\}$-valued random variable. It has mean $p$ and variance $p(1-p)$, where $p := \sum_{i \ge I} p_i$. Hence, $$ \mathbb V\text{ar}(Z) \le \mathbb V\text{ar}(Z') = p(1-p) \le p = \mathbb P(Z' \ge \mu') = \mathbb P(Z \ge \mu). $$
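The chain $\mathbb V\text{ar}(Z) \le p(1-p) \le p = \mathbb P(Z \ge \mu)$ can be spot-checked on random finitely supported distributions in $[0,1]$; the sketch below (names mine) does exactly that:

```python
import random

def check_bound(atoms, probs):
    """For Z supported in [0,1]: return Var(Z), Var(Z') = p(1-p), and p = P(Z >= mu)."""
    mu = sum(p * z for z, p in zip(atoms, probs))
    var = sum(p * (z - mu) ** 2 for z, p in zip(atoms, probs))
    p_ge = sum(p for z, p in zip(atoms, probs) if z >= mu)
    var_prime = p_ge * (1 - p_ge)  # variance of the {0,1}-valued Z'
    return var, var_prime, p_ge

rng = random.Random(0)
for _ in range(10_000):
    k = rng.randint(2, 6)
    atoms = sorted(rng.random() for _ in range(k))
    weights = [rng.random() for _ in range(k)]
    total = sum(weights)
    probs = [w / total for w in weights]
    var, var_prime, p_ge = check_bound(atoms, probs)
    # Var(Z) <= Var(Z') = p(1-p) <= p, up to floating-point slack
    assert var <= var_prime + 1e-12 and var_prime <= p_ge + 1e-12
```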
One should be able to approximate distributions with a density by such finitely supported discrete random variables and take limits.
This inequality is tight up to a multiplicative constant, as can be seen by taking a $\operatorname{Bern}(p)$ random variable and letting $p \to 0$. This is natural, because this random variable is unaffected by the above procedure. So, the only inequality which is not tight is $p(1-p) \le p$. But, there does not exist a constant $c < 1$ such that $p(1-p) \le c p$ for all $p > 0$.
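For $\operatorname{Bern}(p)$ the ratio $\mathbb P(X \ge \mathbb E X)/\mathbb V\text{ar}(X) = p/(p(1-p)) = 1/(1-p)$ tends to $1$ as $p \to 0$, which the short table below (helper name mine) illustrates:

```python
def bern_ratio(p):
    """Ratio P(X >= E X) / Var(X) for X ~ Bern(p), 0 < p < 1."""
    prob_ge_mean = p       # only the atom at 1 is >= the mean p
    var = p * (1 - p)
    return prob_ge_mean / var  # = 1 / (1 - p)

for p in (0.5, 0.1, 0.01, 0.001):
    print(f"p={p}: P(X>=EX)/Var(X) = {bern_ratio(p):.6f}")
```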