Consider a bounded random variable $X:\Omega\to[0,1]$. Does there always exist a transformation $T:[0,1]\to \{0, 1\}$ such that $V(T(X))\geq V(X)$?
Remarks:
$T(X)$ will be a Bernoulli variable with the probability mass pushed to the two endpoints. Intuitively, it should be possible to do this in a way that increases the variance.
A possible transformation would be $$ T(X)=\begin{cases}1 &\text{if } X\geq m \\ 0 &\text{otherwise}\end{cases} $$ with $m=E[X]$.
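As a quick numerical sanity check of this candidate $T$ (the discrete distribution below is an arbitrary example, not taken from the question):

```python
# Check the proposed mean-threshold transformation T(X) = 1[X >= E[X]]
# on an example discrete distribution (values/probabilities are illustrative).
vals = [0.0, 0.4, 0.5, 1.0]
probs = [0.1, 0.4, 0.4, 0.1]

mean = sum(p * v for v, p in zip(vals, probs))
var_X = sum(p * (v - mean) ** 2 for v, p in zip(vals, probs))

# T(X) is Bernoulli(q) with q = P(X >= E[X]); its variance is q(1 - q).
q = sum(p for v, p in zip(vals, probs) if v >= mean)
var_TX = q * (1 - q)

print(var_TX >= var_X)  # expected: True
```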
I initially thought that choosing $m$ to be the median of $X$ would solve the problem. However, this only works for continuous variables; a counterexample for discrete variables can be found here: Transformation increasing variance of bounded random variable
I would give partial credit for showing that there exists a constant $c>0$ such that for all $X:\Omega\to[0,1]$ there exists $T:[0, 1]\to\{0,1\}$ such that $V(T(X))\geq c\cdot V(X)$.
Consider first the intermediate transformation $$ T_\lambda(X) = \begin{cases} \lambda X & \text{if } X \leq \mu\\ X & \text{otherwise} \end{cases} $$ where $\mu = E[X]$: it moves the values at or below the mean closer to $0$ and leaves the values above the mean unchanged.
Let $\mu_\lambda = E[T_\lambda(X)] = \lambda\int_{X\leq\mu} X \,dP + \int_{X> \mu} X \,dP = \mu - (1-\lambda)\int_{X\leq \mu} X \,dP = \mu - (1-\lambda)a$, where $a = \int_{X\leq \mu} X \,dP\in[0,\mu]$.
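The formula for $\mu_\lambda$ can be checked numerically; the distribution and the value of $\lambda$ below are illustrative:

```python
# Check E[T_lambda(X)] = mu - (1 - lambda) * a on an example distribution.
vals = [0.1, 0.3, 0.6, 0.9]
probs = [0.25, 0.25, 0.25, 0.25]
lam = 0.4

mu = sum(p * v for v, p in zip(vals, probs))
a = sum(p * v for v, p in zip(vals, probs) if v <= mu)

# Direct expectation of T_lambda(X): lam*X on {X <= mu}, X elsewhere.
mu_lam = sum(p * (lam * v if v <= mu else v) for v, p in zip(vals, probs))

print(abs(mu_lam - (mu - (1 - lam) * a)) < 1e-12)  # expected: True
```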
The variance is $V(T_\lambda(X)) = \int_{x \leq \mu} (\lambda x -\mu_\lambda)^2 \,dP + \int_{x> \mu} (x - \mu_\lambda)^2 \,dP$. Taking the derivative (note $\frac{\partial \mu_\lambda}{\partial \lambda} = a$), \begin{align*} \frac{\partial}{\partial \lambda} V(T_\lambda(X)) &= \frac{\partial}{\partial \lambda} \int_{x \leq \mu} (\lambda x -\mu_\lambda)^2 \,dP +\frac{\partial}{\partial \lambda} \int_{x> \mu} (x - \mu_\lambda)^2 \,dP\\ &= \int_{x \leq \mu} \frac{\partial}{\partial \lambda} (\lambda x - \mu_\lambda)^2 \,dP + \int_{x> \mu}\frac{\partial}{\partial \lambda} (x - \mu_\lambda)^2 \,dP\\ &= \int_{x \leq \mu} 2(x-a)(\lambda x - \mu_\lambda) \,dP + \int_{x> \mu}2(-a)(x - \mu_\lambda) \,dP\\ &= 2\int_{x \leq \mu} x(\lambda x - \mu_\lambda) \,dP -2a\left(\int_{x \leq \mu} (\lambda x - \mu_\lambda) \,dP + \int_{x> \mu} (x - \mu_\lambda) \,dP\right)\\ &= 2\int_{x \leq \mu} x(\lambda x - \mu_\lambda) \,dP -2a\int (T_\lambda(x) - \mu_\lambda) \,dP\\ &= 2\int_{x \leq \mu} x(\lambda x - \mu_\lambda) \,dP. \end{align*}

This derivative is $\leq 0$: since $a \leq \mu$, we have $\mu_\lambda = \mu - (1-\lambda)a \geq \mu - (1-\lambda)\mu = \lambda\mu$, so $\lambda x - \mu_\lambda \leq \lambda (x - \mu) \leq 0$ for $x \leq \mu$, while $x \geq 0$. Therefore $V(X) = V(T_1(X)) \leq V(T_0(X))$: the variance does not decrease if we send the leftmost "half" of $X$ (the values at or below the mean) to $0$.

To finish, apply the same argument to the reflected variable $1 - T_0(X)$, which again takes values in $[0,1]$ and has the same variance as $T_0(X)$. Writing $T_0'$ for the analogous transformation built from the mean of $1 - T_0(X)$, we get $$V(T_0(X)) = V(1 - T_0(X)) \leq V(T_0'(1 - T_0(X))).$$ Unless $X$ is a.s. constant (in which case the claim is trivial), $T_0'(1 - T_0(X)) = \mathbf{1}_{\{X \leq \mu\}}$, a Bernoulli variable, whose variance equals that of its complement $\mathbf{1}_{\{X > \mu\}}$. Hence $T(X) = \mathbf{1}_{\{X > \mu\}}$ satisfies $V(T(X)) \geq V(X)$.
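Both steps can be illustrated numerically (the discrete distribution below is an arbitrary example, chosen for the demo): $V(T_\lambda(X))$ should be non-increasing in $\lambda$, and the Bernoulli variable $\mathbf{1}_{\{X > \mu\}}$ should have variance at least $V(X)$.

```python
# Numerical illustration of the argument above on an example discrete
# distribution (values and probabilities are arbitrary).
vals = [0.0, 0.2, 0.5, 0.7, 1.0]
probs = [0.1, 0.3, 0.2, 0.3, 0.1]

mu = sum(p * v for v, p in zip(vals, probs))
var_X = sum(p * (v - mu) ** 2 for v, p in zip(vals, probs))

def var_T(lam):
    """Variance of T_lambda(X): lam*X on {X <= mu}, X elsewhere."""
    tv = [lam * v if v <= mu else v for v in vals]
    m = sum(p * t for t, p in zip(tv, probs))
    return sum(p * (t - m) ** 2 for t, p in zip(tv, probs))

# Step 1: V(T_lambda(X)) is non-increasing along a lambda grid.
vs = [var_T(i / 100) for i in range(101)]
monotone = all(vs[i] >= vs[i + 1] - 1e-12 for i in range(100))

# Step 2: the Bernoulli 1[X > mu] has at least the original variance.
q = sum(p for v, p in zip(vals, probs) if v > mu)

print(monotone and q * (1 - q) >= var_X)  # expected: True
```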