We start with two equivalent probability measures $P_0\sim P_1$ on $(\Omega, \mathcal{F})$, and we define the mixture measure $P_\alpha=\alpha P_0+(1-\alpha) P_1$ with $\alpha\in (0,1)$, which is equivalent to both.
Let $X$ be an integrable random variable mapping to $(\mathbb{R},\mathcal{B})$, and $\mathcal{G}\subset\mathcal{F}$ be a sub-$\sigma$-algebra. Does there exist a $\beta\in(0,1)$ for which $$\mathbb{E}_{P_\alpha}[X\mid\mathcal{G}]=\beta\mathbb{E}_{P_0}[X\mid \mathcal{G}]+(1-\beta)\mathbb{E}_{P_1}[X\mid\mathcal{G}]?$$
My guess would be $\beta=\alpha$, which would be nice, but I can't prove it. This comes from a finance-related problem I am about to present; it is not homework.
I tried using this identity: $$\mathbb{E}_P(X\mid\mathcal{G})\mathbb{E}_Q\left(\frac{dP}{dQ}\mid\mathcal{G}\right)=\mathbb{E}_Q\left(\frac{dP}{dQ}X\mid\mathcal{G}\right),$$ but I only got $$\beta=1-(1-\alpha)\mathbb{E}_{P_\alpha}\left(\frac{dP_1}{dP_\alpha}\mid\mathcal{G}\right),$$ which would actually be fine for my application, but it is ugly and I'm afraid I made a mistake. (Note that this $\beta$ is a $\mathcal{G}$-measurable random variable rather than a constant.) It equals my guess iff $\mathbb{E}_{P_\alpha}\left(\frac{dP_1}{dP_\alpha}\mid\mathcal{G}\right)=1$, which I don't think is true in general; $\frac{dP_1}{dP_\alpha}$ being independent of $\mathcal{G}$ would imply it.
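As a sanity check, here is a small finite example (the specific $P_0$, $P_1$, $X$ and $\alpha$ are arbitrary illustrative choices) confirming the formula above, and showing that the resulting $\beta$ genuinely varies across the atoms of $\mathcal G$:

```python
from fractions import Fraction as F

# Four-point space; G is generated by the partition {{0,1}, {2,3}}.
# P0, P1, X and alpha below are arbitrary illustrative choices.
omega = [0, 1, 2, 3]
blocks = [{0, 1}, {2, 3}]
P0 = {0: F(1, 2), 1: F(1, 4), 2: F(1, 8), 3: F(1, 8)}
P1 = {0: F(1, 8), 1: F(1, 8), 2: F(1, 4), 3: F(1, 2)}
X = {0: 1, 1: 2, 2: 3, 3: 4}
alpha = F(1, 3)
Pa = {w: alpha * P0[w] + (1 - alpha) * P1[w] for w in omega}  # P_alpha

def cond_exp(P, B):
    """E_P[X | B] for an atom B of the partition generating G."""
    return sum(P[w] * X[w] for w in B) / sum(P[w] for w in B)

betas = []
for B in blocks:
    # beta = 1 - (1 - alpha) * E_{P_alpha}[dP1/dP_alpha | G],
    # which on the atom B reduces to 1 - (1 - alpha) * P1(B) / P_alpha(B).
    beta = 1 - (1 - alpha) * sum(P1[w] for w in B) / sum(Pa[w] for w in B)
    betas.append(beta)
    lhs = cond_exp(Pa, B)
    rhs = beta * cond_exp(P0, B) + (1 - beta) * cond_exp(P1, B)
    assert lhs == rhs  # the derived formula holds on this atom

print(betas)  # beta differs across the two atoms
```

Here $E_0(X\mid B_n)\ne E_1(X\mid B_n)$ on both atoms, so the weight on each atom is uniquely determined; since the two weights differ, no constant $\beta$ can work in this example.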
Any help would be appreciated, thanks!
For the sake of notational coherence, let us rather define $P_a=aP_1+(1-a)P_0$ for every $a$ in $[0,1]$, and $E_a$ as the expectation with respect to $P_a$. You are then asking whether, for a given $a$ in $(0,1)$, there exists some $b$ such that, for every integrable random variable $X$, $$E_a(X\mid\mathcal G)=bE_1(X\mid\mathcal G)+(1-b)E_0(X\mid\mathcal G)\tag{$\ast$}$$

To get a feeling for what $(\ast)$ entails, let us assume for simplicity that $\mathcal G$ is generated by some partition $(B_n)$ with $P_0(B_n)P_1(B_n)\ne0$ for every $n$. Then, by definition, $$E_a(X\mid\mathcal G)=\sum_nE_a(X\mid B_n)\mathbf 1_{B_n},$$ thus $(\ast)$ would imply $$E_a(X\mid B_n)=bE_1(X\mid B_n)+(1-b)E_0(X\mid B_n)$$ for every $n$, that is, $$\frac{aE_1(X\mathbf 1_{B_n})+(1-a)E_0(X\mathbf 1_{B_n})}{P_a(B_n)}=b\frac{E_1(X\mathbf 1_{B_n})}{P_1(B_n)}+(1-b)\frac{E_0(X\mathbf 1_{B_n})}{P_0(B_n)}.$$

This holds for every integrable $X$ if and only if $$\frac{a\mathbf 1_{B_n}P_1+(1-a)\mathbf 1_{B_n}P_0}{P_a(B_n)}=b\frac{\mathbf 1_{B_n}P_1}{P_1(B_n)}+(1-b)\frac{\mathbf 1_{B_n}P_0}{P_0(B_n)},$$ that is, on each $B_n$, $P_1$ and $P_0$ are proportional, with $$\left(a-b\frac{P_a(B_n)}{P_1(B_n)}\right)P_1=\left((1-b)\frac{P_a(B_n)}{P_0(B_n)}-(1-a)\right)P_0$$

In the general case, this corresponds to the hypothesis that $$dP_1=Y\cdot dP_0$$ for some $\mathcal G$-measurable random variable $Y$ with $P_0(Y>0)=1$ (since one assumed that $P_0\sim P_1$). Conversely, if this holds, then $E_0(X\mid\mathcal G)=E_1(X\mid\mathcal G)$, hence $(\ast)$ holds.
To sum up, the identity you are interested in does not hold in general, but it does hold when the Radon–Nikodym derivative of $P_1$ with respect to $P_0$ is $\mathcal G$-measurable.
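The sufficiency direction can be checked numerically on a finite space (a minimal sketch; the specific $P_0$, $Y$ and $X$ are illustrative choices): when $Y=dP_1/dP_0$ is constant on each atom of the partition generating $\mathcal G$, the three conditional expectations coincide, so $(\ast)$ holds for every $b$.

```python
from fractions import Fraction as F

# Four-point space; G is generated by the partition {{0,1}, {2,3}}.
# The key point: Y = dP1/dP0 is constant on each atom, i.e. G-measurable.
omega = [0, 1, 2, 3]
blocks = [{0, 1}, {2, 3}]
P0 = {0: F(1, 2), 1: F(1, 4), 2: F(1, 8), 3: F(1, 8)}
Y = {0: F(2, 3), 1: F(2, 3), 2: F(2), 3: F(2)}
P1 = {w: Y[w] * P0[w] for w in omega}
assert sum(P1.values()) == 1  # Y was normalized so that P1 is a probability
X = {0: 1, 1: 2, 2: 3, 3: 4}

def cond_exp(P, B):
    """E_P[X | B] for an atom B of the partition generating G."""
    return sum(P[w] * X[w] for w in B) / sum(P[w] for w in B)

# With Y G-measurable, E_a(X|G) = E_0(X|G) = E_1(X|G) for every a,
# so (*) holds for any choice of b.
for a in [F(1, 4), F(1, 2), F(3, 4)]:
    Pa = {w: a * P1[w] + (1 - a) * P0[w] for w in omega}
    for B in blocks:
        assert cond_exp(Pa, B) == cond_exp(P0, B) == cond_exp(P1, B)
print("ok")
```

On each atom $B_n$, $dP_a/dP_0 = aY+(1-a)$ is constant, so $P_a$ is proportional to $P_0$ (and to $P_1$) there, which is exactly why the conditional expectations agree.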