Re-phrased question and edit:
I guess people found the question confusing, so following the suggestion of @Henry, this is an attempt to rephrase it.
So there are $n$ biased coins $x \in \{1,2,\dots,n\}$, with outcomes head $H$ and tail $T$. At each trial $i$, one of the $n$ biased coins is chosen.
There are three kinds of probabilities in this setting. First are $P_x(H)$ and $P_x(T)$, which are independent of the trial $i$ and depend only on the coin $x$. That is, once biased coin $x$ is chosen, the probability of obtaining $H$ or $T$ does not change. Assume that $P_x(H), P_x(T) > 0$ for every $x$.
Next, we have $P_{i+1}(x)$, the probability that coin $x$ is chosen at trial $i+1$. This value depends on both the trial and the coin, and is determined by whether head or tail is obtained at trial $i$: $$P_{i+1,i/H}(x) = \frac{P_i(x)P_x(H)}{P_i(H)}$$ $$P_{i+1,i/T}(x) = \frac{P_i(x)P_x(T)}{P_i(T)}$$ Here $i/H$ means that head was obtained at trial $i$, and similarly for $i/T$.
The above uses $P_i(H)$ and $P_i(T)$, the probabilities that head or tail, respectively, is obtained at trial $i$. They are given by: $$P_i(H) = \sum_{x=1}^n P_i(x)P_x(H)$$ $$P_i(T) = 1-P_i(H) = \sum_{x=1}^n P_i(x)P_x(T)$$
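To make the update rule concrete, here is a minimal sketch of a single trial (my own code; the function name `update` and the list-based representation are my choices, not from any source): compute $P_i(H)$, draw the outcome, and form $P_{i+1}(x)$ by the appropriate formula.

```python
import random

def update(p_prior, p_heads):
    """One trial: compute P_i(H), draw head/tail with that probability,
    and return the updated distribution P_{i+1}(x) plus the outcome.
    Here p_prior[x] = P_i(x) and p_heads[x] = P_x(H)."""
    p_H = sum(px * ph for px, ph in zip(p_prior, p_heads))  # P_i(H)
    heads = random.random() < p_H
    if heads:   # head at trial i: P_{i+1}(x) = P_i(x) P_x(H) / P_i(H)
        return [px * ph / p_H for px, ph in zip(p_prior, p_heads)], True
    # tail at trial i: P_{i+1}(x) = P_i(x) P_x(T) / P_i(T)
    return [px * (1 - ph) / (1 - p_H) for px, ph in zip(p_prior, p_heads)], False
```

Note that the coin identity never needs to be sampled explicitly: since $P_i(H) = \sum_x P_i(x)P_x(H)$, drawing the outcome directly with probability $P_i(H)$ gives the same law.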
The question, then, is how to compute ("efficiently") the average, or expectation, of $P_i(x)$, denoted $E[P_i(x)]$, for some given value of $i$; and, additionally, how to compute its variance.
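Absent a closed form, one straightforward way to estimate $E[P_i(x=1)]$ and its variance is Monte Carlo: simulate many independent runs of the process above and take sample statistics of $P_i(x=1)$. A sketch, with all names my own:

```python
import random

def simulate(p_init, p_heads, n_trials, rng):
    """Run n_trials updates starting from P_1(x) = p_init; return final P(x).
    p_heads[x] = P_x(H)."""
    p = list(p_init)
    for _ in range(n_trials):
        p_H = sum(px * ph for px, ph in zip(p, p_heads))  # P_i(H)
        if rng.random() < p_H:                            # head at trial i
            p = [px * ph / p_H for px, ph in zip(p, p_heads)]
        else:                                             # tail at trial i
            p = [px * (1 - ph) / (1 - p_H) for px, ph in zip(p, p_heads)]
    return p

def mean_var(p_init, p_heads, n_trials, n_runs, seed=0):
    """Sample mean and (unbiased) sample variance of P(x=1) after n_trials."""
    rng = random.Random(seed)
    samples = [simulate(p_init, p_heads, n_trials, rng)[0] for _ in range(n_runs)]
    m = sum(samples) / n_runs
    v = sum((s - m) ** 2 for s in samples) / (n_runs - 1)
    return m, v
```

For example, `mean_var([0.3, 0.7], [0.8, 0.2], n_trials=5, n_runs=20000)` returns a sample mean close to the initial value $P_1(x=1) = 0.3$, which is one quick empirical check on any conjectured formula for $E[P_i(x=1)]$.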
Possibly, we would additionally wish to prove or disprove that $E[P_i(x=1)] \geq P_{i=1}(x=1)$ for large $i \approx \infty$, given that $P_{i=1}(x=1) \approx 1$ but $P_{i=1}(x=1) < 1$. Here $i=1$ refers to the first trial.
Now the original question:
Let the probability of a coin toss be determined at each trial $i$ as follows. The probability of head at trial $i$, $P_i(H)$, is given by: $$P_i(H) = \sum_{x=1}^n P_i(x)P_x(H)$$ Then $$P_i(T) = 1-P_i(H)$$ where $P_x(H)+P_x(T) = 1$ for every $x$ and $\sum_{x=1}^n P_i(x) = 1$. Furthermore, if head comes up at trial $i$, $P_{i+1}(x)$ evolves as: $$P_{i+1,i/H}(x) = \frac{P_i(x)P_x(H)}{P_i(H)}$$ and if tail comes up at trial $i$, $P_{i+1}(x)$ evolves as: $$P_{i+1,i/T}(x) = \frac{P_i(x)P_x(T)}{P_i(T)}$$
Now I would like to prove that if $P_{i=1}(x=1) \approx 1$ though not equal to $1$, then regardless of the values $P_x(H)$, the average satisfies $E[P_i(x=1)] \geq P_{i=1}(x=1)$ for a large number of trials $i$, with $E[\cdot]$ understood as the average over realizations of the process. I would also like to find a good way to calculate the mean and variance of $P_i(x=1)$.
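Before attempting a proof, the conjecture can at least be probed numerically. The following self-contained sketch (my own code; all names and the particular parameter values are my choices, not from any source) starts with $P_1(x=1)$ close to but below $1$, runs many trials, and compares the Monte Carlo average of $P_i(x=1)$ against $P_1(x=1)$:

```python
import random

def run_once(p, p_heads, n_trials, rng):
    """Simulate n_trials updates of P_i(x) and return the final P(x=1)."""
    for _ in range(n_trials):
        p_H = sum(px * ph for px, ph in zip(p, p_heads))  # P_i(H)
        if rng.random() < p_H:                            # head at trial i
            p = [px * ph / p_H for px, ph in zip(p, p_heads)]
        else:                                             # tail at trial i
            p = [px * (1 - ph) / (1 - p_H) for px, ph in zip(p, p_heads)]
    return p[0]

rng = random.Random(1)
p_init = [0.99, 0.01]    # P_1(x=1) close to 1 but < 1, as in the conjecture
p_heads = [0.7, 0.3]     # an arbitrary choice of P_x(H)
avg = sum(run_once(p_init, p_heads, 50, rng) for _ in range(10000)) / 10000
print(avg)  # compare with P_1(x=1) = p_init[0] = 0.99
```

Comparing `avg` against `p_init[0]` over several parameter choices gives a quick sanity check on whether $E[P_i(x=1)] \geq P_{i=1}(x=1)$ can hold with strict inequality, or whether the average simply stays at the initial value.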
I believe I have seen this in an economics paper, but I couldn't find the source, hence the question. I think it was an example of Markov chains or something like that. Or maybe a curse of gambles.
Edit: also assume that $P_x(H), P_x(T) \neq 0$ for every $x$.
Another edit: I may be wrong about the assertion, so I would just like to know a good method to calculate the average $E[P_i(x=1)]$ for some $i$.