We know the frequentist definition of probability: the probability $p$ of an event $E$ is the limiting relative frequency with which $E$ occurs when the associated random experiment is repeated a large number of times. In this definition, $p$ is a fixed constant for every event $E$.
Now I wonder what we are to make of this definition if $p$ changes its value on every trial. To keep things simple, let's imagine a 'magical' universe in which the probability of a coin landing heads is $|\sin{X}|$, where $X$ is a uniformly and independently generated random variable in the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$.
How can we reasonably define such a notion of probability?
Although it is perfectly reasonable to have a time-varying probability distribution in a problem, if the probability depends on nothing but some other independent random variable, that randomness can always be absorbed (marginalized out), so the process behaves as if it simply had a different fixed probability.
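Concretely, if each trial's heads-probability is $p(X)$ for an independent draw of $X$ with density $f$, then the marginal probability of heads is

$$\Pr(\text{heads}) = \mathbb{E}[\,p(X)\,] = \int p(x)\,f(x)\,dx,$$

which is a fixed constant, so the frequentist limiting frequency still exists and equals this value.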
In the example you give, the marginal probability of heads is $\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} |\sin x|\,dx = 2/\pi$, and although that is not $1/2$, it is a perfectly ordinary probability.
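A quick Monte Carlo check of this claim (a sketch, not part of the original answer — the function name `simulate` and the trial count are my own choices):

```python
import math
import random

def simulate(n_trials=1_000_000, seed=0):
    """Estimate P(heads) when each toss has heads-probability |sin(X)|,
    with X ~ Uniform(-pi/2, pi/2) drawn independently on every toss."""
    rng = random.Random(seed)
    heads = 0
    for _ in range(n_trials):
        x = rng.uniform(-math.pi / 2, math.pi / 2)
        p = abs(math.sin(x))          # this toss's "magical" probability
        if rng.random() < p:
            heads += 1
    return heads / n_trials

estimate = simulate()
print(estimate, 2 / math.pi)  # the estimate should be close to 2/pi ≈ 0.6366
```

The long-run frequency of heads converges to $2/\pi$, exactly as the marginalization argument predicts.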
Things get more interesting if you say something like "the probability of heads is magically $1/2$ minus a tenth of the excess of heads over tails in the last ten tosses." But even that is just a Markov process (with the state being the sequence of the last ten toss results).
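A minimal sketch of that self-referential coin, assuming the stated rule and clamping the probability into $[0,1]$ (the excess can reach $\pm 10$, so the raw formula can leave the unit interval — the clamp is my addition, as is the function name):

```python
import random
from collections import deque

def self_referential_coin(n_tosses=100_000, seed=1):
    """Toss a coin whose heads-probability is 1/2 minus a tenth of
    (heads - tails) among the last ten tosses; the deque of the last
    ten results is exactly the Markov state mentioned in the answer."""
    rng = random.Random(seed)
    last_ten = deque(maxlen=10)   # the Markov state
    heads_total = 0
    for _ in range(n_tosses):
        excess = sum(1 if h else -1 for h in last_ten)   # heads minus tails
        p = min(1.0, max(0.0, 0.5 - excess / 10))        # clamp into [0, 1]
        h = rng.random() < p
        last_ten.append(h)
        heads_total += h
    return heads_total / n_tosses

frac = self_referential_coin()
```

Because the feedback is negative (an excess of heads lowers the heads-probability), the long-run fraction of heads still hovers near $1/2$ — the chain has a stationary distribution even though no single toss has probability $1/2$.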