$2$ people decide to count wins in a card game using a well-shuffled standard $52$-card deck. Community (shared) cards are drawn one at a time from the deck without replacement until there is a winner for the hand. Initially, player A wins if $4$ triples appear (such as $KKK, 444, AAA, 777$), and player B wins if a single quad appears (such as $QQQQ$). As soon as there is a winner, the hand is finished, the win is awarded, all drawn cards are returned to the deck, the deck is reshuffled well, and the next hand is drawn. There is one twist however: if B wins a hand, the next hand will have a lower win threshold for A. For example, initially the win threshold for A is $4$ triples, but if B wins the first hand, the new threshold will be $3$ triples. Conversely, each time A wins, A's threshold is increased by $1$, so if A wins the first hand, A's new win threshold will be $5$ triples. The minimum number of required triples is $1$. No more than $39$ cards will ever need to be drawn to determine a winner: even if $13$ triples are required, a quad-free draw of $39$ cards must contain exactly $3$ cards of every rank, handing A all $13$ triples (though in practice a quad would likely appear long before then).
Note that the triples and quads need not be consecutive. For example, in the sequence $Q,2,Q,4,Q,7,A,10,Q$ the last card completes a quad of queens. The requirement is not $4$ like ranks in a row; that would also count as a quad, but it is very unlikely.
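To make the win condition precise, here is a small sketch of a checker for a drawn sequence (this helper and its name are my own illustration, not part of the question):

```python
from collections import Counter

def first_event(cards, n_triples):
    """Scan a drawn sequence in order. Return 'A' if n_triples distinct
    triples complete before any quad, 'B' if a quad completes first,
    or None if the sequence ends with no winner."""
    counts = Counter()
    triples = 0
    for card in cards:
        counts[card] += 1
        if counts[card] == 4:        # fourth copy of a rank: a quad
            return "B"
        if counts[card] == 3:        # third copy of a rank: a new triple
            triples += 1
            if triples == n_triples:
                return "A"
    return None
```

On the example above, `first_event("Q 2 Q 4 Q 7 A 10 Q".split(), 4)` reports a quad (a win for B), while with a threshold of $1$ the queens' triple completes first and A wins.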
I am thinking that since the difficulty of A winning varies based on who won the previous hand(s), the game should reach some type of equilibrium where each player is about equally likely to win in the long run. How can I show this mathematically?
I ran a computer simulation and it looks like my initial hypothesis is right: there seems to be an equal chance for A or B to win on average. Even with as few as $10$ trials ($10$ winning hands), I mostly see $5$ wins for each, sometimes $4$ vs. $6$, but I have not yet seen $3$ vs. $7$, so it seems to hit equilibrium VERY quickly. Since the number of required trials (wins) is so low, you could probably try this with a real deck of cards and confirm it is about $50/50$ with as few as $10$ winning hands. Just remember to update the winning threshold for A properly (for example, $4, 5, 4, 3, \ldots$). Unlike a fair coin toss, where $8$, $9$, or even $10$ heads out of $10$ are possible, it seems almost impossible for counts that lopsided to occur in this type of "self-adjusting" game.
So how can it be shown mathematically that this type of game will reach equilibrium where either player has about the same chance to win overall?
If you run a computer simulation of this, it is somewhat amazing how often it hits exactly half and half. For example, if I run $1000$ decisions, I usually get $500$ A wins and $500$ B wins, and it is very consistent even though I use different random numbers each run. This is far more predictable than something like fair coin flips, which can get off to a "rocky" start. I think you can call this type of game "self-adjusting" in that it reaches a "fair equilibrium" very quickly.
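For reference, here is a minimal Python sketch of the kind of simulation described (the poster's actual code isn't shown; the function and parameter names are mine):

```python
import random
from collections import Counter

def simulate(hands=1000, start_threshold=4, seed=None):
    """Play `hands` complete hands, adjusting A's triple threshold
    after every hand, and tally the winners."""
    rng = random.Random(seed)
    n = start_threshold
    wins = Counter()
    for _ in range(hands):
        deck = [rank for rank in range(13) for _ in range(4)]
        rng.shuffle(deck)
        seen = Counter()
        triples = 0
        winner = "B"
        for card in deck:
            seen[card] += 1
            if seen[card] == 4:      # a quad appears: B wins the hand
                winner = "B"
                break
            if seen[card] == 3:      # A collects another triple
                triples += 1
                if triples == n:
                    winner = "A"
                    break
        wins[winner] += 1
        # Threshold adjusts: harder after an A win, easier after a B win.
        n = n + 1 if winner == "A" else max(1, n - 1)
    # Note: (#A wins - #B wins) always equals (final threshold - start),
    # which is why the two totals can never drift far apart.
    return wins
```

The closing comment explains the uncanny consistency: every A win raises the threshold by $1$ and every B win lowers it by $1$ (the floor at $1$ is never actually hit, since A always wins at threshold $1$), so the difference in win counts equals the net movement of the threshold, which is bounded.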
This seems like a hard problem to state mathematically, so I put a bounty of $100$ points on it as an extra incentive for those of you who want to try to solve it. If $2$ different users submit good answers, what I usually do is give one person the checkmark and the other the bounty, to be fair to both. Good luck.
By drawing a picture, you can get a really intuitive solution. Roughly speaking, yes, you'll converge toward an equal probability of A and B winning in this game, and in any game sharing its two key properties: every hand produces exactly one winner (a zero-sum property), and the difficulty shifts against whoever won the last hand (a self-correcting property).
I like to think of the game as being played like this: A and B shuffle the deck fairly, then spread out all the cards in order and see whether A or B would have won if they had drawn the cards one by one. Every time they shuffle the deck, there is a clear winner: either A or B. (There is never a case where they both win, for example.)
Let $n$ denote the game's current difficulty level for $A$, meaning the number of triples that $A$ must obtain in order to win. Let $p_n$ be the probability that a fairly shuffled deck yields a win for A: a deck where A will find $n$ triples before B finds a single quad.
Of course, the more difficult the game, the harder it is for $A$ to win: $p_1 > p_2 > p_3 > \ldots \geq 0$. (This is true because every deck that's a win for A at some high level of difficulty is also a win for A at each lower level of difficulty: the win conditions are nested.)
And we know that $p_1 = 1$: A will always find a single triple before B finds a single quad, because any rank must reach three of a kind before it can reach four of a kind. We also know that A can never win when $n=14$, because a $52$-card deck has only $13$ ranks and hence at most $13$ triples, so $p_{14} = 0$.
Now we can describe an abstract diagram of the game's possibilities, a chain of nodes connected by arrows:
There is a node for each of the difficulty levels $n=1, 2, 3, $ and so on. Each node represents the game played at a different difficulty level for A. There is an arrow from each node to the one after it, representing the fact that the game can get more difficult if A wins. There is an arrow from each node to the one before it, representing the fact that the game can get easier if A loses. (As a special case, there is an arrow from $n=1$ to itself since we don't let the game get easier than that.)
Each of the arrows is labeled with the probability of making that particular transition. The rightward arrows are labeled with the probability of $A$ winning; the leftward arrows are labeled with the probability of $A$ losing. (Because of the zero-sum property, if the probability of going forwards is $p_n$, then the probability of going backwards is $1-p_n$.)
Now instead of playing our card game, we can simply play on this diagram: start on the node for the current difficulty (the game begins at $n=4$). With probability $p_n$, move right; with probability $1-p_n$, move left. We can ask about the game's equilibrium by asking whether there's an equilibrium state in this diagram.
In fact, there must be an equilibrium state. Here's why: the probabilities $p_n$ decrease from $p_1 = 1$ down to $p_{14} = 0$, so there is some level $k$ with $p_k \geq \frac12$ but $p_{k+1} < \frac12$. To the left of $k$, the walk is biased rightward (A wins more than half the time); to the right of $k$, it is biased leftward. From either side, the walk drifts back toward $k$ and then fluctuates around it.
I hope this helps! If you want an even more formal rendition of this intuitive result, you can express our diagram in the language of Markov chains, and this equilibrium solution as the steady state of such a Markov chain.
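To sketch that formal rendition: the chain is a birth-death chain, so its stationary distribution follows from detailed balance, and at stationarity A's long-run win fraction is exactly $1/2$, since every rightward step (an A win) must eventually be matched by a leftward step (a B win). In the sketch below, only $p_1 = 1$, $p_4 = 0.54$, $p_5 = 0.38$, and $p_{14} = 0$ are values from this answer; the rest are made-up placeholders.

```python
def stationary(p):
    """Stationary distribution of the birth-death chain where p[i] is the
    probability of moving right (A winning) at difficulty i + 1.
    Detailed balance: pi[n] * p[n] = pi[n+1] * (1 - p[n+1])."""
    pi = [1.0]
    for i in range(len(p) - 1):
        pi.append(pi[-1] * p[i] / (1 - p[i + 1]))
    total = sum(pi)
    return [x / total for x in pi]

# Hypothetical p_n values (see above for which four are real).
p = [1.0, 0.95, 0.8, 0.54, 0.38, 0.2, 0.1, 0.04,
     0.02, 0.01, 0.004, 0.002, 0.001, 0.0]
pi = stationary(p)
# A's long-run win fraction is sum of pi[n] * p[n]. With p_1 = 1 and
# p_14 = 0, detailed balance forces this sum to be exactly 1/2.
a_rate = sum(q * w for q, w in zip(pi, p))
```

Whatever placeholder values you substitute (keeping $p_1 = 1$ and $p_{14} = 0$), `a_rate` comes out to $1/2$, which is the mathematical statement of the fairness the question asks about.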
Edit: About fast convergence
The series of games happens in roughly two phases. In the first phase, the game converges toward an equilibrium node $k$. In the second phase, the game fluctuates around this equilibrium node.
During the first phase, you are essentially playing a coin-tossing race: each game is a toss that comes up heads (one step toward the equilibrium node) with probability $p_n > \frac12$, and you need roughly $k$ net heads to reach the equilibrium node.
If we were tossing a coin that always came up heads, it would take exactly $k$ tosses to reach the equilibrium node; a fair coin has no drift at all, so it would typically wander for on the order of $k^2$ tosses. With an adaptive coin biased toward heads, as in this game, a rough expectation is somewhere between $k$ and $2k$ tosses.
At the end of the first phase, we know for certain that $\#A\ \text{wins} - \#B\ \text{wins} = k - n_0$, where $n_0$ is the starting threshold. (Think about the net number of rightward transitions required to travel from node $n_0$ to node $k$.) Since the game starts at $n_0 = 4$ and, as computed below, $k$ is around $4$ or $5$, the two win counts can differ by at most a step or two even during this phase.
During the second phase, the game fluctuates around the equilibrium node. The game's self-corrective behavior is key: what emerges from the rules is that if A has lost more games than won, A becomes more likely to win, and vice versa.
A theoretical fair coin toss is memoryless, meaning that the probability of getting heads does not depend on previous coin tosses. In contrast, this game is self-correcting: whenever the game moves leftward, the probability shifts to make it more likely to move rightward, and vice-versa.
The other key property is that, I expect, there is a sharp transition: at one difficulty level $k$, we have that $p_k$ is comfortably above $1/2$, while at the very next difficulty level $k+1$, we have that $p_{k+1}$ is comfortably below $1/2$.
A standard deck has thirteen ranks (A, 2, 3, ..., 10, J, Q, K). If you played with a deck of cards with more than the standard number of ranks (perhaps cards labeled A, B, C, ..., X, Y, Z), then I think this transition would become less sharp.
The sharpness of the transition controls the strength of the self-correcting behavior (when you move too far left/right, how significantly the odds change to favor correcting it.)
Edit: Calculated probabilities
The qualitative analysis above is enough to prove that convergence happens, and happens quickly. But if you want to know quantitatively how likely A is to win at each level of difficulty, here are the results:
Note: I wrote a computer program to compute these, so feel free to double-check my results. But qualitatively, the numbers do seem correct, and they agree with my hand calculations for $n=1$, $n=13$, and $n=14$.
Importantly, these probabilities show that the game is essentially fair right from the start (initially $n=4$): the game starts near equilibrium. And because there is a sharp transition between $n=4$ (where A is more likely to win, $p_4 \approx 0.54$) and $n=5$ (where A is more likely to lose, $p_5 \approx 0.38$), an overall difference of $\approx 0.54 - 0.38 = 0.16$, we expect this equilibrium to be very stable and self-correcting.
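If you'd like to double-check numbers like these without reconstructing the exact computation, here is a Monte Carlo sketch (an estimate, not the program used above; the names are mine):

```python
import random
from collections import Counter

def play_hand(n, rng):
    """Deal one fairly shuffled 52-card deck; return True if A completes
    n triples before B completes a single quad."""
    deck = [rank for rank in range(13) for _ in range(4)]
    rng.shuffle(deck)
    seen, triples = Counter(), 0
    for card in deck:
        seen[card] += 1
        if seen[card] == 4:          # a quad appears first: B wins
            return False
        if seen[card] == 3:          # another completed triple for A
            triples += 1
            if triples == n:
                return True
    return False                     # only reachable when n > 13

def estimate_p(n, trials=20_000, seed=0):
    """Monte Carlo estimate of p_n."""
    rng = random.Random(seed)
    return sum(play_hand(n, rng) for _ in range(trials)) / trials
```

With enough trials this reproduces the exact endpoints $p_1 = 1$ and $p_{14} = 0$, and should land near the reported $p_4 \approx 0.54$ and $p_5 \approx 0.38$.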
Edit: Expected number of hands before reaching a given win threshold for A
Because we can model this game as a Markov chain, we can directly compute the expected number of rounds you'd have to play before reaching a certain difficulty threshold for $A$. (Feel free to use a simulation to check these formal results.)
Following the methods of this webpage, https://www.eecis.udel.edu/~mckennar/blog/markov-chains-and-expected-value, I computed the expected number of hands needed to reach each threshold.
Here are the results: the expected number of hands to get from difficulty 4 to ...
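For a birth-death chain like this one, those expected hitting times reduce to a one-line recursion, which may be easier to check than the full matrix method. A sketch (again, all $p_n$ except $p_1$, $p_4$, $p_5$, and $p_{14}$ are made-up placeholders):

```python
def expected_hands(p, start, target):
    """Expected number of hands for A's threshold to rise from `start`
    to `target`, where p[i] is A's win probability at difficulty i + 1.
    Uses t[n] = (1 + (1 - p[n]) * t[n-1]) / p[n], where t[n] is the
    expected number of hands to step from difficulty n to n + 1."""
    t = [0.0] * len(p)
    t[0] = 1.0 / p[0]              # difficulty 1 cannot go lower
    for i in range(1, len(p)):
        # If p[i] == 0, that difficulty can never be climbed past.
        t[i] = float("inf") if p[i] == 0 else (1 + (1 - p[i]) * t[i - 1]) / p[i]
    return sum(t[start - 1:target - 1])

# Hypothetical p_n values, as in the earlier sketch.
p = [1.0, 0.95, 0.8, 0.54, 0.38, 0.2, 0.1, 0.04,
     0.02, 0.01, 0.004, 0.002, 0.001, 0.0]
hands_4_to_5 = expected_hands(p, 4, 5)
```

The recursion comes from conditioning on one hand: from difficulty $n$, with probability $p_n$ you step up immediately, and with probability $1-p_n$ you drop to $n-1$ and must climb back before trying again.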