I'm working on an urn model in which two urns, say A and B, interact: at each step, a ball is drawn from each urn and replaced, and a ball of the color drawn from the other urn is added. So if my draw vector at time $t$ is $X_t=(X_t^a,X_t^b)=(W,B)$, I'll add a black ball to urn A and a white one to urn B. Now, if $Z$ denotes the proportion of black balls, I end up with the following dynamics:
$$ \begin{cases} \mathbb{E}\left[\Delta Z_a^t|\mathcal{F}_t\right]=\cfrac{1}{S_a^t+1}\left[Z_b^t-\cfrac{1}{S_a^t}\right]\\ \mathbb{E}\left[\Delta Z_b^t|\mathcal{F}_t\right] =\cfrac{1}{S_b^t+1}\left[Z_a^t-\cfrac{1}{S_b^t}\right] \end{cases} $$
where $S$ is the total number of balls in the corresponding urn. Quite naively, I tried to solve the following differential system:
$$ \begin{cases} x'(t)=\frac{1}{t+1}\left[y(t)-\frac{1}{t}\right] \\ y'(t)=\frac{1}{t+1}\left[x(t)-\frac{1}{t}\right] \end{cases} $$
assuming that each urn is randomly initialized with one ball. The problem is: if I solve this system, I end up with functions that are not probabilities at all. I'm not proficient in stochastic calculus, so I don't know whether my method is sound, whether I need to add a constraint to my system, or whether I should solve SDEs rather than ODEs. Do you know how I can get around this?
EDIT: a mistake was hidden in the computation of the dynamics. The correct system is:
$$ \begin{cases} \mathbb{E}\left[\Delta Z_a^t|\mathcal{F}_t\right]=\cfrac{1}{S_a^t+1}\left[Z_b^t-Z_a^t\right]\\ \mathbb{E}\left[\Delta Z_b^t|\mathcal{F}_t\right] =\cfrac{1}{S_b^t+1}\left[Z_a^t-Z_b^t\right] \end{cases} $$
So the associated ordinary differential system is
$$ \begin{cases} x'(t)=\frac{1}{t+1}\left[y(t)-x(t)\right] \\ y'(t)=\frac{1}{t+1}\left[x(t)-y(t)\right] \end{cases} $$
This makes much more sense but seems tougher to solve.
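As a sanity check on this corrected system (my own sketch; the initial values are arbitrary): the sum $x+y$ is conserved, and the difference $d=x-y$ satisfies $d'=-2d/(t+1)$, hence $d(t)=d(0)/(t+1)^2$, so both proportions converge to their common average. A small numerical integration confirms this:

```python
# Sanity check (my own sketch): the corrected ODE system
#   x'(t) = (y - x)/(t + 1),  y'(t) = (x - y)/(t + 1)
# conserves x + y, while d = x - y satisfies d' = -2d/(t+1),
# giving the closed form d(t) = d(0)/(t+1)^2.
# We integrate with a simple RK4 stepper and compare.

def rk4_step(f, t, state, h):
    k1 = f(t, state)
    k2 = f(t + h/2, [s + h/2*k for s, k in zip(state, k1)])
    k3 = f(t + h/2, [s + h/2*k for s, k in zip(state, k2)])
    k4 = f(t + h, [s + h*k for s, k in zip(state, k3)])
    return [s + h/6*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def field(t, state):
    x, y = state
    return [(y - x)/(t + 1), (x - y)/(t + 1)]

x0, y0 = 1.0, 0.0          # e.g. urn A all black, urn B all white
state, t, h = [x0, y0], 0.0, 1e-3
while t < 10.0 - 1e-12:
    state = rk4_step(field, t, state, h)
    t += h

mean = (x0 + y0)/2
x_exact = mean + (x0 - y0)/(2*(t + 1)**2)
print(state[0], x_exact)   # both ≈ 0.5041
```

In particular, since $x(t)$ moves monotonically from $x(0)$ toward the average, the solutions stay inside $[0,1]$ whenever the initial proportions do.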
Premise
I am tackling the problem in two phases, using the results of the (simpler) first phase as a key to the second:
- 1st: assuming that the ball extracted from one urn is placed in the other, so that the total content of each urn stays constant;
- 2nd: assuming (as requested) that each extracted ball is replaced in its own urn, and a ball of the same color is added to the other.
Phase 1
The proposed extraction-replacement algorithm can be seen as simulating a discrete diffusion between the contents of the two urns.
Assume that the first urn contains $n$ white balls and a total of $N$ balls, while the second contains $m$ white balls and a total of $M$.
Indexing with $t$ the number of extractions, we have $$ \bbox[lightyellow] { \left\{ \matrix{ n(t + 1) = n(t) + \Delta n(t) \hfill \cr m(t + 1) = m(t) + \Delta m(t) = m(t) - \Delta n(t) \hfill \cr} \right. } \tag{1} $$ with $N$ and $M$ remaining constant, as well as the sum $n(t)+m(t)=n(0)+m(0)=s$, since the number of white (and black) balls remains constant over the two urns.
Then the mechanism of extraction, swap and replace translates into the fact that the $\Delta n$ can assume only the values $(-1,0,1)$, with probability $$ \bbox[lightyellow] { \Delta n = \left\{ {\matrix{ 1 & {\left( {1 - {n \over N}} \right){m \over M} = } & {{{\left( {N - n} \right)\left( {s - n} \right)} \over {N\,M}}} \cr 0 & {{n \over N}{m \over M} + \left( {1 - {n \over N}} \right)\left( {1 - {m \over M}} \right) = } & {{{M\left( {N - n} \right) - N\left( {s - n} \right) + 2n\left( {s - n} \right)} \over {N\,M}}} \cr { - 1} & {\left( {1 - {m \over M}} \right){n \over N} = } & {{{n\left( {M + n - s} \right)} \over {N\,M}}} \cr } } \right. } \tag{2} $$ where we omitted the index $t$, and where the probability is to be understood as conditional, given $n$ (and $m$ for the first expression).
For $m$ we have an analogous formula, obtained just by exchanging the letters.
The sum of the above probabilities correctly returns $1$, and the expected value of $\Delta n$ turns out to be $$ \bbox[lightyellow] { E\left( {\Delta n\;\left| n \right.} \right) = {s \over M} - \left( {{{N + M} \over {N\,M}}} \right)n } $$
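These identities can be verified mechanically. A small sketch in exact arithmetic (the values of $N$, $M$, $n$, $m$ are arbitrary examples of my choosing) that recomputes the three probabilities of (2) by enumerating the draw outcomes, checks that they sum to $1$, and confirms the conditional expectation:

```python
# Verification sketch for id. (2) and E(Δn | n), in exact arithmetic.
# N, M are the (constant) urn sizes, n white balls in urn 1, m = s - n in urn 2.
from fractions import Fraction as F

N, M, n, m = 5, 4, 2, 3
s = n + m

# Direct enumeration of the draw-and-swap step:
p_up   = (1 - F(n, N)) * F(m, M)        # urn 1 gives black, receives white
p_down = F(n, N) * (1 - F(m, M))        # urn 1 gives white, receives black
p_stay = 1 - p_up - p_down

# Closed forms from id. (2):
assert p_up   == F((N - n) * (s - n), N * M)
assert p_down == F(n * (M + n - s), N * M)
assert p_stay == F(M*(N - n) - N*(s - n) + 2*n*(s - n), N * M)
assert p_up + p_stay + p_down == 1

# Expected value of Δn given n:
E = p_up - p_down
assert E == F(s, M) - F(N + M, N * M) * n
print(E)   # 7/20 for these values
```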
Inserting that into the first expression in (1) and averaging over $n$ we get $$ \bbox[lightyellow] { \overline n (t + 1) = \left( {1 - {{N + M} \over {N\,M}}} \right)\overline n (t) + {s \over M} } $$ whose solution is $$ \bbox[lightyellow] { \overline n (t) - {{N\,s} \over {N + M}} = \left( {n(0) - {{N\,s} \over {N + M}}} \right)\left( {1 - {{N + M} \over {N\,M}}} \right)^{\,t} } \tag{3} $$ which tells us that the compositions of the urns converge exponentially, with the same decay ratio, to the equilibrium value, which is as expected: $$ \bbox[lightyellow] { {{\overline n (\infty )} \over N} = {{\overline m (\infty )} \over M} = {{n(0) + m(0)} \over {N + M}} } \tag{4} $$
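A quick numerical check of the recursion against its closed-form solution (3), again with arbitrary example sizes:

```python
# Iterate the mean recursion  n̄(t+1) = (1 - (N+M)/(NM)) n̄(t) + s/M
# and compare with the closed form (3).
N, M = 5, 7
n0, m0 = 2, 3
s = n0 + m0
r = 1 - (N + M) / (N * M)     # contraction ratio
eq = N * s / (N + M)          # equilibrium value  N s / (N + M)

nbar = float(n0)
for t in range(1, 26):
    nbar = r * nbar + s / M
    closed = eq + (n0 - eq) * r**t
    assert abs(nbar - closed) < 1e-12

print(nbar, eq)   # the mean approaches N s / (N+M) = 25/12
```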
That settles the average values; we now look to determine the probability distribution of $n(t)$.
Returning to id. (2), it tells us that we can organize a Markov chain in which the probabilities for $n(t)$ to attain the values $\{0,1,\cdots,N\}$ are collected in the vector ${\bf p}_{\,{\bf n}} (t)$, and where the $(N+1) \times (N+1)$, tri-diagonal, Transition Matrix ${\bf T^T}$ has rows given by the probability values in (2), i.e. $$ \bbox[lightyellow] { \eqalign{ & T^{\,T} _{\,n,\,n + \Delta n} = p(\Delta n)\quad \cr & {\bf p}_{\,{\bf n}} (t + 1) = {\bf T}\;{\bf p}_{\,{\bf n}} (t) \cr} } \tag{5} $$ (we use the column-vector version of the Markov chain construction).
The matrix $\bf T$ can be diagonalized, and the resulting diagonal matrix does not depend on $s$.
It is therefore easy to obtain the probability distribution of $n$ at every step $t$.
We shall omit here the details of the similarity matrices, and just report that the resulting equilibrium distribution, which is independent of the initial conditions, is given by $$ \bbox[lightyellow] { \left. {p(n)\;} \right|_{\,t = \infty } = \left( \matrix{ s \cr n \cr} \right)\left( \matrix{ M + N - s \cr N - n \cr} \right)\,/\,\left( \matrix{ M + N \cr N \cr} \right) } \tag{6} $$
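Identity (6) is the hypergeometric distribution, i.e. what one gets by placing the $s$ white balls uniformly at random among the $M+N$ slots of the two urns. A sketch that builds the tri-diagonal chain of (2), (5) and iterates it from a point mass (small example sizes, my choice) confirms the limit:

```python
# Iterate the Markov chain of id. (2), (5) and compare the limit with the
# hypergeometric distribution of id. (6).  Example sizes: N = 4, M = 5, s = 3.
from math import comb

N, M, s = 4, 5, 3
states = range(0, min(N, s) + 1)   # reachable values of n when s <= min(N, M)

def step(p):
    q = [0.0] * len(p)
    for n in states:
        up   = (N - n) * (s - n) / (N * M)
        down = n * (M + n - s) / (N * M)
        stay = 1 - up - down
        q[n] += stay * p[n]
        if n + 1 in states: q[n + 1] += up * p[n]
        if n - 1 in states: q[n - 1] += down * p[n]
    return q

p = [0.0] * len(states)
p[1] = 1.0                          # start with n(0) = 1 white ball in urn 1
for _ in range(2000):
    p = step(p)

hyper = [comb(s, n) * comb(M + N - s, N - n) / comb(M + N, N) for n in states]
print(max(abs(a - b) for a, b in zip(p, hyper)))   # ~0: the chain reaches (6)
```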
Phase 2
With the same notation as above now we have $$ \bbox[lightyellow] { \left\{ \matrix{ N(t + 1) = N(t) + 1 = N(0) + t \hfill \cr M(t + 1) = M(t) + 1 = M(0) + t \hfill \cr n(t + 1) = n(t) + \Delta n(t) \hfill \cr m(t + 1) = m(t) + \Delta m(t) \hfill \cr} \right. } $$ which, for the average values, gives $$ \bbox[lightyellow] { \left\{ \matrix{ \overline n (t + 1) = \overline n (t) + {{\overline m (t)} \over {M(0) + t}} \hfill \cr \overline m (t + 1) = \overline m (t) + {{\overline n (t)} \over {N(0) + t}} \hfill \cr} \right.\quad \Rightarrow \quad \left( {\matrix{ {\overline n (t + 1)} \cr {\overline m (t + 1)} \cr } } \right) = \left( {\matrix{ 1 & {1/\left( {M(0) + t} \right)} \cr {1/\left( {N(0) + t} \right)} & 1 \cr } } \right)\left( {\matrix{ {\overline n (t)} \cr {\overline m (t)} \cr } } \right) } \tag{7} $$
Let's change variables, taking the sum and the difference of the ratios $$ \bbox[lightyellow] { \left( {\matrix{ {x(t)} \cr {y(t)} \cr } } \right) = \left( {\matrix{ {1/\left( {N(0) + t} \right)} & {1/\left( {M(0) + t} \right)} \cr {1/\left( {N(0) + t} \right)} & { - \,1/\left( {M(0) + t} \right)} \cr } } \right)\left( {\matrix{ {\overline n (t)} \cr {\overline m (t)} \cr } } \right) } \tag{8} $$ and we arrive at $$ \bbox[lightyellow] { \eqalign{ & \left( {\matrix{ {x(t + 1)} \cr {y(t + 1)} \cr } } \right) = \left( {\matrix{ 1 & {{{N(0) - M(0)} \over {\left( {N(0) + t + 1} \right)\left( {M(0) + t + 1} \right)}}} \cr 0 & {1 - {1 \over {\left( {N(0) + t + 1} \right)}} - {1 \over {\left( {M(0) + t + 1} \right)}}} \cr } } \right)\left( {\matrix{ {x(t)} \cr {y(t)} \cr } } \right) = \cr & = \left( {\matrix{ 1 & {{{2Q} \over {\left( {P + Q + t + 1} \right)\left( {P - Q + t + 1} \right)}}} \cr 0 & {{{\left( {P + t + \sqrt {Q^{\,2} + 1} } \right)\left( {P + t - \sqrt {Q^{\,2} + 1} } \right)} \over {\left( {P + Q + t + 1} \right)\left( {P - Q + t + 1} \right)}}} \cr } } \right)\left( {\matrix{ {x(t)} \cr {y(t)} \cr } } \right) \cr} } \tag{9} $$ where $P=(N(0)+M(0))/2$ and $Q=(N(0)-M(0))/2$.
This matrix is Upper Triangular and the recurrence can be solved in hypergeometric terms, giving $$ \bbox[lightyellow] { \eqalign{ & y(t) = C\;{{\Gamma \left( {P + \sqrt {Q^{\,2} + 1} + t} \right)\;\Gamma \left( {P - \sqrt {Q^{\,2} + 1} + t} \right)} \over {\Gamma \left( {P + \,Q + t + 1} \right)\;\Gamma \left( {P - \,Q + t + 1} \right)}} = \cr & = C\;\left( {P + \,Q + t + 1} \right)^{\,\overline {\, - \;\left( {1 - \left( {\sqrt {Q^{\,2} + 1} - Q} \right)} \right)\,} } \;\left( {P - \,Q + t + 1} \right)^{\,\overline {\, - \;\left( {1 + \left( {\sqrt {Q^{\,2} + 1} - Q} \right)} \right)\,} } \cr & C = {{\Gamma \left( {P + \,Q + 1} \right)\;\Gamma \left( {P - \,Q + 1} \right)} \over {\Gamma \left( {P + \sqrt {Q^{\,2} + 1} } \right)\; \Gamma \left( {P - \sqrt {Q^{\,2} + 1} } \right)}}y(0) \cr & x(t) = \left\{ {\matrix{ {x(0)} & {Q = 0} \cr {x(0) - {{P^{\,2} - P - Q^{\,2} } \over Q}y(0) + {{\left( {P + t} \right)^{\,2} - \left( {P + t} \right) - Q^{\,2} } \over Q}y(t)} & {Q \ne 0} \cr } } \right. \cr} } \tag{10.a} $$
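As a numerical cross-check of (10.a) (my own sketch; $N(0)=3$, $M(0)=2$ and the initial means are arbitrary example values): iterate (7) directly, transform via (8), and compare $y(t)$ with the Gamma-ratio expression.

```python
# Check that the Gamma-ratio expression in (10.a) solves the mean
# recursion (7) after the change of variables (8).
from math import lgamma, exp, sqrt

N0, M0 = 3, 2                      # example initial urn sizes
nbar, mbar = 2.0, 1.0              # example initial mean white counts
P, Q = (N0 + M0) / 2, (N0 - M0) / 2
R = sqrt(Q * Q + 1)
y0 = nbar / N0 - mbar / M0

logC = (lgamma(P + Q + 1) + lgamma(P - Q + 1)
        - lgamma(P + R) - lgamma(P - R))

def y_closed(t):
    return y0 * exp(logC + lgamma(P + R + t) + lgamma(P - R + t)
                    - lgamma(P + Q + t + 1) - lgamma(P - Q + t + 1))

for t in range(30):                # simultaneous update, per (7)
    nbar, mbar = nbar + mbar / (M0 + t), mbar + nbar / (N0 + t)

y_30 = nbar / (N0 + 30) - mbar / (M0 + 30)
print(y_30, y_closed(30))          # the two values agree
```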
From the expression for $y(t)$ in terms of Rising Factorials, applying Stirling's approximation we obtain $$ \bbox[lightyellow] { \left\{ \matrix{ y(t) \propto {C \over {\left( {P + t + 1} \right)^{\,2} - \,Q^{\,2} }}\; \to \;0\quad \left| {\,t \to \infty } \right. \hfill \cr x(t)\; \to \;x(0) + \left[ {0 \ne Q} \right]\left( { - {{P^{\,2} - P - Q^{\,2} } \over Q}y(0) + {C \over Q}} \right)\quad \left| {\,t \to \infty } \right. \hfill \cr} \right. } \tag{10.b} $$
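The limit of $x(t)$ in (10.b) can likewise be checked numerically against the recursion (7) (same caveat: the sizes and initial means below are arbitrary example values of mine, with $Q \ne 0$):

```python
# Check the limits (10.b): iterate the mean recursion (7), form x(t), y(t)
# via (8), and compare x(t) for large t with the predicted constant.
from math import lgamma, exp, sqrt

N0, M0 = 3, 2                      # example initial urn sizes (Q != 0)
nbar, mbar = 2.0, 1.0              # example initial mean white counts
P, Q = (N0 + M0) / 2, (N0 - M0) / 2
R = sqrt(Q * Q + 1)

x0 = nbar / N0 + mbar / M0
y0 = nbar / N0 - mbar / M0
C = y0 * exp(lgamma(P + Q + 1) + lgamma(P - Q + 1)
             - lgamma(P + R) - lgamma(P - R))

T = 100_000
for t in range(T):                 # simultaneous update, per (7)
    nbar, mbar = nbar + mbar / (M0 + t), mbar + nbar / (N0 + t)

x_T = nbar / (N0 + T) + mbar / (M0 + T)
y_T = nbar / (N0 + T) - mbar / (M0 + T)
x_lim = x0 - (P * P - P - Q * Q) / Q * y0 + C / Q

print(x_T, x_lim)                  # the two values agree closely; y_T ~ 0
```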
The formulas above check against direct computation, as per the attached example.
It remains now to tackle the distribution ...