The particle swarm optimization (PSO) algorithm consists of a set of $I$ particles, each with a velocity $v_i$ and a position $x_i$. The algorithm keeps track of the best position encountered by each particle $(lb_i)$ separately and, in the simplest variant, the best position ever encountered $(gb)$. At the start, both the positions and velocities are uniformly distributed random variables. Each iteration of the algorithm consists of two parts: the velocity is updated as $$v_i(t+1) = v_i(t) + 2\,\text{rand}()(lb_i-x_i(t))+2\,\text{Rand}()(gb-x_i(t))$$ (the $\text{rand}()$s are random vectors of length two, with both elements drawn from $U(0,1)$, and the multiplication is point-wise), and then the position is updated as $$x_i(t+1) = x_i(t)+v_i(t+1).$$ One also updates the best positions found.
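The update step above can be sketched as follows (a minimal NumPy sketch, assuming the velocity rule exactly as written; the function name `pso_step` is my own):

```python
import numpy as np

def pso_step(x, v, lb, gb, rng):
    """One iteration of the simple PSO update described above.

    x, v, lb: arrays of shape (I, 2) -- positions, velocities,
              and per-particle best positions.
    gb:       array of shape (2,)   -- global best position.
    """
    I = x.shape[0]
    r1 = rng.uniform(0.0, 1.0, size=(I, 2))  # rand()
    r2 = rng.uniform(0.0, 1.0, size=(I, 2))  # Rand()
    # Point-wise products, as in the formula above.
    v_new = v + 2.0 * r1 * (lb - x) + 2.0 * r2 * (gb - x)
    x_new = x + v_new
    return x_new, v_new
```

Updating the per-particle and global bests after each step (by comparing $f$ at the new positions) is left outside the function, since it depends on the test function.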
In my naïveté, I decided to attempt a fairly extensive probabilistic examination of the algorithm's evolution on a simple test case: just two particles and the test function $f(x,y) = x^2+y^2$, with the starting positions and velocities drawn from $[-1,1]^2$.
This turned out to be rather difficult.
At the start, I want to know the mean of the distribution of $\min(f(x_1(0)),f(x_2(0))),$ where the $x_i(0)$ are random vectors of length two, both elements from $U(-1,1)$. This gives the mean value produced by randomly selecting two points; one hopes that the algorithm's iterative process then yields a smaller mean value. The sum-of-squares function $f$ has CDF $$F(x)=\begin{cases} 0 & x\leq 0 \\ \frac{\pi x}{4} & 0<x\leq 1 \\ \frac{1}{2} \left(2 \sqrt{x-1}-x \tan ^{-1}\left(\sqrt{x-1}\right)+x \csc ^{-1}\left(\sqrt{x}\right)\right) & 1<x<2 \\ 1 & x\geq 2 \end{cases}$$ We can now calculate the mean of $\min(f(x_1(0)),f(x_2(0)))$. Since the two values are i.i.d., the minimum has CDF $1-(1-F(x))^2$, and the mean works out to $-\frac{2}{45} (2 \log (2)-11) \approx 0.427276$. Computer simulations readily confirm this value.
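A quick Monte Carlo check of this mean (a sketch assuming NumPy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two independent starting points per trial, each uniform on [-1, 1]^2.
p1 = rng.uniform(-1.0, 1.0, size=(n, 2))
p2 = rng.uniform(-1.0, 1.0, size=(n, 2))

f1 = (p1 ** 2).sum(axis=1)  # f(x, y) = x^2 + y^2
f2 = (p2 ** 2).sum(axis=1)

est = np.minimum(f1, f2).mean()
exact = -2.0 / 45.0 * (2.0 * np.log(2.0) - 11.0)  # ≈ 0.427276
print(est, exact)
```

The empirical mean lands within a few parts in a thousand of the closed-form value.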
But beyond this point I cannot really continue. For the first iteration, I would need to calculate the mean of (zero terms omitted) $$\min\bigl(f(x_1),f(x_2),f(x_1+v_1+2\,\text{rand}()(gb-x_1)),f(x_2+v_2+2\,\text{Rand}()(gb-x_2))\bigr).$$ How could this be done? The variables are no longer independent. Note that $gb$ is the $x_i$ minimizing $f(x_i)$, so for that particle the update simplifies to $x_i+v_i$; moreover $lb_i$ equals $x_i$ at this stage, so the cognitive term vanishes and the sum is simpler than in the general case.
A computer simulation of this gives an approximate mean of $0.31$.
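For reference, the simulation can be done like this (a sketch assuming NumPy; `first_iteration_best` is my own name, and the code exploits the simplifications noted above: $lb_i = x_i$ makes the cognitive term vanish, and for the $gb$ particle the social term vanishes too):

```python
import numpy as np

def first_iteration_best(n, rng):
    """Monte Carlo estimate of E[min f] after one PSO iteration with
    two particles, f(x, y) = x^2 + y^2, everything started in [-1, 1]^2."""
    x = rng.uniform(-1.0, 1.0, size=(n, 2, 2))  # trials x particles x dims
    v = rng.uniform(-1.0, 1.0, size=(n, 2, 2))
    f0 = (x ** 2).sum(axis=2)                   # shape (n, 2)
    # Global best position in each trial (the lb term is zero since lb = x).
    gb = x[np.arange(n), f0.argmin(axis=1)]     # shape (n, 2)
    r = rng.uniform(0.0, 1.0, size=(n, 2, 2))   # one rand vector per particle
    v1 = v + 2.0 * r * (gb[:, None, :] - x)     # for the gb particle, gb - x = 0
    x1 = x + v1
    f1 = (x1 ** 2).sum(axis=2)
    # Best value seen so far: over both particles, before and after the step.
    return np.minimum(f0.min(axis=1), f1.min(axis=1)).mean()

rng = np.random.default_rng(123)
print(first_iteration_best(500_000, rng))  # ~ 0.31
```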