Particle swarm optimization


I don't know where to start; I don't know how to plug the given information into the algorithm. The task: show two iterations of the particle swarm optimization (neighborhood approach) method.

Mathematically show two iterations of a PSO method. For the problem the optimum is at $(0,0)$, the objective function is the distance from the optimum, and the three particles are initialised at $(4,3)$, $(-5,6)$, and $(-3, -7)$. (It is understood that everyone will have different random numbers; this is OK.)


Alright, so the idea behind particle swarm optimization is this:

We have a problem, but it's hard to solve, so let's take a number of guesses at a solution and call them "particles". Let's call our particles $v_{i}$, where $i$ indexes the particles. Luckily, we have a function that tells us how good our current guesses are (in this case the distance function), so let's try to improve our guesses.

We know which guess is currently best, and we want our particles to explore that area, so we should move our particles closer to the best known solution. Let's call this variable $g$.

However, we also want to explore other areas, so we will also move each particle based on its own "memory", which is its best known position. Let's call this variable $p_{i}$, where $i$ refers to the $i$th particle's best known position.

Usually we also want a random component that lets our particles explore completely new areas; this is usually called the velocity. Let's call this variable $v$.

In addition, we will randomize how much weight we give each of these components, so individual particles behave independently (and thus explore the solution space better). Let's call these vectors $r_{1}, r_{2}$ (note these lie in $(0,1)^{d}$, where $d$ is the dimension of the space we are working in).

So when we improve our guesses (running an iteration of PSO), we replace each particle with

$v_{i} = r_{1}g + r_{2}p_{i} + v,$

where the products $r_{1}g$ and $r_{2}p_{i}$ are taken component-wise.
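If a concrete version helps, here is a minimal Python sketch of that update rule (the function name `update_particle` and the variable names are my own choices, not from any library); the products are component-wise:

```python
import random

def update_particle(p_i, g, v=(0.0, 0.0)):
    """One application of the rule above: v_i = r1*g + r2*p_i + v, component-wise."""
    r1 = [random.random() for _ in g]   # random weights in (0,1), one per coordinate
    r2 = [random.random() for _ in g]
    return tuple(r1[d] * g[d] + r2[d] * p_i[d] + v[d] for d in range(len(g)))
```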

So let's try your example. Let's give names to our particles: $v_{1} = (4,3)$, $v_{2} = (-5,6)$, $v_{3} = (-3,-7)$.

Of your initialized particles, $(4,3)$ is the closest to $(0,0)$ (distance $5$, versus $\sqrt{61}$ and $\sqrt{58}$ for the other two), so it is our global best solution: $g = (4,3)$.

Each particle has only seen one position, so currently $p_{i} = v_{i}$.

Now, in practice you would draw random values for the weighting; for my purposes I will set all the random components to $0.5$ (in every coordinate) and take $v = 0$.

For notation I will write $v_{i,new}$ for the updated particle and $v_{i,old}$ for the non-updated particle.

We have

$v_{1,new} = [0.5,0.5]^{t}(4,3) + [0.5,0.5]^{t}(4,3) + 0 = (4,3)$

$v_{2,new} = [0.5,0.5]^{t}(4,3) + [0.5,0.5]^{t}(-5,6) + 0 = (-1/2,\, 9/2)$

$v_{3,new} = [0.5,0.5]^{t}(4,3) + [0.5,0.5]^{t}(-3,-7) + 0 = (1/2,\, -2)$

Now we check: did any of our particles improve? Because of how I picked the random values, $v_{1}$ didn't change.

However, both of the other particles got closer to $(0,0)$ (you can verify this with the distance function). Therefore we update $p_{2} = (-1/2, 9/2)$ and $p_{3} = (1/2, -2)$.
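To verify those claims numerically, a quick sketch using the distance function:

```python
import math

def dist(x):                                 # distance from the optimum (0, 0)
    return math.hypot(x[0], x[1])

print(dist((4, 3)))                          # 5.0, v1 unchanged
print(dist((-5, 6)), dist((-0.5, 4.5)))      # ~7.81 -> ~4.53, v2 improved
print(dist((-3, -7)), dist((0.5, -2)))       # ~7.62 -> ~2.06, v3 improved
```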

Also note that both $v_{2,new}$ and $v_{3,new}$ are closer to $(0,0)$ than our previous best guess $(4,3)$, and $v_{3,new}$ is the closest of all ($\sqrt{4.25} \approx 2.06$), so we update $g = v_{3,new} = (1/2, -2)$.

Now we just keep repeating this procedure (the question asks for one more iteration, done exactly the same way), and that is particle swarm optimization. Hope that helps.
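Putting the whole procedure together, here is a small self-contained Python sketch that runs the two iterations the question asks for (it draws fresh random weights each step, so your numbers will differ, which the problem statement says is fine):

```python
import math
import random

def dist(x):
    """Objective: distance from the optimum (0, 0)."""
    return math.hypot(x[0], x[1])

# Initial particles from the problem statement.
particles = [(4.0, 3.0), (-5.0, 6.0), (-3.0, -7.0)]
p_best = list(particles)                # each particle's best known position
g_best = min(particles, key=dist)       # global best known position
v = (0.0, 0.0)                          # random-exploration term, taken as 0 here

for it in range(2):                     # two iterations of PSO
    g = g_best                          # same global best for the whole sweep
    for i in range(len(particles)):
        r1 = [random.random() for _ in range(2)]   # component-wise weights in (0,1)
        r2 = [random.random() for _ in range(2)]
        # Update rule from above: v_i = r1*g + r2*p_i + v (component-wise).
        new = tuple(r1[d] * g[d] + r2[d] * p_best[i][d] + v[d] for d in range(2))
        particles[i] = new
        if dist(new) < dist(p_best[i]):  # improved this particle's memory?
            p_best[i] = new
        if dist(new) < dist(g_best):     # beat the global best?
            g_best = new
    print(f"iteration {it + 1}: particles = {particles}, g = {g_best}")
```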