An Overview of Particle Swarm Optimization

This post starts a series on the Particle Swarm Optimization algorithm.

The following topics will be covered:

  1. An overview of the Particle Swarm Optimization algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

Particle Swarm Optimization (PSO)

This algorithm is often used to optimize functions over unfriendly search spaces that are non-convex or even discontinuous. The idea behind the algorithm is a swarm of particles flying through the search space, moving both collaboratively and independently.

Instead of talking about particles, it is helpful to imagine the swarm as a flock of birds flying through a mountain range. Their goal is to find the best nesting site within this ‘search space’. The ideal nesting site has few predators, plenty of food sources, and many resources for building sturdy nests. Instead of the continuous motion we usually see in flying birds, each bird decides where to head and how fast only after each ‘turn’. Each bird makes its decision based on the birds around it, and then they all move at the same time. This is repeated until some stopping criterion is satisfied (or the best nesting location has been found).
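
In code, each ‘turn’ boils down to a single velocity-and-position update per particle. The sketch below is a minimal one-dimensional version using the common inertia-weight form of the update; the coefficient values w, c1, and c2 and the helper name step are illustrative assumptions rather than anything prescribed here (the exact equations are covered in the next part of the series).

    import random

    def step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
        # One 'turn': every particle picks a new velocity, then all move at once.
        for i in range(len(positions)):
            r1, r2 = random.random(), random.random()
            velocities[i] = (w * velocities[i]
                             + c1 * r1 * (personal_best[i] - positions[i])  # pull toward this particle's own best spot
                             + c2 * r2 * (global_best - positions[i]))      # pull toward the swarm's best spot
            positions[i] += velocities[i]

Here positions, velocities, and personal_best are plain Python lists of floats and global_best is a single float; the random draws r1 and r2 are what keep the flock exploring instead of collapsing straight onto one point.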

The following set of illustrations shows how a swarm could find the minimum of a parabola.

The function whose minimum we are trying to find. In this case, f(1.0) = 0.

We randomly place 5 particles within the search region. The best performing particle so far is seen in green at about 1.25.

All of the particles look at their own positions and those of their neighbors and update their positions and velocities. Often they end up moving towards the best performing particle from the previous step (now seen in blue). The new best performing particle (in green) is close to 0.85.

With the next update, the particles start converging to the same positions and overlap slightly. The new best particle is close to 1.0.

Almost all of the particles converge to the correct answer in this step. However, further iterations may be necessary to determine whether the correct minimum has been reached.
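
For readers who want to reproduce the walkthrough above, here is a minimal, self-contained sketch that runs the same idea on f(x) = (x - 1)^2, whose minimum sits at x = 1. The swarm size, coefficient values, and iteration count below are illustrative guesses, not the settings used to generate the illustrations.

    import random

    def f(x):
        # The parabola from the illustrations: minimum value 0 at x = 1.
        return (x - 1.0) ** 2

    def pso(n_particles=5, iterations=30, w=0.7, c1=1.5, c2=1.5):
        # Randomly place the particles within the search region.
        positions = [random.uniform(-2.0, 4.0) for _ in range(n_particles)]
        velocities = [0.0] * n_particles
        personal_best = positions[:]             # best position each particle has visited
        global_best = min(positions, key=f)      # best position the whole swarm has visited

        for _ in range(iterations):
            for i in range(n_particles):
                r1, r2 = random.random(), random.random()
                velocities[i] = (w * velocities[i]
                                 + c1 * r1 * (personal_best[i] - positions[i])
                                 + c2 * r2 * (global_best - positions[i]))
                positions[i] += velocities[i]
                if f(positions[i]) < f(personal_best[i]):
                    personal_best[i] = positions[i]
            global_best = min(personal_best, key=f)

        return global_best

    print(pso())  # should print a value close to 1.0

Running it a few times gives slightly different answers because the start positions and the r1, r2 draws are random, which mirrors the caveat above that further iterations may be needed.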

How is this useful?

We can extend this example to high-dimensional spaces, such as a 100-dimensional paraboloid, or to the weight space of a neural network, where each particle represents a candidate set of weights looking for the best way to fit a set of data. Other examples include Support Vector Machines or even the optimal choice of crops for a growing season. The applications are nearly endless.
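
As a rough illustration of how little the algorithm itself changes in higher dimensions, the sketch below applies the same update to a 100-dimensional paraboloid; positions and velocities simply become vectors. The use of NumPy, the objective function, and all of the parameter values are assumptions made for the sake of the example.

    import numpy as np

    def paraboloid(x):
        # A 100-dimensional paraboloid with its minimum (value 0) at the origin.
        return np.sum(x ** 2)

    def pso_nd(objective, dim=100, n_particles=30, iterations=200,
               w=0.7, c1=1.5, c2=1.5):
        pos = np.random.uniform(-5.0, 5.0, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()                                 # each particle's best position so far
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()         # the swarm's best position so far

        for _ in range(iterations):
            r1 = np.random.rand(n_particles, dim)
            r2 = np.random.rand(n_particles, dim)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved] = pos[improved]
            pbest_val[improved] = vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        return gbest, objective(gbest)

    best, best_val = pso_nd(paraboloid)
    print(best_val)  # should be far smaller than the values at the random start

Conceptually, swapping paraboloid for a function that evaluates a neural network's loss over its weight vector is all it takes to reuse the same loop, although in practice details like search bounds, velocity limits, and swarm size matter a great deal.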

In the next section, we will go over how the algorithm actually works and an example involving the optimization of a function.

This post is part of a series:

  1. An overview of the Particle Swarm Optimization algorithm
  2. In-depth details of the algorithm
  3. More applications of Particle Swarm Optimization

3 thoughts on “An Overview of Particle Swarm Optimization”

  1. I easily understood your concepts. Can you please send simple C or C++ code for Particle Swarm Optimization? It would help us a lot.

    1. Hi anbu,

      At the moment, I do not have any C or C++ code for Particle Swarm Optimization, but I am in the process of writing a post about some PSO code I wrote in Python. You can access the code right now at http://a-ma.us/CodeLibrary/bare-bones-pso.txt. Due to server constraints, I had to rename the file from .py to .txt, so please rename it back to a .py extension after downloading. I will also be posting the next blog article soon, which discusses the code more thoroughly.

      Thank you for your comment and I hope this helps.
      Adam

      P.S. If you do download the code and run it, feel free to send me any bugs you run into.

  2. Nice and easy explanation of PSO. That is what I needed 🙂
    It would be complete if you included theory + code.
    Thanks a lot.
