AI PSO Travelling Salesman

Overview

PyPi module N/A
git repository https://bitbucket.org/arrizza-public/ai-pso-salesman
git command git clone git@bitbucket.org:arrizza-public/ai-pso-salesman.git
Verification Report https://arrizza.com/web-ver/ai-pso-salesman-report.html
Version Info
  • Ubuntu 20.04 focal, Python 3.10
  • Ubuntu 22.04 jammy, Python 3.10

Summary

This is an implementation of a PSO (Particle Swarm Optimizer) based solution to the TSP (Travelling Salesman Problem).

The original code was found here: https://asl.epfl.ch/education/courses/BioinspiredAdaptiveMachines/perezuribe/index.php. It was written in C; I converted it to C++.

How to use

  • run
# for exhaustive:
./doit
./doit -x 5
./debug/ai-pso-salesman -x 5
make ai-pso-salesman-run s="-x 5"

./doit -b     # for burma
./doit -u16   # for ulysses16
./doit -u22   # for ulysses22

# TODO fix release mode
#./doit release all -b     # for burma
#./doit release all -u16   # for ulysses16
#./doit release all -u22   # for ulysses22

Exhaustive method

I added a section to the code that does an exhaustive search for the solution. This is brute force: it goes through all possible paths, keeping track of the shortest route.

For example, the best solution for the 14-city problem is 3323. This matches the official value given here: https://www.informatik.uni-heidelberg.de/groups/comopt/software/TSPLIB95/STSP.html
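The exhaustive search can be sketched as follows. This is a minimal, hypothetical version (not the project's actual code), assuming an integer distance matrix; it fixes city 0 as the start and tries every permutation of the remaining cities with std::next_permutation:

```cpp
#include <algorithm>
#include <limits>
#include <numeric>
#include <vector>

// Hypothetical sketch of an exhaustive TSP search: fix city 0 as the
// start/end, try every permutation of the remaining cities, and keep
// the cheapest closed tour.
using Matrix = std::vector<std::vector<int>>;

int exhaustive_best(const Matrix& dist) {
    const int n = static_cast<int>(dist.size());
    std::vector<int> perm(n - 1);
    std::iota(perm.begin(), perm.end(), 1);  // cities 1..n-1
    int best = std::numeric_limits<int>::max();
    do {
        // cost of the tour 0 -> perm[0] -> ... -> perm[n-2] -> 0
        int cost = dist[0][perm.front()] + dist[perm.back()][0];
        for (int i = 0; i + 1 < n - 1; ++i)
            cost += dist[perm[i]][perm[i + 1]];
        best = std::min(best, cost);
    } while (std::next_permutation(perm.begin(), perm.end()));
    return best;
}
```

Fixing the start city leaves (n-1)! permutations to check, which is why each additional city multiplies the runtime by roughly the number of cities.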

Note:

  • The dim=13 and dim=14 times are from a release build; the rest are debug builds.
  • The time for dim=14 is just over an hour. On a previous PC it was 5.5 hours; your time may vary substantially as well.
time    dim  best  route
63.8m   14   3323  0-> 1-> 13-> 2-> 3-> 4-> 5-> 11-> 6-> 12-> 7-> 10-> 8-> 9-> 0
4.5m    13   3158  0-> 1-> 2-> 3-> 4-> 5-> 11-> 6-> 12-> 7-> 10-> 8-> 9-> 0
19.3s   12   3150  0-> 1-> 2-> 3-> 4-> 5-> 11-> 6-> 7-> 10-> 8-> 9-> 0
1.5s    11   3136  0-> 1-> 2-> 3-> 4-> 5-> 6-> 7-> 10-> 8-> 9-> 0
128ms   10   3114  0-> 1-> 2-> 3-> 4-> 5-> 6-> 7-> 8-> 9-> 0
12ms     9   2626  0-> 1-> 2-> 3-> 4-> 5-> 6-> 7-> 8-> 0
1ms      8   2382  0-> 1-> 2-> 3-> 4-> 5-> 6-> 7-> 0
250us    7   2378  0-> 1-> 2-> 3-> 4-> 5-> 6-> 0
114us    6   2336  0-> 1-> 2-> 3-> 4-> 5-> 0
136us    5   2321  0-> 1-> 2-> 3-> 4-> 0
85us     4   1570  0-> 1-> 2-> 3-> 0
63us     3   1085  0-> 1-> 2-> 0
76us     2   306   0-> 1-> 0

PSO method

The most notable error is mentioned in the TSPFAQ and can be found at this address: https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/TSPFAQ.html

The coordinates given in the TSP library here: https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/burma14.tsp.gz

are entered correctly in the original code, but they are not simple Cartesian coordinates. They are "decimal degrees": the value to the left of the decimal point is degrees latitude (or longitude), but the value to the right of the decimal point is minutes. Here is an excerpt from the FAQ showing how to compute the distances correctly:

Q: I get wrong distances for problems of type GEO.
A: There has been some confusion of how to compute the distances. I use the following code.

For converting coordinate input to longitude and latitude in radian:

PI = 3.141592;

deg = (int) x[i];
min = x[i]- deg;
rad = PI * (deg + 5.0 * min/ 3.0) / 180.0;

For computing the geographical distance:

RRR = 6378.388;

q1 = cos( longitude[i] - longitude[j] );
q2 = cos( latitude[i] - latitude[j] );
q3 = cos( latitude[i] + latitude[j] );
dij = (int) ( RRR * acos( 0.5*((1.0+q1)*q2 - (1.0-q1)*q3) ) + 1.0);
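The FAQ recipe above can be wrapped into a pair of standalone C++ functions. The function names here are my own, for illustration; the project's code may organize this differently:

```cpp
#include <cmath>

// Convert a TSPLIB "decimal degrees" value (integer part = degrees,
// fractional part = minutes) to radians, using TSPLIB's value of PI.
double to_radians(double x) {
    const double PI = 3.141592;
    int deg = static_cast<int>(x);
    double min = x - deg;
    return PI * (deg + 5.0 * min / 3.0) / 180.0;
}

// Geographical distance in km between two (latitude, longitude) pairs,
// rounded the way the TSPLIB FAQ specifies.
int geo_distance(double lat_i, double lon_i, double lat_j, double lon_j) {
    const double RRR = 6378.388;  // TSPLIB's earth radius in km
    double q1 = std::cos(to_radians(lon_i) - to_radians(lon_j));
    double q2 = std::cos(to_radians(lat_i) - to_radians(lat_j));
    double q3 = std::cos(to_radians(lat_i) + to_radians(lat_j));
    return static_cast<int>(
        RRR * std::acos(0.5 * ((1.0 + q1) * q2 - (1.0 - q1) * q3)) + 1.0);
}
```

Treating the coordinates as plain Cartesian values instead of applying this conversion is exactly the error described in the FAQ.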

Using Particle Swarm Optimization (PSO)

By comparison, the total time the PSO takes to find the same solution is less than a second... if it finds the solution at all. It generally does, but sometimes it takes a few runs before it finds it again.

I will be modifying this program over time to:

  • be more generic. I've converted most of the code into C++ templates, but there's still more to go. That should allow the same code to be used in other applications, e.g. with different solution costs, different solution representations, etc.
  • use different algorithms for ant exploration. Currently, the pheromone levels are laid down based on a few criteria. I'd like to make this part of the generic code (again, using templates) and then try other possible algorithms.
  • determine why the ants sometimes fail to find the best solution. Perhaps once they converge on an optimum, there isn't sufficient exploration done to find the best. Perhaps the q0 parameter needs to be adjusted dynamically, e.g. if the "best" solution hasn't changed in x iterations, adjust q0 so the swarm explores more and converges less.

I do not have statistics showing the distribution of tour lengths across all possible tours. I believe the maximum tour length is somewhere around 6400. Counting every ordering of the 14 cities, there are 14! = 87,178,291,200 (roughly 10^11) possible tours.
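The tour counts in this document treat every ordering of the cities as a distinct tour, i.e. n!. A quick sketch to verify the figures (note that 22!, used later for Ulysses22, overflows a 64-bit integer):

```cpp
#include <cstdint>

// Number of tours counted as all n! orderings of n cities
// (rotations and reflections are not deduplicated here).
// Overflows std::uint64_t for n > 20, so 22! cannot be checked this way.
std::uint64_t factorial(unsigned n) {
    std::uint64_t f = 1;
    for (unsigned k = 2; k <= n; ++k) f *= k;
    return f;
}
```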

According to Dorigo et al., q0 = 0.9 and RHO = 0.1 are good values. I've tried them and they work OK; e.g. after 500 runs (approx. 30 seconds in Debug mode):

  • 26% of the runs find the optimum value of 3323
  • 36% find a tour less than 3400
  • 36% find a tour less than 3800
  • 1% (the remaining runs) are over 3800
  • no runs had a tour length above 4000

I tried varying q0 and RHO like so:

q0   RHO   Optimum runs  <3400  <3800  others  Notes
0.9  0.1   26%           36%    36%    1%      Dorigo et al
0.9  0.3   0%            17%    40%    42%     substantially worse
0.1  0.1   26%           37%    34%    2%      similar to the first result; the algorithm may be insensitive to q0
0.9  0.15  7%            35%    48%    8%      much worse; it looks like RHO = 0.1 is a good value
0.0  0.1   25%           32%    37%    5%
1.0  0.1   26%           37%    32%    3%

The last two results seem to imply that choosing a city at random (q0 = 0.0) and choosing a city based on pheromone strength (q0 = 1.0) work about equally well.
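For context on what q0 = 0.0 and q0 = 1.0 mean here, this is a hedged sketch of a Dorigo-style pseudo-random-proportional choice; the project's actual selection code may differ, and the function name is mine:

```cpp
#include <random>
#include <vector>

// Hypothetical sketch of the q0 rule: with probability q0 pick the
// city with the strongest pheromone trail (exploitation), otherwise
// pick a city with probability proportional to its trail strength
// (weighted exploration).
int next_city(const std::vector<double>& trail,  // pheromone to each candidate city
              double q0, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) < q0) {  // exploit: greedy choice
        int best = 0;
        for (int i = 1; i < static_cast<int>(trail.size()); ++i)
            if (trail[i] > trail[best]) best = i;
        return best;
    }
    // explore: roulette-wheel selection proportional to trail strength
    std::discrete_distribution<int> pick(trail.begin(), trail.end());
    return pick(rng);
}
```

Under this reading, q0 = 1.0 makes every choice the greedy pheromone pick and q0 = 0.0 makes every choice a weighted-random draw, so the last two table rows compare pure exploitation against pure exploration.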

For these results I used 14*2/3 ≈ 9 ants. Next I varied the number of ants, leaving q0 = 0.9 and RHO = 0.1:

# Ants  Optimum runs  <3400  <3800  others  Notes
9       26%           36%    36%    1%      Dorigo et al
14      0%            100%   0%     0%      degenerate case; the result was always 3346
13      29%           62%    8%     0%      took a bit longer to execute 500 runs (around 40 seconds)
12      29%           49%    21%    0%
11      29%           45%    25%    0%
5       14%           13%    45%    26%     took only 15 seconds to execute
4       13%           17%    33%    36%
3       6%            13%    24%    54%     much worse

Ulysses16

I added a 16-city problem called ulysses16. Its best route has a cost of 6859. Counting every ordering of the 16 cities, it has 16! = 20,922,789,888,000 (roughly 10^13) total routes.

Adding this TSP had a couple of effects:

  • I had to genericize some of the code to allow a run of either Burma14 or Ulysses16.
  • I had to increase the number of iterations per run to get the same results. Burma14 needed 200 iterations to find the optimum route in nearly 100% of the runs; Ulysses16 needed 6 times that number (1200 iterations) to achieve the same result. Note however that Ulysses16 has around 250 times the number of possible routes that Burma14 does.

There were 14 ants:

# iterations  Optimum runs  <7000  <7200  others  Notes
1200          99%           1%     0%     0%
1100          100%          0%     0%     0%
1000          97%           3%     0%     0%      much slower: 90 seconds for 100 runs
900           98%           2%     0%     0%
800           95%           5%     0%     0%

Ulysses22

I added a 22-city problem called ulysses22. Its best route has a cost of 7013. Counting every ordering of the 22 cities, it has 22! = 1,124,000,727,777,607,680,000 (roughly 10^21) total routes.

Added the following features:

  • now shows total elapsed time. The times were taken with a Release-mode build of pso.exe.
  • I had to increase the number of iterations per run to get comparable results. Ulysses16 needed 1200 iterations to find the optimum route in nearly 100% of the runs; Ulysses22 did not find the optimum even with 32000 iterations. Note that Ulysses22 has more than 50 million times the number of possible routes that Ulysses16 does.

There were 20 ants:

# iterations  Optimum runs  <7100  <7200  others  Time/run (sec)  Notes
32000         0%            100%   0%     0%      23.0            most routes were 7035, with one 7019
16000         0%            100%   0%     0%      11.0            most routes were 7035
8000          0%            97%    2%     0%      6.0
4000          0%            83%    17%    0%      3.0
2000          0%            54%    46%    0%      1.5
1500          0%            52%    48%    0%      1.1
1000          0%            35%    65%    0%      0.8
500           0%            17%    80%    3%      0.4
100           0%            4%     53%    43%     0.08

Reinitialize if stagnant

I have modified the algorithm slightly, resulting in substantially better results.

I modified AntSwarm::find_best_solution() to detect when the algorithm becomes stuck in a local optimum: if best_solution_cost has not changed after 10 iterations, it is considered stuck. I modified AntSwarm::run() to then reinitialize the ant swarm.

When I first tried this change, it did not work. After stepping through the code, I realized that I also had to reset the saved current best solution. The solutions the reinitialized swarm was finding were never better than the old best solution and so were all discarded.

The existence of a known local optimum "prevented" the investigation of other routes. Since other routes started at worse values than the current local optimum, they were not investigated further. They could have led to even better routes, but they were abandoned prematurely. The ant swarm needs a chance to pursue other potential routes for a while, even if they are not currently better than the global best.
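A minimal sketch of this stagnation check, assuming a simple counter; the real logic lives in AntSwarm::find_best_solution() and AntSwarm::run(), and this struct is hypothetical:

```cpp
#include <climits>

// Hypothetical sketch of the reinitialize-if-stagnant logic.
struct StagnationGuard {
    int best_cost = INT_MAX;      // best tour cost seen so far
    int stagnant = 0;             // iterations since best_cost improved
    static const int LIMIT = 10;  // iterations tolerated without improvement

    // Feed in each iteration's best tour cost. Returns true when the
    // swarm should be reinitialized because the best has not improved
    // for LIMIT iterations.
    bool should_reinit(int iteration_best) {
        if (iteration_best < best_cost) {
            best_cost = iteration_best;
            stagnant = 0;
            return false;
        }
        if (++stagnant >= LIMIT) {
            stagnant = 0;
            best_cost = INT_MAX;  // forget the stale local optimum too
            return true;
        }
        return false;
    }
};
```

The key detail, as described above, is resetting best_cost along with the swarm; otherwise every tour from the fresh swarm loses to the stale optimum and is discarded.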

There is some analogy to natural ant swarms. An entire ant hill does not go looking for food in one and only one area. There are many troops of ants that search various places. Even if a good source of food is found, other troops continue searching other areas. If at first these forays are not as successful as the current best, that's ok, that ant troop keeps searching. If later on the ant troop finds an even better source of food, it carries that information back to the ant hill.

There is also some analogy to human market-driven strategies. Even if the current best solution to a problem is well known, there is still investment in research into other areas of promise, even though that research is not yet delivering a better solution than what is currently available. An example can be found in computer architectures. The current "global" best is the von Neumann architecture, but there are companies investigating other architectures. Currently, those architectures may not be better than von Neumann machines, but eventually they may lead to better products.

With the changes in place, I got these results for Burma14 (12 ants), varying the number of iterations:

# iterations  Optimum runs  <3400  <3800  others  Notes
200           100%          0%     0%     0%      bingo!
175           100%          0%     0%     0%
160           100%          0%     0%     0%
150           93%           7%     0%     0%
100           93%           7%     0%     0%      number of iterations used in previous versions
90            83%           17%    0%     0%
80            86%           14%    0%     0%
70            83%           17%    0%     0%
60            83%           17%    0%     0%
50            65%           34%    1%     0%

- John Arrizza