Problems in which a sequence must be found for a set of tasks (or jobs) occur in many areas, for example the execution of programs, aircraft waiting for take-off and landing, logistics planning, communication planning, or manufacturing processes on machines. The sequence is the relative position of a job i with respect to all other jobs.
In general, such a plan looks like the one shown in the following figure:
However, in my master's thesis I concentrated on a specific problem, which M. Rabiee, M. Zandieh and A. Jafarian describe in their paper "Scheduling of a no-wait two-machine flow shop with sequence-dependent setup times and probable rework using robust meta-heuristics" (AICA). This problem is restricted to two machines or processing steps, which require setup and, if necessary, rework, as shown in the following picture:
This more specific problem also occurs in many areas, for example in chemical reactions: the machine must be cleaned beforehand (setup), and the reaction may take longer than planned (rework). The goal is then to find a near-optimal sequence over several jobs that minimizes the average completion time. Alternatively, the total completion time can be used as the optimization target.
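To make the objective concrete, here is a minimal sketch of how the total completion time of a given sequence could be computed for a simplified two-machine no-wait flow shop. The function name and the data layout (processing times `p1`/`p2`, a setup matrix on machine 1) are my own illustration, and rework is ignored:

```kotlin
// Simplified model: each job runs on machine 1 and then, without waiting,
// immediately on machine 2. setup[i][j] is the setup time on machine 1
// when job j directly follows job i. Rework is not modelled here.
fun totalCompletionTime(seq: List<Int>, p1: IntArray, p2: IntArray, setup: Array<IntArray>): Int {
    var m1Free = 0   // time at which machine 1 becomes free
    var m2Free = 0   // time at which machine 2 becomes free
    var prev = -1
    var total = 0
    for (j in seq) {
        val s = if (prev < 0) 0 else setup[prev][j]
        // No-wait constraint: the start on machine 1 may have to be delayed
        // so that machine 2 is free exactly when the job leaves machine 1.
        val start = maxOf(m1Free + s, m2Free - p1[j])
        val endM1 = start + p1[j]
        val endM2 = endM1 + p2[j]
        m1Free = endM1
        m2Free = endM2
        total += endM2
        prev = j
    }
    return total
}
```

The average completion time is then simply this total divided by the number of jobs.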
An exact procedure that finds the optimal sequence becomes practically impossible to compute from a certain problem size onwards: computing power and time are simply not sufficient. We are not talking about thousands of jobs here, but only hundreds. To handle larger instances, heuristic optimization methods are needed, for example NEH, ant colony optimization and genetic algorithms. I considered these three in my master's thesis and tried to beat NEH and an adapted genetic algorithm with an adapted ant colony algorithm.
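For illustration, an exact brute-force search simply evaluates all n! permutations, which is why it stops being practical so quickly (10! is already about 3.6 million sequences). The helper names below are my own:

```kotlin
// Lazily enumerate all permutations of a list.
fun <T> permutations(items: List<T>): Sequence<List<T>> = sequence {
    if (items.size <= 1) {
        yield(items)
    } else {
        for (i in items.indices) {
            val rest = items.toMutableList().also { it.removeAt(i) }
            for (tail in permutations(rest)) yield(listOf(items[i]) + tail)
        }
    }
}

// Exact search: try every sequence and keep the cheapest one.
// With n jobs this evaluates n! candidates, so it only works for tiny n.
fun bestSequence(jobs: List<Int>, cost: (List<Int>) -> Int): List<Int> =
    permutations(jobs).minByOrNull(cost)!!
```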
Ant Colony Optimization (ACO)
The ACO was the core of my research. The idea is relatively simple and borrowed from nature: the algorithm is named after ants and simulates their foraging behaviour. The search can be pictured like the following image:
In the simplest case (A), the ant runs between the nest and the food with no obstacles in its way. But this is unrealistic. If you place an obstacle in the path of the ants, as in (B), they first run around it by chance. As they run, they continuously leave a trail of pheromones, which evaporates over time.
Now consider two ants that start at time t=0. Ant 1 reaches the food at t=1, ant 2 only at t=2 because its path is longer. At t=2, ant 1 is already back at the nest. If ant 3 now wants to start, it follows the strongest pheromone scent. The trail of ant 1 has already been walked twice, so it smells twice as strong and has a higher chance of being chosen. The more often a trail is walked, the more attractive it becomes; shorter trails are walked more often, while longer ones evaporate and disappear over time. That's the whole idea.
These trails can be simulated on a computer with matrices. Evaporation is simulated by decreasing all values in the matrix, and good paths are then reinforced by increasing their values. Agents simulate the ants and choose their path based on the values in the matrix. When the algorithm starts, the matrix is initialized with a placeholder value, and only the best solutions are allowed to update it (more details can be found in my master's thesis).
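These matrix mechanics can be sketched as follows. This is my own simplified illustration, not the exact implementation from the thesis: `tau[i][j]` holds the pheromone value for scheduling job j directly after job i, ants build sequences by roulette-wheel selection, and the update evaporates all entries before reinforcing the best sequence found:

```kotlin
import kotlin.random.Random

// Minimal ACO skeleton for a sequencing problem (illustrative names).
class AntColony(val n: Int, val evaporation: Double = 0.1, val rng: Random = Random(42)) {
    // Pheromone matrix, initialized with a uniform placeholder value.
    val tau = Array(n) { DoubleArray(n) { 1.0 } }

    // One ant builds a sequence by roulette-wheel selection on pheromone values.
    fun buildSequence(): List<Int> {
        val remaining = (0 until n).toMutableList()
        val seq = mutableListOf(remaining.removeAt(rng.nextInt(n)))
        while (remaining.isNotEmpty()) {
            val prev = seq.last()
            val weights = remaining.map { tau[prev][it] }
            var r = rng.nextDouble() * weights.sum()
            var pick = remaining.size - 1
            for (k in remaining.indices) {
                r -= weights[k]
                if (r <= 0) { pick = k; break }
            }
            seq.add(remaining.removeAt(pick))
        }
        return seq
    }

    // Evaporate all entries, then reinforce the edges of the best sequence.
    fun update(best: List<Int>, quality: Double) {
        for (row in tau) for (j in row.indices) row[j] *= (1 - evaporation)
        for (k in 0 until best.size - 1) tau[best[k]][best[k + 1]] += quality
    }
}
```

In a full run, many ants build sequences per iteration, each sequence is scored with the objective from above, and only the best one per iteration updates the matrix.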
I implemented the ACO in Kotlin, and I also reimplemented NEH and AICA to make a fair comparison. NEH is deterministic; AICA and the ACO have termination criteria that must be specified. The code is publicly available on my GitHub.
NEH is an extremely fast algorithm. Its results are available almost instantly, but they are far from optimal for smaller problems. As can be seen in the figure above, NEH only achieved better solutions for bigger problems; in all other cases, the ACO found the best solution. AICA was beaten in every comparison. Regarding running time, the ACO was the slowest in these tests, although in theory it should become faster than NEH for much bigger problems. In my tests, however, the ACO was always much slower, while finding better results on smaller problems.
I tried various optimizations and initializations to adapt the ACO to the problem and improve it, but the plain ACO was the most efficient for this problem. Only tuning parameters such as the number of ants, the evaporation rate and the selection probability improved the ACO. In the following figure, for example, the number of ants is varied: too few ants find an inferior result, but more ants do not necessarily find better ones.
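One simple way to explore such parameter effects is a small grid search over the settings. The `runAco` function below is a hypothetical stand-in for a complete ACO run (here just a dummy cost); only the sweep structure is the point:

```kotlin
// Hypothetical stand-in for one complete ACO run that returns the best
// cost found; in a real setup this would execute the full algorithm.
fun runAco(ants: Int, evaporation: Double): Double =
    100.0 / ants + evaporation * 10.0  // dummy cost, for illustration only

// Grid search over ant count and evaporation rate, keeping the best setting.
fun tuneParameters(antCounts: List<Int>, rates: List<Double>): Triple<Int, Double, Double> {
    var best = Triple(antCounts.first(), rates.first(), Double.MAX_VALUE)
    for (ants in antCounts) {
        for (rho in rates) {
            val cost = runAco(ants, rho)
            if (cost < best.third) best = Triple(ants, rho, cost)
        }
    }
    return best
}
```

With a real objective, each grid point would also have to be averaged over several runs, since the ACO is stochastic.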
If the duration of the calculation is also taken into account, it becomes clear that a compromise must be made between the quality of the result and the computation time.
ACOs should also be an interesting optimization method in industry. Especially for tasks that cannot be solved exactly in reasonable time, it is important to compare different optimization methods. If the optimization has to be repeated very often, it is more attractive to choose a method like NEH and sacrifice a little solution quality in order to keep a high frequency of calculations. If high solution quality is more important, for example because the calculation runs overnight, an ACO is the better choice. Here, a lot of quality can be gained by tuning the parameters to the corresponding problem. For the master's thesis, the parameters were optimized for *50* jobs; you can certainly gain some more accuracy by adjusting them exactly to a given problem. All in all, ACOs are an exciting topic, and I hope to be able to use them in practice soon.
More information can be found in my master's thesis. If you have any questions, feel free to comment on the article or send me an email.