Question

I'm doing a comparison of two conservation area design tools, namely Marxan and ConsNet, both of which use metaheuristic algorithms to solve a version of the minimum set cover problem. Marxan uses Simulated Annealing and ConsNet uses Tabu Search. Although my background is in Biology, I think I was able to grasp some of the concepts of optimization through metaheuristics.

However, there are two things I still haven't figured out about Tabu Search. The first is how it escapes local optima. I know it can't reverse its moves, and that stops it from cycling, but I don't know what makes it leave a local optimum once it finds it. I can understand how Simulated Annealing does it - it has a certain probability of accepting a worse solution which decreases over time until it no longer accepts a worse solution - but I don't know how TS does it.

The second issue: in the ConsNet manual, the following statement appears

The search is completely deterministic, but it can make decisions about how to proceed based on the current state of the solution archive or the current state of the objective

Is TS always deterministic? From reading some sources I got the idea that moves could be random, like in SA. But then there are some papers that talk about "Deterministic Tabu Search". How does deterministic tabu search know which moves to take and how does it escape local optima? It must accept worse solutions sometimes, right?

Many thanks in advance


Solution

Multiple questions, so multiple answers :)

  • Tabu Search gets out of local optima by always moving to the best allowed neighbor, even when that neighbor is worse than the current solution. It accepts a worse solution if all improving moves are tabu (and not aspired) and either all moves have been evaluated or a selectionLimit or acceptedLimit has been reached.

  • Both TS and SA can be written to be "reproducible", which means that running the search twice will produce the same results. I presume that's what they wanted to imply by saying "completely deterministic" (which is a different property). The trick to get reproducibility is simple: just reuse the same Random instance everywhere and initialize it with a fixed randomSeed.

  • TS, when scaling out, will be unable to evaluate all possible moves per step on a reasonably sized dataset, so it needs to resort to random move selection too.

  • TS, SA (and Late Acceptance, for that matter) are competitive. Which one performs best depends on the use case, and that is mostly impossible to predict in advance. The required constraints can impact which performs best, but the actual dataset has far less impact.
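The escape mechanism from the first bullet can be sketched in a few lines of Java. This is an illustrative toy (minimize f(x) = (x − 7)² over integers, with moves x ± 1), not OptaPlanner's actual API; the class and method names are made up for the example:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

public class TabuSketch {

    // Toy objective: minimize (x - 7)^2, so the optimum is x = 7.
    static int score(int x) {
        return (x - 7) * (x - 7);
    }

    static int search(long randomSeed) {
        Random random = new Random(randomSeed); // same seed => same trajectory
        int current = 0;
        int best = current;
        Deque<Integer> tabuList = new ArrayDeque<>(); // recently visited values are tabu
        int tabuTenure = 3;

        for (int step = 0; step < 30; step++) {
            Integer chosen = null;
            for (int candidate : new int[] {current - 1, current + 1}) {
                boolean tabu = tabuList.contains(candidate);
                // Aspiration criterion: a tabu move is still allowed
                // if it beats the best solution found so far.
                boolean aspired = score(candidate) < score(best);
                if (tabu && !aspired) {
                    continue;
                }
                if (chosen == null
                        || score(candidate) < score(chosen)
                        || (score(candidate) == score(chosen) && random.nextBoolean())) {
                    chosen = candidate; // best allowed move so far (random tie-break)
                }
            }
            if (chosen == null) {
                break; // every move is tabu and none aspired
            }
            // Key point: the best *allowed* move is accepted even when it is
            // worse than `current` -- that is how TS walks out of a local optimum.
            current = chosen;
            tabuList.addLast(current);
            if (tabuList.size() > tabuTenure) {
                tabuList.removeFirst();
            }
            if (score(current) < score(best)) {
                best = current;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best = " + search(0L)); // prints "best = 7"
    }
}
```

Once the search reaches x = 7, both neighbors score worse, yet the step still picks the best non-tabu neighbor and moves on; the best-so-far solution is simply remembered separately.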
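The reproducibility trick from the second bullet is just seeding. A minimal sketch (the `run` method is a hypothetical stand-in for a search that consumes randomness, not a real OptaPlanner call):

```java
import java.util.Random;

public class SeededRuns {

    // Hypothetical "search" that makes 1000 random decisions.
    static long run(long randomSeed) {
        Random random = new Random(randomSeed);
        long acc = 0;
        for (int i = 0; i < 1000; i++) {
            acc = 31 * acc + random.nextInt(100); // stands in for random move selection
        }
        return acc;
    }

    public static void main(String[] args) {
        // Same fixed seed -> identical sequence of random draws -> identical result,
        // even though the search itself is not deterministic in the strict sense.
        System.out.println(run(42L) == run(42L)); // prints "true"
    }
}
```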

Note: I am affiliated with OptaPlanner (java, open source).

Licensed under: CC-BY-SA with attribution