Question

So I've written an implementation of the ant colony optimization (ACO) meta-heuristic, and I'd like to write some unit tests. However, I'm not sure of the best way to test a method whose ability to return "correct" answers varies with the settings it's given.

How does one unit test a heuristic algorithm?

Code lives at https://github.com/rhgrant10/pants by the way.


Solution

I test my TSP implementation with this integration test class, which runs two tests:

  • Asserts that it reaches a certain score within 600 seconds. I get that score within 10 seconds on my machine, so the long timeout is only for really slow Jenkins slaves. If it doesn't reach that score within the time limit, it's probably never going to reach it. The point of this test is that no Exceptions are thrown (= smoke test) and that it at least improves the score in a reasonable time. So it's better than no test :) (see the sketch after this list)
  • Puts the solver in an assertionMode (FAST_ASSERT) and asserts that it reaches a certain, easier score within 600 seconds. In assertionMode, the Solver enables sanity checks in its deepest loops (at a performance cost). This is to flush out bugs in the incremental score calculation (= delta score calculation), etc.

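The same idea carries over to the Python setting of the question. Below is a minimal sketch of that first kind of smoke test, assuming a hypothetical `solve_tsp` entry point and a hypothetical `BERLIN52` benchmark instance with a known, easily reachable tour length; neither is part of the Pants library, so substitute your own solver call, instance, and target score.

```python
import time
import unittest

# Hypothetical placeholders -- not a real API; substitute your own solver and benchmark:
#   solve_tsp(nodes, time_limit_seconds) -> best tour length found within the budget
#   BERLIN52 -> a benchmark instance whose good tour lengths are well known
from myproject.solver import solve_tsp      # hypothetical
from myproject.benchmarks import BERLIN52   # hypothetical

TARGET_SCORE = 8000.0     # a score the solver is known to reach easily on this instance
TIME_LIMIT_SECONDS = 600  # generous on purpose, so slow CI workers still pass


class SolverSmokeTest(unittest.TestCase):
    def test_reaches_target_score_within_time_limit(self):
        """Smoke test: no exceptions are raised, and the solver improves the
        score to a known-reachable level within the time budget."""
        start = time.monotonic()
        best = solve_tsp(BERLIN52, time_limit_seconds=TIME_LIMIT_SECONDS)
        elapsed = time.monotonic() - start
        self.assertLessEqual(best, TARGET_SCORE)
        self.assertLess(elapsed, TIME_LIMIT_SECONDS + 60)  # the time budget was respected


if __name__ == "__main__":
    unittest.main()
```
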
On top of that, I have unit tests for the specific components of my algorithm, to check that they behave as expected; see, for example, this unit test class.
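For the ACO case in the question, those component-level tests can target the deterministic pieces, which do have exact expected values: pheromone evaporation, the tau**alpha * eta**beta attractiveness weighting, tour-length bookkeeping, and so on. A small sketch follows, with the component functions written inline purely for illustration (they are not the Pants API):

```python
import unittest

# Inline, illustrative versions of two deterministic ACO components; in a real
# project you would import these from your own solver module instead.

def evaporate(pheromone, rho):
    """Evaporation rule: tau <- (1 - rho) * tau on every edge."""
    return {edge: (1.0 - rho) * tau for edge, tau in pheromone.items()}


def selection_weights(edges, alpha=1.0, beta=2.0):
    """Unnormalized edge attractiveness: tau**alpha * eta**beta, with eta = 1/distance."""
    return {
        edge: (tau ** alpha) * ((1.0 / distance) ** beta)
        for edge, (tau, distance) in edges.items()
    }


class AcoComponentTest(unittest.TestCase):
    def test_evaporation_scales_every_edge(self):
        result = evaporate({("a", "b"): 2.0, ("b", "c"): 4.0}, rho=0.5)
        self.assertAlmostEqual(result[("a", "b")], 1.0)
        self.assertAlmostEqual(result[("b", "c")], 2.0)

    def test_shorter_edge_with_equal_pheromone_is_more_attractive(self):
        weights = selection_weights({("a", "b"): (1.0, 2.0), ("a", "c"): (1.0, 4.0)})
        self.assertGreater(weights[("a", "b")], weights[("a", "c")])


if __name__ == "__main__":
    unittest.main()
```

Seeding the random number generator (or making it injectable) makes even the probabilistic parts reproducible enough to assert on.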

Licensed under: CC-BY-SA with attribution