Question

I have a few optimization algorithms (for finding function minimum) and I'd like to check how good they are. Suppose I build test cases and compare the actual results to theoretical ones. What measures should I use to estimate if the function minimum was actually found?

I thought about:

  • mean number of function evaluations (± standard deviation)
  • success rate (how often it actually finds minimum)

Are there any others that I have missed? (Say the algorithm finishes within 1e-4 of the known solution: does that already count as a success or not?)

My main concern is not time complexity; it is the algorithm's accuracy in cases where the exact solution might never be found (e.g. multidimensional solution spaces). How do I calculate the rate of convergence?
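For concreteness, here is roughly how I tally the two metrics above over a set of test problems. This is only a sketch: the `optimizer(f, x0)` interface, the problem tuples, and the 1e-4 tolerance are my own placeholders.

```python
import numpy as np

def benchmark(optimizer, problems, tol=1e-4):
    """Tally success rate and function-evaluation counts over test problems.

    `optimizer(f, x0)` is assumed to return (x_best, n_evals); each problem
    supplies an objective `f`, a start point `x0`, and the known minimizer
    `x_star`. All of these names are placeholders for your own interfaces.
    """
    evals, successes = [], 0
    for f, x0, x_star in problems:
        x_best, n_evals = optimizer(f, x0)
        evals.append(n_evals)
        # Count a run as a success if it ends within `tol` of the known solution.
        if np.linalg.norm(np.asarray(x_best) - np.asarray(x_star)) < tol:
            successes += 1
    return {
        "success_rate": successes / len(problems),
        "mean_evals": np.mean(evals),
        "std_evals": np.std(evals, ddof=1),
    }
```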


Solution

A simple way to tell how "close" one solution is to another is to determine its standard error.

The standard error is the standard deviation of the answers you get over repeated runs. Divide the distance between the two solutions by that standard error. If the ratio is less than 1, the solutions are close, because a difference of that size could arise from randomness alone. A ratio under 2 is still respectable. Beyond 2, the solutions are not close.
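A rough sketch of that ratio test, assuming you can re-run the optimizer several times (the function name and the idea of comparing against a known reference minimizer are mine, not part of any library):

```python
import numpy as np

def closeness_ratio(solutions, x_reference):
    """Distance to a reference solution, in units of the run-to-run spread.

    `solutions` holds the minimizers found on repeated runs (one row per run,
    or a flat list for one-dimensional problems); `x_reference` is the known
    or theoretical minimizer. Rule of thumb from above: below 1 is close
    (within the noise), below 2 is still respectable, beyond 2 is not close.
    """
    solutions = np.asarray(solutions, dtype=float)
    if solutions.ndim == 1:
        solutions = solutions[:, None]       # treat a flat list as scalar runs
    # "Standard error" here is the spread (standard deviation) of the answers you get.
    spread = np.linalg.norm(solutions.std(axis=0, ddof=1))
    distance = np.linalg.norm(solutions.mean(axis=0) - np.asarray(x_reference, dtype=float))
    return distance / spread
```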

Another way is to take the second derivative of the objective at the solution, with respect to whatever you're varying. That second derivative represents the "sharpness" of the minimum, and the standard error is proportional to the inverse square root of it.
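A one-dimensional sketch of that curvature idea, using a central finite difference for the second derivative (the step size `h` is an arbitrary choice here and would need tuning to your function):

```python
import numpy as np

def sharpness_scale(f, x_min, h=1e-4):
    """Estimate f''(x_min) by a central difference and return 1/sqrt(f'').

    A large second derivative means a sharp minimum, so the scale over which
    the solution is effectively undetermined (proportional to the standard
    error) is small. `f` and `x_min` are placeholders for your objective and
    the minimizer it found.
    """
    second_deriv = (f(x_min + h) - 2.0 * f(x_min) + f(x_min - h)) / h**2
    return 1.0 / np.sqrt(second_deriv)

# Example: a quadratic 3*(x - 1)^2 has second derivative 6 everywhere,
# so the scale comes out near 1/sqrt(6) ~ 0.41.
print(sharpness_scale(lambda x: 3.0 * (x - 1.0) ** 2, 1.0))
```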

If you like math, look here and here.

OTHER TIPS

  1. Figure out the complexity of the algorithm (its Big O)
  2. Measure the time it takes to perform one iteration (if it is constant), or time several iterations and average the cost per iteration.

That's it.

To get the running time for a particular n, just take the amount of time for one iteration and plug it into your Big O. For example, if your Big O is O(n^2) and one iteration takes 100 milliseconds, then the running time for n = 10 will be about 10 seconds.
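A minimal sketch of that extrapolation; the callable, the cost model, and the warm-up/repetition counts are all placeholders you would supply:

```python
import time

def estimate_running_time(one_iteration, cost_model, n, warmup=3, reps=20):
    """Time one iteration and scale it by the Big-O cost model.

    `one_iteration` is a callable doing one unit of work, `cost_model(n)` is
    the Big-O expression (e.g. lambda n: n**2), and `n` is the target problem
    size. All of these are assumptions for illustration, not a fixed API.
    """
    for _ in range(warmup):                  # let caches and allocators settle
        one_iteration()
    start = time.perf_counter()
    for _ in range(reps):
        one_iteration()
    per_iteration = (time.perf_counter() - start) / reps
    return per_iteration * cost_model(n)

# With roughly 100 ms per iteration and an O(n^2) model, n = 10 gives about 10 s.
```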

Licensed under: CC-BY-SA with attribution