Question

Say we have 2 stochastic optimization algorithms (Genetic Algorithms, Particle Swarm Optimization, Cuckoo Search, etc.), A and B, and we want to find the global maximum of a function. If algorithm A performs better than algorithm B at optimizing function F on a 1-dimensional search space, does it also perform better than B at optimizing F on an N-dimensional search space?

I shall refer to function F in N dimensions as F_ND. Note that F_1D and F_ND are the same function, except in a different number of dimensions; the "landscape" is exactly the same, only of different dimensionality.

E.g., for the De Jong (sphere) function we have:

F_1D(x) = x[0]*x[0]
F_5D(x) = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3] + x[4]*x[4]

F_1D and F_5D have the same "shape"/"aspect"; only the number of coordinates differs.
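
For concreteness, here is a minimal Python sketch of the function above in arbitrary dimension (the name `dejong_sphere` is mine, not from the post):

```python
def dejong_sphere(x):
    """De Jong sphere function: f(x) = x[0]^2 + ... + x[N-1]^2.
    The landscape is identical in every dimension; only len(x) changes."""
    return sum(xi * xi for xi in x)

dejong_sphere([3.0])                         # F_1D
dejong_sphere([1.0, 2.0, 0.5, -1.0, 3.0])    # F_5D
```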

...put otherwise:

If general_performance(A, F_1D) > general_performance(B, F_1D), does general_performance(A, F_ND) > general_performance(B, F_ND) also hold (for a larger N, of course)?
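
The post does not define general_performance, so here is one hedged way to make it concrete: average the best objective value found over many independent runs. The `random_search` optimizer below is a toy stand-in for A or B (it minimizes, matching the sphere example), and all names are illustrative:

```python
import random

def sphere(x):
    """De Jong sphere: sum of squared coordinates (minimum at the origin)."""
    return sum(v * v for v in x)

def random_search(f, n_dims, rng, budget=1000, lo=-5.0, hi=5.0):
    """Toy stand-in for a real optimizer: uniform sampling, keep the minimum."""
    return min(f([rng.uniform(lo, hi) for _ in range(n_dims)])
               for _ in range(budget))

def general_performance(algorithm, f, n_dims, n_runs=30, seed=0):
    """Mean best value over independent runs (lower is better in this sketch)."""
    results = [algorithm(f, n_dims, random.Random(seed + i))
               for i in range(n_runs)]
    return sum(results) / n_runs

# Compare the same (toy) algorithm across dimensions:
print(general_performance(random_search, sphere, n_dims=1))
print(general_performance(random_search, sphere, n_dims=5))
```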


Solution

It is currently not known whether such a property holds. The No Free Lunch (NFL) theorem does not fully apply here, since you are talking about a very restricted set of problems. The example problem you gave remains separable in higher dimensions: one can optimize each variable independently and still reach the global optimum. In that case one could argue that you can split the 5-dimensional problem into five 1-dimensional problems and solve each dimension separately, which should always be cheaper than solving them jointly (assuming no dimension is free).
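To illustrate the separability argument, here is a sketch that solves the 5-dimensional sphere by running a toy 1-D optimizer on each coordinate independently; the optimizer and all names are my own assumptions, not from the answer:

```python
import random

def sphere_1d(x):
    """One coordinate's contribution to the sphere function."""
    return x * x

def optimize_1d(f, rng, budget=200, lo=-5.0, hi=5.0):
    """Toy 1-D random search: return the best point found within the budget."""
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = rng.uniform(lo, hi)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

rng = random.Random(42)
# Because F_5D is a sum of independent 1-D terms, solving each
# coordinate separately also solves the joint 5-D problem:
solution = [optimize_1d(sphere_1d, rng) for _ in range(5)]
value = sum(sphere_1d(x) for x in solution)  # F_5D at the assembled point
print(solution, value)
```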

I think it depends a lot on the type of problem, but in general I would not expect such a property to hold: for some problem and some N, you can likely find algorithms A and B such that A is better than B if and only if the dimension is less than N, and B is better than A if and only if the dimension is at least N.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow