I'm exploring the world of linear genetic programming and I find myself stuck on one issue. It seems to me that the error landscape of even the simplest problem is extremely non-smooth. In particular, the error landscape always seems to contain huge plateaus of constant error (regions where the fitness of a solution is simply zero). This degrades the evolutionary algorithm into a random search over the space of programs and makes a solution almost impossible to discover. Does anyone out there have an explanation for how people get around this? What am I missing?
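To make the issue concrete, here is a stripped-down sketch of the kind of setup where I see this. The representation, operator set, and hit-count fitness are just illustrative choices on my part, not anything canonical:

```python
import random

# Stripped-down linear GP: a program is a list of (op, dst, src1, src2)
# instructions over a tiny register file; fitness counts exact hits on
# the target f(x) = x*x + 1.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_program(length=8, n_regs=4):
    return [(random.choice(list(OPS)),
             random.randrange(n_regs),
             random.randrange(n_regs),
             random.randrange(n_regs)) for _ in range(length)]

def run(program, x, n_regs=4):
    regs = [0.0] * n_regs
    regs[0] = float(x)        # register 0 holds the input (and the output)
    regs[1] = 1.0             # register 1 holds the constant 1
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[0]

def fitness(program, cases):
    # Exact-hit fitness gives no partial credit, so huge regions of
    # program space collapse onto the same score (usually zero).
    return sum(1 for x, y in cases if run(program, x) == y)

cases = [(x, float(x * x + 1)) for x in range(-5, 6)]
scores = [fitness(random_program(), cases) for _ in range(10000)]
print("programs with non-zero fitness:", sum(s > 0 for s in scores), "/ 10000")
```

The point is only to show the shape of the setup: a fitness like this gives no partial credit, which is exactly where the flat regions come from.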


Solution

This observation is, unfortunately, quite typical in the world of GP.

You may find this article interesting: J. Lehman, K. O. Stanley: Efficiently evolving programs through the search for novelty
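As a very rough sketch of the core idea (my own simplification, not the authors' implementation; the behaviour descriptor, distance measure, and k are arbitrary choices): instead of selecting on error, you select on how different a program's behaviour is from everything seen so far.

```python
import numpy as np

def behavior(program, run, cases):
    # Behaviour descriptor: the program's raw outputs on the test cases.
    # (`run` stands in for whatever interpreter you already use.)
    return np.array([run(program, x) for x, _ in cases])

def novelty(b, others, k=15):
    # Novelty = mean distance to the k nearest behaviours among the
    # current population plus an archive of previously novel behaviours.
    dists = sorted(float(np.linalg.norm(o - b)) for o in others)
    return float(np.mean(dists[:k]))
```

Selection then uses novelty (or a blend of novelty and error), so even when every individual sits on the same zero-fitness plateau there is still a gradient to follow: programs that behave differently get rewarded, and the archive keeps the search from cycling.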

Other tips

The key is not to choose too high a selection pressure. Excessive selection pressure leads to a loss of diversity, which makes it much harder to reach a global optimum that is hard to find in the first place. Under weak pressure, even unfit individuals have a chance of creating offspring, which can lead to the discovery of new optima. Another influence is the mutation step width. If you do use a high selection pressure, you should at least ensure that wide mutation steps remain possible, even if they occur with lower probability. Some authors even suggest giving the mutation operator the power to reach every part of the search space within a single step (see the sketch below): http://www.lehmanns.de/shop/nocategory/3400811-9783826597008-anwendungsorientierter-entwurf-evolutionaerer-algorithmen
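To make that concrete, here is one way (of many) to combine weak selection pressure with occasionally large mutation steps; the geometric step count and the `random_instruction` callback are just illustrative choices:

```python
import random

def tournament_select(population, fitnesses, size=2):
    # A small tournament keeps selection pressure weak: unfit individuals
    # still reproduce occasionally, which preserves diversity.
    contenders = random.sample(range(len(population)), size)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def mutate(program, random_instruction, p_continue=0.5):
    # The number of rewritten instructions is geometrically distributed:
    # usually one or two, but arbitrarily many with non-zero probability,
    # so wide jumps through program space remain possible.
    child = list(program)
    while True:
        child[random.randrange(len(child))] = random_instruction()
        if random.random() > p_continue:
            return child
```

Raising the tournament size increases the pressure; raising p_continue makes large jumps more common. In the extreme, letting the mutation rewrite the whole program with some small probability is the "reach every point in one step" property mentioned above.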
