Question

So I'm working on a little side project to experiment with genetic algorithms. The project involves two classes, Critters and Food. The Critters gain hunger each tick and lose hunger when they find Food. The Critters can move, and the Food is stationary. Each Critter has a genome, which is just a string of randomly generated characters. The goal is that after several generations, the Critters will evolve specialized movement patterns that result in optimal food consumption.

Right now, each Critter is governed by a neural network. The neural network is initialized with weights and biases derived from the Critter's genome. The first input into the neural network is [0,0]. The neural network produces two outputs which dictate the direction of the Critter's x and y movement respectively. This output is used as the input for the neural network at the next tick. For example:

1: [0, 0] -> NN -> [0.598..., -0.234...]   // Critter moves right and up
2: [0.598..., -0.234...] -> NN -> [-0.409..., -0.232...]   // Critter moves left and up
3: [-0.409..., -0.232...] -> NN -> etc.
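
For reference, here is a minimal sketch of that loop (Python, with a hypothetical single-layer tanh network standing in for the genome-derived one):

    import math, random

    # Stand-in for the genome-derived network: a single 2x2 layer with tanh activation.
    W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b = [random.uniform(-1, 1) for _ in range(2)]

    def nn(inp):
        # out_i = tanh(W[i][0]*inp[0] + W[i][1]*inp[1] + b[i])
        return [math.tanh(sum(W[i][j] * inp[j] for j in range(2)) + b[i]) for i in range(2)]

    state = [0.0, 0.0]        # first input is [0, 0]
    for tick in range(10):
        state = nn(state)     # the output becomes the next tick's input
        print(tick, state)    # the values usually stop changing after a few ticks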

The problem is that, regardless of how the weights are initialized, the neural network is finding a sort of "fixed point." That is, after two or three iterations the output and input are practically the same so the Critter always moves in the same direction. Now I'm not training the neural net and I don't really want to. So what I'm looking for is an alternative method of generating the output.

More specifically, let's say I have n random weights generated by the genome. I need a relation determined by those n weights that can map (in the loosest sense of the word) two inputs in the range [-1, 1] to two outputs in the same range. The main thing is that I want the weights to have a significant impact on the behavior of the function. I don't want it to be something like y = mx + b where we're only changing m and b.

I know that's a pretty vague description. At first I thought the neural network would be perfect, but it seems as though the inputs have virtually no effect on the outputs without training (which is fair, since neural networks are meant to be trained).

Any advice?

Solution

Just an idea.

You have f(genome) -> (w_1, w_2, ..., w_n), where f generates the weights w_i from the genome.

You could, for example, use a hash function h and compute [h(w_1, ..., w_(n/2)), h(w_(n/2+1), ..., w_n)].
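
A rough sketch of that scheme in Python (hashlib and the particular scaling of the digest to [-1, 1] are my own choices, not a prescribed method):

    import hashlib

    def hash_to_unit(values):
        # Hash a tuple of weights and scale the first 64 bits of the digest to [-1, 1].
        digest = hashlib.sha256(repr(values).encode()).digest()
        n = int.from_bytes(digest[:8], "big")
        return 2.0 * n / (2**64 - 1) - 1.0

    def move(weights):
        # [h(w_1, ..., w_(n/2)), h(w_(n/2+1), ..., w_n)]
        half = len(weights) // 2
        return [hash_to_unit(tuple(weights[:half])),
                hash_to_unit(tuple(weights[half:]))]

Note that an ordinary cryptographic hash like SHA-256 jumps around wildly for tiny changes in the weights, which is exactly what the next point addresses.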

Normally, a hash function gives very different outputs for small changes in its input, but not always. You can look for hash functions that are continuous (small change in input, small change in output); this kind of function is used for similarity search, and http://en.wikipedia.org/wiki/Locality_sensitive_hashing might provide some ideas. That way you could even apply the hash directly to the genome.

Otherwise, you could try to split the genome or the weights and give the splits different purposes. Let's say n = 4.

(w_1, w_2) affects x

(w_3, w_4) affects y

You could then compute x as (w_1 + w_2 * Random_[-1,1]) / 2, where Random_[-1,1] is a random number drawn from the interval [-1, 1], assuming that every w_i is in [-1, 1].

Similarly for y.
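
In code, that split might look something like the following sketch (Python; the function name and the assumption that the weights already lie in [-1, 1] are mine):

    import random

    def step(w1, w2, w3, w4):
        # Assumes each w_i is in [-1, 1], so x and y also end up in [-1, 1].
        x = (w1 + w2 * random.uniform(-1, 1)) / 2
        y = (w3 + w4 * random.uniform(-1, 1)) / 2
        return x, y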

Your genetic algorithm would then optimize how fast and how randomly the Critters move in order to find food optimally. If you have more weights (or a longer genome), you could come up with a fancier function in a similar spirit.

This actually shows that, with genetic algorithms, the problem solving shifts to finding a good genome representation and a good fitness function, so don't worry if you get stuck on it for a bit.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow