Question

I am working on a feed-forward neural network (used for classification problems) which I am training with a PSO. I only have one hidden layer, and I can vary the number of neurons in that layer.

My problem is that the NN can learn linearly separable problems quite easily, but cannot learn problems that are not linearly separable (like XOR), even though it should be able to.

I believe my PSO is working correctly, because I can see that it minimises the error function of each particle (using mean squared error over the training set).

I have tried using a sigmoid and a linear activation function, with similar (bad) results. I also have a bias unit (which also doesn't help much).

What I want to know is whether there is something specific I might be doing wrong that could cause this kind of problem, or just some places I should look where the error might be.

I am a bit lost at the moment.

Thanks

Solution

PSO can train a neural network to solve non-linearly-separable problems like XOR. I've done this before; my algorithm takes about 50 iterations at most. Sigmoid is a good activation function for XOR. If it does not converge for non-separable problems, then my guess is that somehow your hidden layer is not having an effect, or is being bypassed, since the hidden layer is what typically makes non-separable problems learnable.
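As a quick sanity check on the hidden layer, here is a minimal sketch (plain Python, with hand-picked rather than trained weights, so the specific values are just illustrative) of a 2-2-1 sigmoid network that computes XOR. Feeding weights like these through your own forward pass is an easy way to confirm the hidden layer is actually contributing:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    # Hidden neuron 1 approximates OR, hidden neuron 2 approximates AND.
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)
    # Output approximates (h1 AND NOT h2), which is XOR.
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b), 3))   # expect ~0, ~1, ~1, ~0
```

If your network code gives very different outputs with equivalent weights plugged in, the bug is in the evaluation code rather than the PSO.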

When I debug AI, I often find it useful to determine first whether my training code or my evaluation code (the neural network in this case) is at fault. You might want to create a second trainer for your network so you can make sure your network code is calculating the output correctly. You could even do a simple "hill climber": pick a random weight and change it by a small random amount (up or down). Did your error get better? Keep the weight change and repeat. Did your error get worse? Drop the change and try again.
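A rough sketch of that hill climber (plain Python; the 2-2-1 weight layout, step size, and iteration count are just illustrative assumptions), reusing the same MSE-over-the-training-set error you already compute:

```python
import math
import random

XOR_SET = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x1, x2):
    # w[0:3] -> hidden 1, w[3:6] -> hidden 2, w[6:9] -> output
    # (the last weight of each triple is the bias).
    h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
    h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def mse(w):
    return sum((forward(w, x1, x2) - t) ** 2 for (x1, x2), t in XOR_SET) / len(XOR_SET)

weights = [random.uniform(-1, 1) for _ in range(9)]
error = mse(weights)
for _ in range(200000):
    i = random.randrange(len(weights))
    old = weights[i]
    weights[i] += random.uniform(-0.5, 0.5)   # small random step up or down
    new_error = mse(weights)
    if new_error < error:
        error = new_error      # error improved: keep the change
    else:
        weights[i] = old       # error got worse: revert the change

print("final MSE:", error)
```

If a simple trainer like this drives the XOR error down but your PSO does not, the problem is likely in the PSO; if neither gets anywhere, look at the network's forward pass itself.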

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow