C# Encog Neural Network - Expected output is very far from the actual output despite the network's low overall error

StackOverflow https://stackoverflow.com/questions/22967360

  •  30-06-2023

Question

I've been trying to get Encog going for a few days now.

My data consists of 4 input variables (between 1 and 1000), and 1 output variable (between -30 and 30). I am training with around 50,000 rows of data.

The data is normalised to between -1 and 1 (to suit the tanh activation function; see the scaling sketch after the code below) before being passed into a neural network with the following structure and training loop:

    Network.AddLayer(new BasicLayer(null, true, 4));                   // input layer: 4 neurons, with bias
    Network.AddLayer(new BasicLayer(new ActivationTANH(), true, 8));   // hidden layer: 8 tanh neurons, with bias
    Network.AddLayer(new BasicLayer(new ActivationTANH(), false, 1));  // output layer: 1 tanh neuron
    Network.Structure.FinalizeStructure();
    Network.Reset();                                                   // randomise the initial weights

    IMLDataSet trainingData = new BasicMLDataSet(Input.ToArray(), ExpectedOutput.ToArray());

    IMLTrain train = new ResilientPropagation(Network, trainingData);

    int epoch = 1;

    do
    {
        train.Iteration();
        Console.WriteLine(@"Epoch #" + epoch + @" Error:" + train.Error);
        epoch++;
    } while (train.Error > 0.024);
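
For reference, the min-max normalisation described above can be done with a couple of helpers; this is only a sketch under the ranges given in the question (the helper names are hypothetical, and Encog also ships its own normalisation utilities):

    // Linear map from [min, max] onto [-1, 1]; ranges as described in the question
    // (inputs roughly 1..1000, output -30..30). Hypothetical helpers, not Encog API.
    static class MinMaxScale
    {
        public static double Normalize(double x, double min, double max)
            => 2.0 * (x - min) / (max - min) - 1.0;

        public static double Denormalize(double y, double min, double max)
            => (y + 1.0) / 2.0 * (max - min) + min;
    }

    // e.g. MinMaxScale.Normalize(inputValue, 1, 1000) for each input column,
    //      MinMaxScale.Denormalize(networkOutput, -30, 30) to recover real outputs.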

The program then outputs each row's expected output along with the actual output from the neural network. Here is a screenshot of the output (a few rows): http://i.imgur.com/UVWCOis.png

As you can see, the error (the network's average error across all of the rows) must fall below 0.024 before this output is printed, yet many of the expected and actual outputs still differ by a large amount.
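
For a sense of scale, Encog's reported training error is (by default) a mean squared error on the normalised values, so even at the 0.024 threshold the typical per-output deviation works out to around √0.024 ≈ 0.155 in [-1, 1] units, or roughly 4.6 once mapped back to the [-30, 30] output range. A minimal sketch of that arithmetic, assuming plain linear min-max scaling:

    using System;

    class ErrorScaleCheck
    {
        static void Main()
        {
            // Assumes train.Error is MSE on values normalised to [-1, 1] (Encog's
            // default error mode) and the output was scaled linearly from [-30, 30].
            double mse = 0.024;
            double rmsNormalised = Math.Sqrt(mse);       // ≈ 0.155 in normalised units
            double rmsOriginal = rmsNormalised * 30.0;   // ≈ 4.6 in original output units
            Console.WriteLine("Typical per-output deviation ≈ " + rmsOriginal);
        }
    }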

I believe the neural network is not responding strongly enough to training: the actual outputs are all clustered close together, presumably because the weights have not moved far from their initial random values.

Can anyone suggest how I can fix this?

I have tried reducing the size of the inputs (I used 50), and I also tried removing the biases; both led to a similar result.


Solution

Without seeing your data I cannot really say where the discrepancy is coming from. I suspect some of the data elements have much higher errors than others. The method below shows in detail how the error is calculated and what the "current error" is at each element as it progresses through your data. Note that it will generate quite a bit of output if your data set is large.

    import org.encog.ml.data.MLData;
    import org.encog.ml.data.MLDataPair;
    import org.encog.ml.data.MLDataSet;
    import org.encog.neural.networks.BasicNetwork;

    public static void errorDiagnostic(BasicNetwork network, MLDataSet dataSet) {
        int count = 0;
        double totalError = 0;

        // Overall error as Encog reports it, for comparison with the running figure below.
        System.out.println("Network error: " + network.calculateError(dataSet));

        for (MLDataPair pair : dataSet) {
            MLData actual = network.compute(pair.getInput());
            System.out.println("Evaluating element " + count + " : " + pair.getInput().toString());

            for (int i = 0; i < pair.getIdeal().size(); i++) {
                // Squared error per output element, accumulated into a running mean.
                double delta = Math.abs(actual.getData(i) - pair.getIdeal().getData(i));
                totalError += delta * delta;
                count++;
                double currentError = totalError / count;
                System.out.println("\tIdeal: " + pair.getIdeal().getData(i)
                        + ", Actual: " + actual.getData(i)
                        + ", Delta: " + delta
                        + ", Current Error: " + currentError);
            }
        }
    }

For example, the output for a trained XOR (from the Encog hello world app) is:

    Network error: 0.009643582111728128
    Evaluating element 0 : [BasicMLData:0.0,0.0]
        Ideal: 0.0, Actual: 0.10384251352940682, Delta: 0.10384251352940682, Current Error: 0.01078326761610504
    Evaluating element 1 : [BasicMLData:1.0,0.0]
        Ideal: 1.0, Actual: 0.9109458503325736, Delta: 0.08905414966742642, Current Error: 0.009356954594546711
    Evaluating element 2 : [BasicMLData:0.0,1.0]
        Ideal: 1.0, Actual: 0.8914073581830911, Delta: 0.10859264181690886, Current Error: 0.01016875701528963
    Evaluating element 3 : [BasicMLData:1.0,1.0]
        Ideal: 0.0, Actual: 0.08982236581744897, Delta: 0.08982236581744897, Current Error: 0.009643582111728128

This lets you see the degree to which each element is contributing to the error.
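
Since the question uses Encog for .NET, a rough C# translation of the same diagnostic might look like the sketch below; it assumes the .NET API mirrors the Java one (BasicNetwork.Compute, CalculateError, IMLDataPair's Input/Ideal properties, and IMLData's indexer):

    using System;
    using Encog.ML.Data;
    using Encog.Neural.Networks;

    public static class ErrorDiagnostics
    {
        // Rough C# port of the Java diagnostic above; API names assumed to
        // mirror the Java version of Encog.
        public static void ErrorDiagnostic(BasicNetwork network, IMLDataSet dataSet)
        {
            int count = 0;
            double totalError = 0;

            Console.WriteLine("Network error: " + network.CalculateError(dataSet));

            foreach (IMLDataPair pair in dataSet)
            {
                IMLData actual = network.Compute(pair.Input);
                Console.WriteLine("Evaluating element " + count + " : " + pair.Input);

                for (int i = 0; i < pair.Ideal.Count; i++)
                {
                    // Squared error per output element, accumulated into a running mean.
                    double delta = Math.Abs(actual[i] - pair.Ideal[i]);
                    totalError += delta * delta;
                    count++;
                    double currentError = totalError / count;
                    Console.WriteLine("\tIdeal: " + pair.Ideal[i] + ", Actual: " + actual[i]
                        + ", Delta: " + delta + ", Current Error: " + currentError);
                }
            }
        }
    }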

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow