I have this Backpropagation implementation in MATLAB, and have an issue with training it. Early on in the training phase, all of the outputs go to 1. I have normalized the input data (except the desired class, which is used to generate a binary target vector) to the interval [0, 1]. I have been referring to the implementation in Artificial Intelligence: A Modern Approach by Russell and Norvig.
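A minimal sketch of the kind of per-feature min-max scaling I mean (the actual normdata helper is omitted from the listing below, so this is only an approximation of it):

function Xn = normdata(X)
    %Sketch only: scale every column except the last (the class label)
    %to [0, 1] with column-wise min-max normalization.
    Xn = X;
    feats = 1:size(X, 2)-1;
    mn = min(X(:, feats), [], 1);
    mx = max(X(:, feats), [], 1);
    Xn(:, feats) = (X(:, feats) - repmat(mn, size(X, 1), 1)) ./ repmat(mx - mn, size(X, 1), 1);
end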

Having checked the pseudocode against my code (and having studied the algorithm for some time), I cannot spot the error. I have not been using MATLAB for very long, so I have been trying to use the documentation where needed.

I have also tried different numbers of nodes in the hidden layer and different learning rates (ALPHA).

The target encoding is as follows: if the target class is, say, 2, the target vector is [0, 1, 0]; if it is 1, then [1, 0, 0]; and so on. I have also tried different values in the target vector, such as [0.5, 0, 0] for class 1.
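For reference, a minimal sketch of what a targetVec helper along these lines boils down to (the actual helper is not in the listing, and this assumes the class label is a 1-based integer index):

function t = targetVec(classIdx, numOut)
    %Sketch only: build the binary target vector described above,
    %e.g. class 2 out of 3 classes -> [0; 1; 0].
    t = zeros(numOut, 1);
    t(classIdx) = 1;
end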

I noticed that some of my weights go above 1, resulting in large net values.

%Topological constants
NUM_HIDDEN = 8+1;%written as n+1 to make clear a bias unit is used
NUM_OUT = 3;

%Training constants
ALPHA = 0.01;
TARG_ERR = 0.01;
MAX_EPOCH = 50000;

%Read and normalize data file.
X = normdata(dlmread('iris.data'));
X = shuffle(X);
%X_test = normdata(dlmread('iris2.data'));
%epocherrors = fopen('epocherrors.txt', 'w');

%Weight matrices.
%The features occupy size(X, 2)-1 columns of X; size(X, 2) rows are used
%here so the appended bias input gets a weight as well.
w_IH = rand(size(X, 2), NUM_HIDDEN)-(0.5*rand(size(X, 2), NUM_HIDDEN)); 
w_HO = rand(NUM_HIDDEN+1, NUM_OUT)-(0.5*rand(NUM_HIDDEN+1, NUM_OUT));%+1 for bias

%Layer nets
net_H = zeros(NUM_HIDDEN, 1);
net_O = zeros(NUM_OUT, 1);

%Layer outputs
out_H = zeros(NUM_HIDDEN, 1);
out_O = zeros(NUM_OUT, 1);

%Layer deltas
d_H = zeros(NUM_HIDDEN, 1);
d_O = zeros(NUM_OUT, 1);

%Control variables
error = inf;
epoch = 0;

%Run the algorithm.
while error > TARG_ERR && epoch < MAX_EPOCH
    for n=1:size(X, 1)
        x = [X(n, 1:size(X, 2)-1) 1]';%Add bias for hiddens & transpose to column vector.
        o = X(n, size(X, 2));

        %Forward propagate.
        net_H = w_IH'*x;%Transposed w.
        out_H = [sigmoid(net_H); 1]; %Append 1 for bias to outputs
        net_O = w_HO'*out_H;%Again, transposed w.
        out_O = sigmoid(net_O);

        %Calculate output deltas.
        d_O = ((targetVec(o, NUM_OUT)-out_O) .* (out_O .* (1-out_O)));

        %Calculate hidden deltas.
        for i=1:size(w_HO, 1)
            delta_weight = 0;
            for j=1:size(w_HO, 2)
                delta_weight = delta_weight + d_O(j)*w_HO(i, j);
            end
            d_H(i) = (out_H(i)*(1-out_H(i)))*delta_weight;
        end

        %Update hidden-output weights
        for i=1:size(w_HO, 1)
            for j=1:size(w_HO, 2)
                w_HO(i, j) = w_HO(i, j) + (ALPHA*out_H(i)*d_O(j));
            end
        end

        %Update input-hidden weights.
        for i=1:size(w_IH, 1)
            for j=1:size(w_IH, 2)
                w_IH(i, j) = w_IH(i, j) + (ALPHA*x(i)*d_H(j));
            end
        end
        out_O
        o
        %out_H
        %w_IH
        %w_HO
        %d_O
        %d_H
    end  
end

function outs = sigmoid(nets)
    outs = zeros(size(nets, 1), 1);
    for i=1:size(nets, 1)
        if nets(i) < -45
            outs(i) = 0;
        elseif nets(i) > 45
            outs(i) = 1;
        else
            outs(i) = 1/1+exp(-nets(i));
        end
    end
end

Solution 2

After the discussion, it turns out the problem lies in the sigmoid function:

function outs = sigmoid(nets)
%...
            outs(i) = 1/1+exp(-nets(i)); % parenthesis missing!!!!!!
%...
end

It should be:

function outs = sigmoid(nets)
%...
            outs(i) = 1/(1+exp(-nets(i)));
%...
end

The missing parentheses meant the sigmoid output was sometimes larger than 1. That made the gradient calculation incorrect (because out_O .* (1-out_O) is no longer the derivative of the function actually being applied), and the resulting gradient could be negative, so the delta for the output layer pointed in the wrong direction most of the time. After the fix (and after correctly maintaining the error variable, which seems to be missing from your code), everything seems to work fine.
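For instance, a sketch of how that bookkeeping could be wired into the existing loop (assuming a mean squared error measure; the inner body stays as in the question):

while error > TARG_ERR && epoch < MAX_EPOCH
    sse = 0;                                 %accumulate squared error over the epoch
    for n=1:size(X, 1)
        %... forward pass, deltas and weight updates exactly as above ...
        t = targetVec(o, NUM_OUT);
        sse = sse + 0.5*sum((t - out_O).^2); %per-sample squared error
    end
    error = sse / size(X, 1);                %mean error drives the stopping test
    epoch = epoch + 1;                       %the epoch counter also needs incrementing
end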


Besides that, there are two other main problems with this code:

1) No bias. Without a bias, each neuron can only represent a line that crosses the origin. If the data is normalized (i.e. values are between 0 and 1), some configurations are inseparable.

2) Lack of guarding against high gradient values (point 1 in my previous answer).
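One simple guard of that kind (an illustrative sketch only; not necessarily what the earlier answer's point 1 proposed) is to clamp the layer deltas before they are used in the weight updates:

%Illustrative only: clamp the deltas so a single sample cannot blow the
%weights up. MAX_DELTA is a made-up constant for this sketch.
MAX_DELTA = 1;
d_O = max(min(d_O, MAX_DELTA), -MAX_DELTA);
d_H = max(min(d_H, MAX_DELTA), -MAX_DELTA);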

Other tips

From what we've established in the comments, the only things that come to my mind are the recipes collected together in this great NN archive:

ftp://ftp.sas.com/pub/neural/FAQ2.html#questions

First things you could try are:

1) How to avoid overflow in the logistic function? That is probably the problem - many times when I have implemented NNs, the problem was exactly such an overflow.

2) How should categories be encoded?

And more generally:

3) How does ill-conditioning affect NN training?

4) Help! My NN won't learn! What should I do?
