Question

I wrote code to implement steepest descent backpropagation, and I am having issues with it. I am using the Machine CPU dataset and have scaled both the inputs and the outputs into the range [0, 1].
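The scaling step itself is not in the code below; it is per-column min-max normalization, roughly along these lines (illustrative sketch only; data here stands for the raw examples, one per row):

% Min-max scale every column of data into [0, 1]
% (illustrative: data holds the raw examples, one per row)
mins = min (data);
maxs = max (data);
data_scaled = (data - mins) ./ (maxs - mins);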

The code, in MATLAB/Octave, is as follows:

steepest descent backpropagation

%SGD = Steepest Gradient Descent

function weights = nnSGDTrain (X, y, nhid_units, gamma, max_epoch, X_test, y_test)

  iput_units = columns (X);
  oput_units = columns (y);
  n = rows (X);

  W2 = rand (nhid_units + 1, oput_units);
  W1 = rand (iput_units + 1, nhid_units);

  train_rmse = zeros (1, max_epoch);
  test_rmse  = zeros (1, max_epoch);

  for (epoch = 1:max_epoch)

    delW2 = zeros (nhid_units + 1, oput_units)'; 
    delW1 = zeros (iput_units + 1, nhid_units)';

    for (i = 1:rows(X))

      o1 = sigmoid ([X(i,:), 1] * W1); %1xn+1 * n+1xk = 1xk
      o2 = sigmoid ([o1, 1] * W2); %1xk+1 * k+1xm = 1xm

      D2 = o2 .* (1 - o2);
      D1 = o1 .* (1 - o1);
      e = (y_test(i,:) - o2)';

      delta2 = diag (D2) * e; %mxm * mx1 = mx1
      delta1 = diag (D1) * W2(1:(end-1),:) * delta2;  %kxm * mx1 = kx1

      delW2 = delW2 + (delta2 * [o1 1]); %mx1 * 1xk+1 = mxk+1  %already transposed
      delW1 = delW1 + (delta1 * [X(i, :) 1]); %kx1 * 1xn+1 = kxn+1  %already transposed

    end

    delW2 = gamma .* delW2 ./ n;
    delW1 = gamma .* delW1 ./ n;

    W2 = W2 + delW2';
    W1 = W1 + delW1';

    [dummy train_rmse(epoch)] = nnPredict (X, y, nhid_units, [W1(:);W2(:)]);
    [dummy test_rmse(epoch)] = nnPredict (X_test, y_test, nhid_units, [W1(:);W2(:)]);
    printf ('Epoch: %d\tTrain Error: %f\tTest Error: %f\n', epoch, train_rmse(epoch), test_rmse(epoch));
    fflush (stdout);

  end

  weights = [W1(:);W2(:)];
%    plot (1:max_epoch, test_rmse, 1);
%    hold on;
  plot (1:max_epoch, train_rmse(1:end), 2);
%    hold off;
end
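For completeness, the training function is then called on the scaled data split into train and test parts, something like the following (the hidden-unit count, learning rate, and epoch count are placeholder values, and the target is assumed to be the last column of data_scaled):

% Placeholder call: first 100 examples for training, rest for testing;
% 5 hidden units, gamma = 0.1, and 1000 epochs are illustrative values
X_train = data_scaled(1:100, 1:(end - 1));
y_train = data_scaled(1:100, end);
X_test  = data_scaled(101:end, 1:(end - 1));
y_test  = data_scaled(101:end, end);

weights = nnSGDTrain (X_train, y_train, 5, 0.1, 1000, X_test, y_test);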

predict

%Now SFNN Only

function [o1 rmse] = nnPredict (X, y, nhid_units, weights)

  iput_units = columns (X);
  oput_units = columns (y);
  n = rows (X);

  W1 = reshape (weights(1:((iput_units + 1) * nhid_units),1), iput_units + 1, nhid_units);
  W2 = reshape (weights((((iput_units + 1) * nhid_units) + 1):end,1), nhid_units + 1, oput_units);

  o1 = sigmoid ([X ones(n,1)] * W1); %nxiput_units+1 * iput_units+1xnhid_units = nxnhid_units
  o2 = sigmoid ([o1 ones(n,1)] * W2); %nxnhid_units+1 * nhid_units+1xoput_units = nxoput_units

  rmse = RMSE (y, o2);
end

RMSE function

function rmse = RMSE (a1, a2)
  rmse = sqrt (sum (sum ((a1 - a2).^2))/rows(a1));
end
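sigmoid is used above but not shown; it is just the elementwise logistic function, and since it is not a core Octave built-in, a minimal definition (in case it is not already on your path) is:

% Elementwise logistic sigmoid used by nnSGDTrain and nnPredict
function s = sigmoid (z)
  s = 1 ./ (1 + exp (-z));
end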

I have also trained on the same dataset with mlp from the R RSNNS package, and its RMSE on the training set (the first 100 examples) is around 0.03. With my implementation I cannot get the training RMSE below 0.14; for higher learning rates the error sometimes grows, and no learning rate brings the RMSE under 0.14. A paper I referred to also reports a training-set RMSE of around 0.03.

I want to know where the problem in the code is. I have followed Raul Rojas's book and, as far as I can tell, the equations are okay.


Solution

In the backpropagation code, the line

  e = (y_test(i,:) - o2)';

is not correct, because o2 is the output computed for a training example, while the error is being taken against the corresponding example from the test targets y_test. The line should have been:

  e = (y(i,:) - o2)';

which correctly computes the difference between the output predicted by the current model and the target output of the corresponding training example.

It took me three days to find this one; I am fortunate to have caught the bug, which had been stopping me from making any further modifications.
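As an aside, a numerical gradient check is one way to catch this kind of bug quickly: compare the analytic gradient against a central-difference estimate and look at the largest discrepancy. A minimal sketch follows; gradCheck, costFun, and gradFun are illustrative names and not part of the code above.

% Compare an analytic gradient against a central-difference estimate.
% costFun: handle returning the scalar cost at a weight vector
% gradFun: handle returning the analytic gradient at a weight vector
% w: weight vector at which to check, h: finite-difference step
function maxdiff = gradCheck (costFun, gradFun, w, h)
  if (nargin < 4)
    h = 1e-5;
  end
  g_analytic = gradFun (w);
  g_numeric  = zeros (size (w));
  for (j = 1:numel (w))
    e = zeros (size (w));
    e(j) = h;
    g_numeric(j) = (costFun (w + e) - costFun (w - e)) / (2 * h);
  end
  maxdiff = max (abs (g_analytic(:) - g_numeric(:)));
end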

Licensed under: CC-BY-SA with attribution