The single-layer perceptron is a linear binary classifier that does not converge when the data is not linearly separable. If we plot the data, we see that the two classes overlap.
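To see why, recall the perceptron update step. This is a minimal sketch (the names `D`, `w`, and the learning rate `eta` are assumptions, not from your code); each row of `D` holds a point and its label in the last column:

```matlab
% One pass of the perceptron learning rule (sketch).
% D(ii,:) = [x1, x2, label]; w = [w_bias, w1, w2]
eta = 0.1;                           % learning rate
for ii = 1:size(D,1)
    in = [1, D(ii,1), D(ii,2)];      % input with bias term
    c  = in * w' >= 0;               % predicted class (0 or 1)
    d  = D(ii,3);                    % desired class
    w  = w + eta * (d - c) * in;     % no change when c == d
end
```

If the classes overlap, some point is always misclassified, so `w` keeps changing and the loop never settles.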
We can solve this by adding a tolerance to your function generateRandomData.m:
function f = generateRandomData(points)
% generates random data that can be linearly separated
% generate a random separating line y = m*x + n
m = 2 * rand * sign(randn);   % in (-2,2), excluding 0
n = 10 * rand + 5;            % in (5,15)
% generate random points
x = 20 * rand(points,2);      % in ((0,20), (0,20))
% tolerance (margin around the line)
tol = 0.5;
% labeling
f = [x, -ones(points,1)];
for ii = 1:size(f,1)
    y = m*f(ii,1) + n;
    if f(ii,2) > y + tol
        f(ii,3) = 1;          % above the margin
    elseif f(ii,2) < y - tol
        f(ii,3) = 0;          % below the margin
    else
        % inside the margin: move the point above the line
        % (shifting the x-coordinate would mislabel it when m > 0)
        f(ii,2) = y + 2*tol;
        f(ii,3) = 1;
    end
end
end
However, your code still does not converge, because your errorFunction.m has its signs switched. It should look like this:
function f = errorFunction(c,d)
% the point has been classified as c but should be d
if c < d
    % reaction too small
    f = +1;
elseif c > d
    % reaction too large
    f = -1;
else
    % reaction correct
    f = 0;
end
end
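To see errorFunction.m in context, a training loop could look like this. This is a sketch, not your exact code: the names `D`, `w`, `b`, `eta`, and the decision threshold at 0 are assumptions, chosen to match the plotting snippet below:

```matlab
D = generateRandomData(100);      % rows: [x1, x2, label]
w = rand(1,3);                    % [bias weight, w1, w2]
b = 1;                            % bias input
eta = 0.1;                        % learning rate
converged = false;
while ~converged
    converged = true;
    for ii = 1:size(D,1)
        in = [b, D(ii,1), D(ii,2)];
        c  = in * w' >= 0;        % classified as c
        e  = errorFunction(c, D(ii,3));
        if e ~= 0
            w = w + eta * e * in; % push w toward the desired class
            converged = false;
        end
    end
end
```

With the corrected signs, `e = +1` pulls the weights toward inputs that should fire, and `e = -1` pushes them away, so the loop terminates once the data is separable.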
Once we make these changes, we get a nice linear classification.
Code to plot the hypothesis:
% Plot the data and the learned decision boundary
idx = logical(D(:,3));                 % class-1 points
Xax = 0:20;
Yax = -(b*w(1) + Xax*w(2)) / w(3);     % from w(1)*b + w(2)*x + w(3)*y = 0
figure;
hold on;
scatter(D(idx,1),  D(idx,2),  'bo')    % class 1
scatter(D(~idx,1), D(~idx,2), 'rx')    % class 0
plot(Xax, Yax, 'k--')                  % decision boundary
hold off;