Citing Wikipedia:
The decision boundary of a perceptron is invariant with respect to scaling of the weight vector; that is, a perceptron trained with initial weight vector \mathbf{w} and learning rate \alpha behaves identically to a perceptron trained with initial weight vector \mathbf{w}/\alpha and learning rate 1. Thus, since the initial weights become irrelevant as the number of iterations increases, the learning rate does not matter in the case of the perceptron and is usually just set to 1.
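This invariance is easy to verify numerically: scaling the initial weights by 1/\alpha and setting the rate to 1 keeps the weight vector proportional (by the factor \alpha) after every update, so the sign of the decision function never differs. A minimal sketch, assuming the classic mistake-driven perceptron rule on a small hypothetical dataset with labels in {-1, +1}:

```python
import numpy as np

# Toy linearly separable data (illustrative values, labels in {-1, +1})
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])

def train_perceptron(w, alpha, epochs=10):
    """Classic perceptron rule: on a mistake, w <- w + alpha * y_i * x_i."""
    w = w.copy()
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= 0:  # misclassified (or on the boundary)
                w = w + alpha * y_i * x_i
    return w

w0 = np.array([0.5, -0.25])
alpha = 4.0

w_a = train_perceptron(w0, alpha)        # initial weights w0, rate alpha
w_b = train_perceptron(w0 / alpha, 1.0)  # initial weights w0/alpha, rate 1

# The two runs make exactly the same mistakes, so the final weight
# vectors differ only by the positive factor alpha ...
assert np.allclose(w_a, alpha * w_b)
# ... and therefore both perceptrons predict identically.
assert np.array_equal(np.sign(X @ w_a), np.sign(X @ w_b))
```

The proof behind the check is one line of induction: if w_a = \alpha w_b before an update, then y (w_a \cdot x) and y (w_b \cdot x) have the same sign (since \alpha > 0), so either both perceptrons update or neither does, and the proportionality is preserved.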