Question

I'm reading through Sebastian Raschka's Python Machine Learning, and I see something confusing that is not explained in the text.

In the code on this page, under the section "Implementing a perceptron learning algorithm in Python": https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch02/ch02.ipynb

In the training process, in addition to updating weights, I see this happening:

    self.w_[0] += update
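
For context, this line sits inside the weight-update step of fit(). A minimal sketch of the surrounding update (my reconstruction of that step, not a verbatim quote from the notebook) looks like this:

    # inside Perceptron.fit(), for one training sample xi with label target:
    update = self.eta * (target - self.predict(xi))  # eta is the learning rate
    self.w_[1:] += update * xi  # scale the update by each feature value
    self.w_[0] += update        # the "feature" for w_[0] is implicitly 1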

Then later on, during "prediction," when the weights are applied to the input, I see self.w_[0] being used:

def net_input(self, X):
    """Calculate net input"""
    return np.dot(X, self.w_[1:]) + self.w_[0]

It looks like a bias term is being added into the perceptron, but the book says that net_input simply calculates "weights transpose dot x" and mentions nothing about the + self.w_[0] part...
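
As far as I can tell, the two would be equivalent if the bias were treated as a weight w_0 attached to a constant input x_0 = 1. Here is a minimal, self-contained sketch (my own toy numbers, not from the book) checking that the two formulations agree:

    import numpy as np

    w = np.array([0.5, 1.0, -2.0])  # w[0] is the bias, w[1:] are the feature weights
    x = np.array([3.0, 4.0])        # one sample with two features

    # Book-style net input: features dotted with w[1:], plus the bias w[0]
    net_book = np.dot(x, w[1:]) + w[0]

    # "Weights transpose dot x" with x augmented by a constant 1 in position 0
    x_aug = np.concatenate(([1.0], x))
    net_aug = np.dot(w, x_aug)

    print(net_book, net_aug)  # both print -4.5

Both give the same result, which is why I suspect the + self.w_[0] is just the bias written out explicitly rather than folded into x.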

Can anyone take a look at the linked code and make sense of what's going on with the self.w_[0] part? Or, if anyone else has this book, can you explain why it's there?
