Problem

I am having trouble understanding the terminology of perceptron learning. Is my current understanding correct? Let's say I have some data that classifies what type of flower a particular flower is, and the factors taken into consideration are petal size, petal coloring, and leaf size. My current understanding is that we take every pair of inputs and make them the axes of a graph (e.g., leaf size vs. petal coloring), so in this case we would have 3 graphs. Now, we plot the data points and see whether the data are linearly separable. That is, we can draw a line called a "decision boundary" that separates the data into two regions, so we can differentiate which inputs correlate with which outputs. This line is defined by $w^T \cdot x = y$. However, my first confusion is the following: how are 3 different graphs, each with a pair of inputs (the two axes), represented using the equation of a single line that is used as the input layer of the neural network?
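To make my question concrete, here is a small sketch (the feature values and weights are made up) of how I currently picture the perceptron treating all three features as one input vector, rather than as three separate 2D plots:

```python
import numpy as np

# A hypothetical flower sample with the three features from my question:
# petal size, petal coloring (encoded numerically), and leaf size.
x = np.array([1.4, 0.7, 2.1])

# One weight per feature plus a bias term. The decision boundary
# w^T x + b = 0 is then a single plane in 3-dimensional feature space,
# not three separate lines in three 2D graphs.
w = np.array([0.5, -1.2, 0.3])  # arbitrary illustrative weights
b = -0.1

activation = np.dot(w, x) + b  # a single scalar for the whole sample
y = 1 if activation >= 0 else 0
print(y)  # → 1 for these particular values
```

Is it right that the perceptron only ever computes this one dot product over all features, and the pairwise 2D graphs are just projections for visualization?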

My second confusion is: is the objective function what defines how $w^T \cdot x$ is to be interpreted? For instance, something like $y = 1$ if $w^T \cdot x \geqslant 0$ and $y = 0$ otherwise. Also, as I have understood it, the learning rule is supposed to define how the weights are updated from one step to the next.
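Again to make the question concrete, here is how I understand the classic perceptron learning rule $w \leftarrow w + \eta\,(t - y)\,x$, sketched on a toy dataset (the data and learning rate are my own invented example, not from Bishop's text):

```python
import numpy as np

def perceptron_train(X, t, lr=0.1, epochs=20):
    """Classic perceptron learning rule: w <- w + lr * (target - prediction) * x.

    X: (n_samples, n_features) array with a bias column of ones appended.
    t: targets in {0, 1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            y = 1 if np.dot(w, xi) >= 0 else 0  # threshold activation
            w += lr * (ti - y) * xi             # update only on mistakes
    return w

# Toy linearly separable data: the AND function, bias column appended.
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])
t = np.array([0, 0, 0, 1])

w = perceptron_train(X, t)
preds = [1 if np.dot(w, xi) >= 0 else 0 for xi in X]
print(preds)  # → [0, 0, 0, 1], matching t once training has converged
```

So the threshold rule decides how $w^T \cdot x$ is interpreted as an output, and the update line inside the loop is the learning rule. Is that the correct reading of the terminology?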

Can someone please explain the situation I have laid out? I am reading Bishop's book on neural networks, but I am confused by the text.

No correct solution was provided.

License: CC-BY-SA with attribution
Not affiliated with cs.stackexchange