Question

If I use a feature extraction method on images, do I then get a feature representation, or does "feature representation" mean something different?

To my understanding, when I use a CNN on an image, the convolutional layers perform feature extraction. Can I then also say that I get a new feature representation after each of these layers?

Thanks in advance! I am relatively new to machine learning...


Solution

Yes, you are right. Feature extraction produces a new feature representation. For example, SIFT and Haar are hand-crafted algorithms that extract the most informative features from an image and build such a representation; a CNN does this automatically through its convolutional layers. In an image, smooth areas, edges, and corners are the kinds of features that may represent it best, and their importance (there are others too) is usually ranked in that ascending order.
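
As an illustration of the hand-crafted side, here is a minimal sketch that extracts SIFT descriptors with OpenCV. The file name "image.jpg" is just a placeholder, and it assumes an OpenCV build that includes SIFT:

```python
# Minimal sketch: classical feature extraction with SIFT.
# Assumes OpenCV with SIFT support is installed and "image.jpg" exists
# (both are illustrative placeholders).
import cv2

# Load the image in grayscale, since SIFT works on intensity values.
gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints (corners, blobs) and compute a descriptor for each one.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)

# "descriptors" is the feature representation: one 128-dimensional vector
# per detected keypoint.
print(descriptors.shape)  # (num_keypoints, 128)
```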

In a simple CNN, if you visualize the layers you can see that what they extract from the image are mainly edges and corners, and the network can also learn more complex features than the ones we know (or have defined by hand). So, as you said, the convolutional layers create a feature representation by extracting features, and these feature representations (the important information from the image) are then used in, say, the prediction step.
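
If you want to inspect these intermediate representations yourself, the sketch below builds a tiny Keras CNN (the architecture, layer names, and random input are illustrative, not from the answer) and reads out the activations after each convolutional layer:

```python
# Minimal sketch: reading the feature representation produced after each
# convolutional layer of a small, illustrative CNN.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    keras.layers.Conv2D(16, 3, activation="relu", name="conv1"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu", name="conv2"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])

# A second model that exposes the outputs of both convolutional layers.
feature_extractor = keras.Model(
    inputs=model.input,
    outputs=[model.get_layer("conv1").output, model.get_layer("conv2").output],
)

image = np.random.rand(1, 64, 64, 3).astype("float32")  # stand-in for a real image
conv1_features, conv2_features = feature_extractor(image)

# Each output is a feature representation: a stack of feature maps whose
# channels respond to patterns such as edges and corners.
print(conv1_features.shape)  # (1, 62, 62, 16)
print(conv2_features.shape)  # (1, 29, 29, 32)
```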

License: CC-BY-SA with attribution