Problem

I was following this article on transfer learning:

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

In the section "Using the bottleneck features of a pre-trained network: 90% accuracy in a minute", the author mentions: "Note that this prevents us from using data augmentation"

I am not clear on this: is there a rule that discourages the use of data augmentation when the pre-trained model is completely frozen?
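For context, the workflow in question precomputes the frozen model's outputs ("bottleneck features") once and then trains only the small top model on those cached features. The sketch below is a hypothetical, simplified illustration (plain Python stand-ins, not the article's actual Keras code) of why caching the features rules out augmentation: the expensive extractor is run a single time, so every epoch sees identical inputs, whereas augmentation would require passing freshly transformed images through the extractor on every epoch.

```python
import random

def frozen_extractor(image):
    # Stand-in for a frozen conv base (e.g. VGG16): deterministic, expensive.
    return [x * 2 for x in image]

def augment(image):
    # Toy augmentation: random horizontal flip.
    return image[::-1] if random.random() < 0.5 else image

images = [[1, 2, 3], [4, 5, 6]]

# Bottleneck-feature workflow: run the extractor ONCE and cache the results.
cached = [frozen_extractor(img) for img in images]

for epoch in range(3):
    # Training the top model on `cached` means every epoch sees the exact
    # same features - there is no place to inject augmentation without
    # re-running the extractor, which is what caching was meant to avoid.
    assert [frozen_extractor(img) for img in images] == cached

# With augmentation, the extractor must run inside the epoch loop instead:
for epoch in range(3):
    features = [frozen_extractor(augment(img)) for img in images]
```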

No correct solution

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange