Question

I was following this article on transfer learning:

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

In the section "Using the bottleneck features of a pre-trained network: 90% accuracy in a minute", the author mentions: "Note that this prevents us from using data augmentation."

I am not clear on this; is there a rule that discourages using data augmentation when the pre-trained model is completely frozen?
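
For reference, this is the setup I mean (a rough sketch, assuming tf.keras with a VGG16 base; the directory path and image size below are placeholders, not from the article): the frozen base is run over the dataset exactly once, and the cached outputs become the fixed training inputs for the small classifier on top.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Frozen convolutional base, without the top classifier.
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(150, 150, 3))

# Plain rescaling only -- no augmentation transforms.
datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
    'data/train',           # hypothetical directory layout
    target_size=(150, 150),
    batch_size=32,
    class_mode=None,        # images only; labels are handled separately
    shuffle=False)

# Single forward pass through the frozen base; the resulting arrays
# are saved to disk and reused as fixed inputs every epoch when
# training the small classifier on top.
bottleneck_features = conv_base.predict(generator)
np.save('bottleneck_features_train.npy', bottleneck_features)
```

My understanding is that because the features are computed once and cached, every epoch of the top classifier would see the same (single) augmented variant of each image, so augmentation could not generate fresh variations per epoch. Is that the reason, or is there something more to it?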

