Question

I am trying to fine-tune some code from a Kaggle kernel. The model uses pretrained VGG16 weights (via 'imagenet') for transfer learning. However, I notice that none of the layers are frozen, which is what a Keras blog post recommends. One approach would be to freeze all of the VGG16 layers except the last 4 and train only those during compilation, for example:

for layer in model.layers[:-5]:
    layer.trainable = False

Supposedly, this keeps the pretrained ImageNet weights fixed in the frozen layers and trains only the last 5 layers. What are the consequences of not freezing the VGG16 layers?

from keras.models import Sequential, Model, load_model
from keras import applications
from keras import optimizers
from keras.layers import Dropout, Flatten, Dense

img_rows, img_cols, img_channel = 224, 224, 3

# Convolutional base: VGG16 pretrained on ImageNet, without its fully connected top.
base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_rows, img_cols, img_channel))

# New classifier head for binary classification.
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))

# Combined model: no layers are frozen here, so all VGG16 weights are updated during training.
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

model.summary()
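
For reference, a minimal sketch of the freezing approach described above, assuming the same old-style keras API as the kernel. Here the freeze loop runs over base_model.layers rather than the combined model's layers (as in the snippet earlier), so the new head does not have to be counted; the [:-4] slice leaves VGG16's last convolutional block (4 layers) trainable:

from keras.models import Sequential, Model
from keras import applications
from keras import optimizers
from keras.layers import Flatten, Dense

img_rows, img_cols, img_channel = 224, 224, 3

base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_rows, img_cols, img_channel))

# Freeze everything except the last convolutional block of VGG16.
for layer in base_model.layers[:-4]:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
# Compile after setting the trainable flags so the freeze takes effect.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

To freeze the whole base instead (pure feature extraction, as in the Keras blog post), drop the slice and loop over all of base_model.layers.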

No correct solution
