Question

Unfortunately, I am getting subjectively bad inference results with the pre-trained models of both MobileNet v1 and v2:

from keras.applications.mobilenet_v2 import MobileNetV2
ConvNet = MobileNetV2(input_shape=None, include_top=True, weights='imagenet',
                      input_tensor=None, pooling=None, classes=1000)

I have a local copy of these networks for the corresponding image size (224x224), a depth multiplier of 1.0, and weights trained on ImageNet.

After loading the MobileNetV2 model, I am executing a classification on random images from ImageNet or Google Images. Almost always the top-1 classification does not make any sense; for example, I very often get suggestions such as "shower curtain" or "pillow", although this is obviously not the case.
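
For reference, the classification step I am running looks roughly like this (a sketch of my setup; 'test.jpg' is just a placeholder for one of the random test images, and I use the preprocess_input and decode_predictions helpers from keras.applications.mobilenet_v2):

import numpy as np
from keras.preprocessing import image
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input, decode_predictions

ConvNet = MobileNetV2(input_shape=None, include_top=True, weights='imagenet',
                      input_tensor=None, pooling=None, classes=1000)

# 'test.jpg' stands in for any of the random test images, resized to 224x224
img = image.load_img('test.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)    # raw pixel batch, shape (1, 224, 224, 3)

# MobileNetV2's own input scaling; .copy() keeps the raw batch x untouched
preds = ConvNet.predict(preprocess_input(x.copy()))
print(decode_predictions(preds, top=1)[0])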

Testing with other models (VGG16, ResNet50) and changing only the model type (keeping the same parameters), I obtained correct, or at least more understandable, results that were also consistent between these two models:

from keras.applications.vgg16 import VGG16
ConvNet = VGG16(input_shape=None, include_top=True, weights='imagenet',
                input_tensor=None, pooling=None, classes=1000)

Since I get correct results with these other models, I assume that my script itself is working correctly.
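
The consistency check itself is just running the same raw batch through each model with its own helpers and comparing the decoded top-1 labels, roughly like this (again a sketch; x is the raw 224x224 batch from the snippet above, and vgg16 and resnet50 each provide their own preprocess_input and decode_predictions in keras.applications):

from keras.applications import vgg16, resnet50

# Same constructor call as above, only the model class changes
models = {
    'VGG16': (vgg16.VGG16(weights='imagenet'), vgg16.preprocess_input, vgg16.decode_predictions),
    'ResNet50': (resnet50.ResNet50(weights='imagenet'), resnet50.preprocess_input, resnet50.decode_predictions),
}

for name, (model, preprocess, decode) in models.items():
    preds = model.predict(preprocess(x.copy()))    # x: raw (1, 224, 224, 3) batch from above
    print(name, decode(preds, top=1)[0])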

My question is: has anybody else experienced these inference issues with MobileNet or MobileNetV2? And do you have any idea why this happens and how to solve it?

I appreciate any answer; please also consider seemingly trivial solutions, since I am still quite a newbie ;)

Thanks a lot, Tim
