Question

I am performing human awareness detection and have trained my model using transfer learning with MobileNetV2. The model expects an input tensor of shape [null, 224, 224, 3].

I run face detection on the input video stream with BlazeFace, which takes a [128, 128, 3] input, and crop the detected faces so I can pass them to my custom model. The problem is that the cropped faces come out at varying sizes, all smaller than what my model expects, and I am not sure how to handle this.

Example shape of a cropped face tensor ([batch, height, width, channels]):

[1, 43, 111, 3]

Solution

The issue was fixed by resizing the tensor to the model's expected input size. I had been reshaping the tensors instead of resizing them. Reshape only rearranges the existing values and requires the total element count to stay the same, whereas resize interpolates the pixels to a new height and width, which is what a variable-size crop needs.
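The distinction can be sketched as follows. This is a minimal illustration in Python/NumPy (not the author's original code, which presumably used a framework resize op such as TensorFlow's bilinear resize); the nearest-neighbor helper and the example crop shape are assumptions for the demo:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of an HxWxC image to out_h x out_w.
    Stands in for a framework op like bilinear resize."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows[:, None], cols]

# A cropped face of shape [43, 111, 3] (batch dim dropped for clarity)
crop = np.zeros((43, 111, 3), dtype=np.float32)

# Reshaping to the model input would fail outright,
# because 43 * 111 * 3 != 224 * 224 * 3:
# crop.reshape(224, 224, 3)  # raises ValueError

# Resizing interpolates to the target spatial size instead
resized = resize_nearest(crop, 224, 224)
print(resized.shape)  # (224, 224, 3)

# Add back the batch dimension the model expects: [1, 224, 224, 3]
batch = resized[None, ...]
print(batch.shape)  # (1, 224, 224, 3)
```

In a real pipeline you would use your framework's resize function (e.g. a bilinear image-resize op) on each crop before batching it into the model.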

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange