Question

I would like to know whether my reasoning is correct:

Suppose we have a model that performs several complex computations in order to compute the accuracy, i.e. the correct-classification rate, on a large image database. Note: all the images are 300 x 200 pixels.

  • FIRST

    The images are resized to 180 x 180, and the model is then evaluated on this resized database.

  • SECONDLY

    The images are resized to 120 x 120, and the model is then evaluated on this resized database.

In this case, is it correct that when the image size increases, the accuracy also increases (while the time complexity certainly increases)?

And that when the image size decreases (as in the second point, from 180 x 180 to 120 x 120), the accuracy also decreases (while the time complexity certainly decreases)?
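
To make the setup concrete, here is a minimal sketch of this kind of experiment (not part of the question itself). It uses scikit-learn's built-in 8 x 8 digits dataset as a stand-in for the 300 x 200 database, scikit-image for the resizing, and a plain logistic regression as a placeholder model, then reports accuracy and training time at each resolution:

```python
import time

import numpy as np
from skimage.transform import resize
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
images, labels = digits.images, digits.target  # images: (1797, 8, 8) grayscale digits

# Decreasing resolutions, analogous to 300x200 -> 180x180 -> 120x120 in the question.
for side in (8, 6, 4):
    # Resize every image to side x side and flatten it into a feature vector.
    X = np.array([resize(img, (side, side), anti_aliasing=True).ravel() for img in images])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0, stratify=labels
    )

    start = time.perf_counter()
    clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    elapsed = time.perf_counter() - start

    acc = clf.score(X_test, y_test)
    print(f"{side}x{side}: accuracy = {acc:.3f}, training time = {elapsed:.2f}s")
```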

I would like your opinions, with a brief explanation. Any help would be much appreciated!


Solution

The answer is "it depends". It depends on the specific problem you are trying to solve. If you are training a classifier to determine whether or not an image contains a face, you can get away with reducing the size of the image quite a bit. 32x32 is a common size used by face detectors. On the other hand, if you are trying to determine whose face it is, you will most likely need a higher-resolution image.

Think about it this way: reducing the size of the image removes high-frequency information. The more of it you remove, the less specific your representation becomes. I would expect that decreasing image size would decrease false negatives and increase false positives, but again, that depends on what kinds of categories you are trying to classify. For any particular problem there is probably a "sweet spot", an image size that yields the maximum accuracy.
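
As a rough, purely illustrative sketch of that frequency argument (using scikit-image's bundled astronaut test image, nothing from the original question): shrink an image, scale it back up, and measure how much detail did not survive the round trip.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.transform import resize

original = img_as_float(data.astronaut())   # bundled 512 x 512 RGB test image
for side in (256, 128, 64, 32):
    small = resize(original, (side, side, 3), anti_aliasing=True)  # shrink
    restored = resize(small, original.shape)                       # scale back up
    residual = original - restored           # what was lost: mostly edges / fine detail
    rms_lost = np.sqrt((residual ** 2).mean())
    print(f"{side}x{side}: RMS detail lost after the round trip = {rms_lost:.4f}")
```

The residual should grow as the intermediate size shrinks; that growing remainder is the fine detail a classifier can no longer rely on at lower resolutions.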

Licensed under: CC-BY-SA with attribution