Question

I want to build a clothing classifier that takes a photo of an item of clothing and classifies it as 'jeans', 'dress', 'trainers' etc.

Some examples:

(example product images: jeans, trainers)

These images are from retailer websites, so they are typically taken from the same angle, usually on a white or pale background -- they tend to be very similar.

I have a set of several thousand images whose category I already know, which I can use to train a machine-learning algorithm.

However, I'm struggling for ideas of what features I should use. The features I have so far:

from PIL import Image


def get_aspect_ratio(pil_image):
    # getbbox() returns (left, upper, right, lower); compute width and
    # height from those rather than assuming the box starts at (0, 0).
    left, upper, right, lower = pil_image.getbbox()

    return (right - left) / (lower - upper)


def get_greyscale_array(pil_image):
    """Convert the image to a 13x13 square grayscale image, and return a
    list of colour values 0-255.

    I've chosen 13x13 as it's very small but still allows you to
    distinguish the gap between legs on jeans in my testing.

    """
    grayscale_image = pil_image.convert('L')
    small_image = grayscale_image.resize((13, 13), Image.LANCZOS)

    pixels = []
    for y in range(13):
        for x in range(13):
            pixels.append(small_image.getpixel((x, y)))

    return pixels


def get_image_features(image_path):
    image = Image.open(image_path)

    features = {}
    features['aspect_ratio'] = get_aspect_ratio(image)

    for index, pixel in enumerate(get_greyscale_array(image)):
        features["pixel%s" % index] = pixel

    return features

I'm extracting a simple 13x13 grayscale grid as a crude approximation of shape. However, using these features with nltk's NaiveBayesClassifier only gets me 34% accuracy.

What features would work well here?


Solution

This is a tough problem, so there are many possible approaches.

One common (though complicated) method is to take the input image, superpixelate it, and compute descriptors (such as SIFT or SURF) over those superpixels, building a bag-of-words representation by accumulating histograms per superpixel; this extracts the key information from a group of pixels while reducing dimensionality. A Conditional Random Field (CRF) algorithm then searches for relationships between the superpixels in the image and classifies each group of pixels into a known category. For superpixelating images, the scikit-image package implements the SLIC algorithm (segmentation.slic); for the CRF you should take a look at the PyStruct package. SIFT and SURF can be computed using OpenCV.
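The superpixel step above can be sketched with scikit-image's SLIC implementation. This is a minimal, hedged example: the image is a synthetic stand-in for a product photo, and the parameter values are illustrative, not tuned.

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic 64x64 RGB "product photo": a dark garment-like rectangle
# on a white background, standing in for a real retailer image.
image = np.full((64, 64, 3), 255, dtype=np.uint8)
image[16:48, 24:40] = (60, 60, 120)

# Partition the image into roughly 50 superpixels; the result is an
# integer label map the same height/width as the image.
segments = slic(image, n_segments=50, compactness=10)

print(segments.shape)  # (64, 64)
print(segments.max())  # number of superpixel labels actually produced
```

Each labelled region would then be summarised by its descriptors (SIFT/SURF histograms) before being passed to the CRF.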


A simpler version is to compute descriptors of a given image (SIFT, SURF, edges, histograms, etc.) and use them as inputs to a classification algorithm. If you want to start there, scikit-learn is probably the easiest and most powerful package for the job.
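As a concrete sketch of that simpler route, here is a toy pipeline using a global grayscale histogram as the descriptor and a scikit-learn SVM as the classifier. The image arrays and labels are synthetic stand-ins, not real clothing data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def histogram_descriptor(gray, bins=32):
    """Normalised intensity histogram: a crude but cheap global descriptor."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Two fake "categories": dark-ish vs light-ish 32x32 grayscale images.
dark = [rng.integers(0, 100, (32, 32)) for _ in range(20)]
light = [rng.integers(150, 256, (32, 32)) for _ in range(20)]

X = np.array([histogram_descriptor(im) for im in dark + light])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel='rbf').fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

With real images you would swap the histogram for richer descriptors and evaluate on a held-out test set rather than the training set.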

OTHER TIPS

HOG is commonly used in object-detection schemes. OpenCV includes a HOG descriptor implementation:

http://docs.opencv.org/modules/gpu/doc/object_detection.html
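If you would rather try HOG without OpenCV, scikit-image also provides an implementation that returns one flat vector per image, ready for a classifier. This is a hedged sketch on a synthetic image; the cell/block sizes are illustrative defaults, not tuned values.

```python
import numpy as np
from skimage.feature import hog

# Synthetic 64x64 grayscale image with a garment-like blob.
gray = np.zeros((64, 64), dtype=np.uint8)
gray[16:48, 24:40] = 200

# 9 orientation bins, 8x8-pixel cells, 2x2-cell blocks: for a 64x64
# image this yields a fixed-length descriptor (7*7 blocks * 2*2 * 9).
features = hog(gray,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

print(features.shape)  # (1764,)
```

Because the descriptor length is fixed for a fixed image size, the vectors can be stacked directly into a training matrix.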

You can also use BoW based features. Here's a post that explains the method: http://gilscvblog.wordpress.com/2013/08/23/bag-of-words-models-for-visual-categorization/
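The BoW idea from that post can be sketched in a few lines: cluster local descriptors into a "visual vocabulary" with k-means, then describe each image by a histogram of which clusters its descriptors fall into. The random vectors below are stand-ins for real SIFT/SURF descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
k = 8  # vocabulary size (number of visual words)

# Pretend the training images yielded 500 local descriptors of dimension 64.
train_descriptors = rng.random((500, 64))
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def bow_histogram(descriptors):
    """Histogram of visual-word assignments, normalised to sum to 1."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# One image's descriptors become one fixed-length feature vector.
image_descriptors = rng.random((100, 64))
print(bow_histogram(image_descriptors))
```

In practice k is much larger (hundreds to thousands of words) and is chosen by cross-validation.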

Using all the raw pixel values of an image directly as features isn't great, especially as the number of features increases, because of the very large search space (even 169 features represent a large space, which can be difficult for any classification algorithm to handle). This is perhaps why moving to a 20x20 image actually degrades performance compared to 13x13. Reducing your feature set/search space might improve performance, since it simplifies the classification problem.

A very simple (and generic) way to do this is to use pixel statistics as features: the mean and standard deviation (SD) of the raw pixel values in a given region of the image. This captures the contrast/brightness of that region.

You can choose the regions based on trial and error, e.g., these can be:

  • a series of concentric circular regions, increasing in radius, in the centre of the image. The mean and SD of four circular regions of increasing size gives eight features.
  • a series of rectangular regions, either increasing in size or fixed size but placed around different regions in the image. The mean and SD of four non-overlapping regions (of size 6x6) in the four corners of the image and one in the centre gives 10 features.
  • a combination of circular and square regions.
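The second bullet (four 6x6 corner regions plus one centre region, giving 10 features) can be sketched directly with NumPy slicing. The example image is a random stand-in.

```python
import numpy as np

def region_stats(gray):
    """gray: 2D array of grayscale values. Returns 10 features:
    mean and SD for four 6x6 corner regions and one 6x6 centre region."""
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    regions = [
        gray[:6, :6],                         # top-left corner
        gray[:6, -6:],                        # top-right corner
        gray[-6:, :6],                        # bottom-left corner
        gray[-6:, -6:],                       # bottom-right corner
        gray[cy - 3:cy + 3, cx - 3:cx + 3],   # centre
    ]
    features = []
    for r in regions:
        features.append(r.mean())
        features.append(r.std())
    return features

example = np.random.default_rng(0).integers(0, 256, (32, 32))
print(len(region_stats(example)))  # 10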

Have you tried SVM? It generally performs better than Naive Bayes.
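Swapping in an SVM is straightforward because the question's get_image_features() returns dicts, which scikit-learn's DictVectorizer can turn into arrays. This is a hedged sketch: the fake_features() helper below is a hypothetical stand-in producing synthetic feature dicts in the same shape as the question's.

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_features(label):
    # Stand-in for get_image_features(): aspect ratio + 169 pixel values,
    # with "jeans" images darker than "dress" images so the toy data is
    # separable.
    base = 60 if label == 'jeans' else 180
    d = {'aspect_ratio': rng.uniform(0.5, 2.0)}
    for i in range(169):
        d['pixel%s' % i] = int(rng.integers(base, base + 60))
    return d

X = [fake_features('jeans') for _ in range(20)] + \
    [fake_features('dress') for _ in range(20)]
y = ['jeans'] * 20 + ['dress'] * 20

# Scaling matters for SVMs: raw 0-255 pixel values and a ~1.0 aspect
# ratio are on very different scales.
model = make_pipeline(DictVectorizer(sparse=False), StandardScaler(), SVC())
model.fit(X, y)
print(model.score(X, y))  # training accuracy on the toy data
```

On real data, evaluate with cross-validation and tune C/gamma rather than scoring on the training set.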

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow