Question

I'm using AForge.NET to grab playing cards from a video stream. I grab the whole card and also the section with the actual rank of the card, and my problem is that the template matching is not really working out for me: it's either too sensitive or makes too many mistakes.

So I had this idea of splitting the image into sections like this:

[Image: the cropped rank character divided into a grid of sections]

If a section has more than 50% black, it will represent a 1; otherwise a 0. This generates a binary representation which I can compare against my "templates". As it's a playing deck, there are only the chars AKQJ1098765432, and I think they are unique and few enough for this to work. This way it won't matter if the images are 1-2 pixels off.
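A minimal sketch of that idea, assuming the cropped rank image is already binarized (dark ink on a light background). It uses System.Drawing's Bitmap, which is what AForge.NET filters produce; the class and method names are illustrative:

```csharp
using System.Drawing;

static class GridSignature
{
    // Divide the binarized rank image into rows x cols cells; each cell
    // becomes one bit: 1 if more than half of its pixels are black.
    // rows * cols must be <= 32 so the signature fits in a uint.
    public static uint Compute(Bitmap image, int rows, int cols)
    {
        uint signature = 0;
        int cellW = image.Width / cols, cellH = image.Height / rows;

        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                int black = 0;
                for (int y = r * cellH; y < (r + 1) * cellH; y++)
                    for (int x = c * cellW; x < (c + 1) * cellW; x++)
                        if (image.GetPixel(x, y).GetBrightness() < 0.5f)
                            black++;

                signature <<= 1;
                if (black * 2 > cellW * cellH)   // more than 50% black
                    signature |= 1;
            }

        return signature;
    }

    // Hamming distance between two signatures: comparing against a small
    // threshold instead of requiring equality adds extra tolerance.
    public static int Distance(uint a, uint b)
    {
        uint x = a ^ b;
        int d = 0;
        while (x != 0) { d += (int)(x & 1); x >>= 1; }
        return d;
    }
}
```

You would build one signature per template character (A, K, Q, ..., 2) once, then classify a new glyph as the template with the smallest Hamming distance.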

I suspect something like this already exists that I could reuse. Any ideas?


Solution

I think that a more robust solution is to extract scale- and rotation-invariant features from the card number and rank. You can try, for example, image moments.
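A sketch of that feature extraction under the same binarized-input assumption: the first two Hu moment invariants, which are invariant to translation, scale, and rotation. GetPixel keeps the example self-contained but is slow; in production you would iterate the raw pixel data instead:

```csharp
using System;
using System.Drawing;

static class ImageMoments
{
    // First two Hu moment invariants of a binarized glyph, so slight
    // misalignment or resizing of the cropped rank no longer matters.
    public static double[] HuMoments(Bitmap image)
    {
        // Raw moments over the ink pixels.
        double m00 = 0, m10 = 0, m01 = 0;
        for (int y = 0; y < image.Height; y++)
            for (int x = 0; x < image.Width; x++)
                if (image.GetPixel(x, y).GetBrightness() < 0.5f)
                {
                    m00++; m10 += x; m01 += y;
                }

        double cx = m10 / m00, cy = m01 / m00;   // centroid

        // Second-order central moments (translation invariant).
        double mu20 = 0, mu02 = 0, mu11 = 0;
        for (int y = 0; y < image.Height; y++)
            for (int x = 0; x < image.Width; x++)
                if (image.GetPixel(x, y).GetBrightness() < 0.5f)
                {
                    double dx = x - cx, dy = y - cy;
                    mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
                }

        // Scale normalization: eta_pq = mu_pq / mu00^(1 + (p+q)/2),
        // which for p+q = 2 is mu00 squared.
        double norm = m00 * m00;
        double eta20 = mu20 / norm, eta02 = mu02 / norm, eta11 = mu11 / norm;

        // Hu invariants h1 and h2 (rotation invariant).
        double h1 = eta20 + eta02;
        double h2 = (eta20 - eta02) * (eta20 - eta02) + 4 * eta11 * eta11;
        return new[] { h1, h2 };
    }
}
```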

After you extract those image features, you can train some classifier (e.g. a neural network) to predict the card number and rank.
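As a lightweight stand-in for the suggested neural network, a plain nearest-neighbour classifier over the same feature vectors is enough for 13 well-separated classes; RankClassifier is an illustrative name:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Nearest-neighbour classifier over feature vectors: a simple
// substitute for a trained neural network.
class RankClassifier
{
    private readonly List<(double[] Features, string Rank)> samples =
        new List<(double[] Features, string Rank)>();

    // Store one or more labeled feature vectors per rank (A, K, ..., 2).
    public void Train(double[] features, string rank) =>
        samples.Add((features, rank));

    // Predict the rank whose training sample is closest in feature space.
    public string Predict(double[] features) =>
        samples.OrderBy(s => Distance(s.Features, features)).First().Rank;

    private static double Distance(double[] a, double[] b)
    {
        double d = 0;
        for (int i = 0; i < a.Length; i++)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.Sqrt(d);
    }
}
```

Train it once with the features of known templates, e.g. classifier.Train(ImageMoments.HuMoments(aceTemplate), "A"), then call Predict on each freshly cropped glyph.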

Other tips

You can create an image fingerprint by downscaling with interpolation, to no less than 10% of the original size. For a black-and-white image the fingerprint will be in shades of grey. If you subtract the fingerprints of two images you get a metric of their similarity, and you can experimentally determine a threshold that consistently detects matches.
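A sketch of that fingerprint idea: a box-average downscale to a hypothetical 16x16 grid, with the mean absolute difference of two fingerprints as the similarity metric (0 means identical):

```csharp
using System;
using System.Drawing;

static class Fingerprint
{
    // Downscale with box averaging: each output cell is the mean
    // brightness of the pixels it covers, so a black-and-white input
    // yields a grey-level fingerprint.
    public static double[] Compute(Bitmap image, int size = 16)
    {
        var fp = new double[size * size];
        int cellW = image.Width / size, cellH = image.Height / size;

        for (int r = 0; r < size; r++)
            for (int c = 0; c < size; c++)
            {
                double sum = 0;
                for (int y = r * cellH; y < (r + 1) * cellH; y++)
                    for (int x = c * cellW; x < (c + 1) * cellW; x++)
                        sum += image.GetPixel(x, y).GetBrightness();
                fp[r * size + c] = sum / (cellW * cellH);
            }

        return fp;
    }

    // Mean absolute difference of two fingerprints: 0 means identical,
    // larger means less similar. Pick the match threshold experimentally.
    public static double Difference(double[] a, double[] b)
    {
        double d = 0;
        for (int i = 0; i < a.Length; i++)
            d += Math.Abs(a[i] - b[i]);
        return d / a.Length;
    }
}
```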

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow