Question

I'm trying to process a YUY2 image (Y, Cb, Cr). I need to filter the image and eliminate every color but red. I used Image Processing Lab to find the Y, Cb, Cr ranges to filter on (you can see the image with the filtering applied): Image Processing Lab

Edited: trying to figure out why the images don't show; anyway, the image lists the Y, Cb, Cr values to filter on, as requested.

I wrote code that reads video from a camera and applies a filter using these values, but it doesn't filter well. This is an image of the filtered result:

result of running the filter

Edited: in this image only some of the blue was filtered out, while the rest of the picture didn't change (it still has white, blue, and gray), whereas the 1st image filtered out all the colors but red.

This is my code:

#define _WIDTH 640
#define _HEIGHT 480

#define _MinY .173
#define _MaxY .737
#define _MinB -.189
#define _MaxB .106
#define _MinR .084
#define _MaxR .324

typedef struct YUY2_struct
{
    UINT8 y0, b, y1, r;  // Y0, Cb (U), Y1, Cr (V) -- one 4-byte macropixel covering two pixels
} YUY2;

HRESULT SampleGrabberCallback::LineDetection(YUY2 *pD, float arr[])
{
    int LineCount = 0;
    int LineSum = 0;
    YUY2 *tempLocation = pD;

    UINT8 MinY = (UINT8)(255 * _MinY);
    UINT8 MaxY = (UINT8)(255 * _MaxY);
    UINT8 MinR = (UINT8)((_MinR + 0.5) * 255);
    UINT8 MaxR = (UINT8)((_MaxR + 0.5) * 255);
    UINT8 MinB = (UINT8)((_MinB + 0.5) * 255);
    UINT8 MaxB = (UINT8)((_MaxB + 0.5) * 255);

    for (int i = 0; i < _WIDTH * _HEIGHT / 2; ++i)
    {
        if ((tempLocation->b > MinB) && (tempLocation->b < MaxB) && (tempLocation->r > MinR) && (tempLocation->r < MaxR))
        {
            // for 1st pixel
            if ((tempLocation->y0 < MaxY) && (tempLocation->y0 > MinY))
            {
                tempLocation->b = 128;
                tempLocation->r = 128;
                tempLocation->y0 = 0;
            }

            // for 2nd pixel
            if ((tempLocation->y1 < MaxY) && (tempLocation->y1 > MinY))
            {
                tempLocation->b = 128;
                tempLocation->r = 128;
                tempLocation->y1 = 0;
            }
        }

        tempLocation += 1;
    }
    return NOERROR;
}

Any tips on why it doesn't filter the image the way Image Processing Lab does?


Solution

Too long to put in a comment...

This is how YUY2 is packed (image: YUY2 packing; each 4-byte macropixel is laid out Y0 Cb Y1 Cr, so two pixels share one Cb and one Cr):

  • Y is luma
  • Cb or U is the blue component
  • Cr or V is the red component

I don't see you correctly indexing the chroma planes above...
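For reference, here is a minimal sketch in plain C (not your DirectShow callback; buf, width, height and neutralize_chroma are illustrative names) of how the four bytes of a packed YUY2 macropixel are usually addressed:

#include <stdint.h>
#include <stddef.h>

/* Sketch only: neutralizes the chroma of every pixel to show how the
   bytes of a YUY2 macropixel are indexed, not a drop-in filter. */
static void neutralize_chroma(uint8_t *buf, int width, int height)
{
    size_t macropixels = (size_t)width * height / 2;  /* 4 bytes cover 2 pixels */
    for (size_t i = 0; i < macropixels; ++i)
    {
        uint8_t *mp = buf + i * 4;
        /* mp[0] = Y0, mp[1] = Cb (U), mp[2] = Y1, mp[3] = Cr (V) */
        mp[1] = 0x80;  /* neutral blue-difference */
        mp[3] = 0x80;  /* neutral red-difference  */
    }
}

Your struct happens to match that layout, so the question is whether the thresholds you compare against are in the right numeric range.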

This loop in C (using SDL) sets every Cb byte to the value 0x80:

for (Uint32 i = 1; i < W * H * 2; i += 4) {   /* byte 1 of every 4-byte macropixel is Cb */
    *(my_overlay->pixels[0] + i) = 0x80;      /* 0x80 = neutral chroma */
}

The allowed range for pixel values is integers 0-255 (a lot more complex in reality, though). The program above seems to use normalized values between 0 and 1. How are you performing the normalization?
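On that last point, here is a sketch of the two common mappings from normalized YCbCr to 8-bit values; which one Image Processing Lab (or your camera) actually uses is an assumption you would need to verify:

#include <stdint.h>

/* Full-range mapping -- what the question's #defines are converted with:
   Y in [0,1] -> 0..255, Cb/Cr in [-0.5,0.5] -> 0..255 centered on 128. */
static uint8_t full_luma(float y)     { return (uint8_t)(y * 255.0f); }
static uint8_t full_chroma(float c)   { return (uint8_t)((c + 0.5f) * 255.0f); }

/* BT.601 studio-range mapping -- what many camera YUY2 feeds deliver:
   Y -> 16..235, Cb/Cr -> 16..240, both centered on 128 for chroma. */
static uint8_t studio_luma(float y)   { return (uint8_t)(16.0f + 219.0f * y); }
static uint8_t studio_chroma(float c) { return (uint8_t)(128.0f + 224.0f * c); }

If the thresholds were measured against one mapping and the camera delivers the other, windows like your Y range will be noticeably off and the filter will pass the wrong pixels.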

Hope this helps, if not, drop me a comment.
