Question

When converting from RGB to grayscale, it is said that specific weights to channels R, G, and B ought to be applied. These weights are: 0.2989, 0.5870, 0.1140.

It is said that the reason for this is the difference in human perception of/sensitivity towards these three colors. Sometimes it is also said that these are the values used to compute the NTSC signal.

However, I didn't find a good reference for this on the web. What is the source of these values?


Solution

The specific numbers in the question are from CCIR 601 (see the Wikipedia link below).

If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
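
For instance, here is a minimal C sketch that applies two common weight sets directly to the same gamma-encoded pixel (the coefficients are from CCIR 601 and Rec. 709; the pixel value and everything else are illustrative):

    #include <stdio.h>

    /* Sketch: apply two common weight sets directly to gamma-encoded
       8-bit values and compare the resulting gray levels. */
    int main(void)
    {
        unsigned char r = 180, g = 120, b = 60;                 /* an arbitrary pixel */
        double gray601 = 0.2989 * r + 0.5870 * g + 0.1140 * b;  /* CCIR 601 */
        double gray709 = 0.2126 * r + 0.7152 * g + 0.0722 * b;  /* Rec. 709 */
        printf("CCIR 601: %.1f   Rec. 709: %.1f\n", gray601, gray709);
        return 0;
    }

For typical image content the two results land within a few gray levels of each other, which is why the difference is hard to see.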

Here are some more links on color in general:

Wikipedia Luma

Bruce Lindbloom's outstanding website

Chapter 4 on Color in Colin Ware's book "Information Visualization", ISBN 1-55860-819-2; this long link to Ware on books.google.com may or may not work

cambridgeincolor: excellent, well-written "tutorials on how to acquire, interpret and process digital photographs using a visually-oriented approach that emphasizes concept over procedure"

Should you run into "linear" vs. "nonlinear" RGB, here's part of an old note to myself on this. To repeat: in practice you won't see much difference.


RGB -> ^gamma -> Y -> L*

In color science, the common RGB values, as in HTML rgb( 10%, 20%, 30% ), are called "nonlinear" or gamma-corrected. "Linear" values are defined as

Rlin = R^gamma,  Glin = G^gamma,  Blin = B^gamma

where gamma is 2.2 for many PCs. The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here I'll drop the '.

Brightness on a CRT display is proportional to RGBlin = RGB ^ gamma, so 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness. (LCD displays are more complex; furthermore, some graphics cards compensate for gamma.)

To get the measure of lightness called L* from RGB, first divide R G B by 255, and compute

Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma

This is Y in XYZ color space; it is a measure of color "luminance". (The real formulas are not exactly x^gamma, but close; stick with x^gamma for a first pass.)

Finally,

L* = 116 * Y^(1/3) - 16

"... aspires to perceptual uniformity [and] closely matches human perception of lightness." -- Wikipedia Lab color space

OTHER TIPS

I found this publication referenced in an answer to a previous, similar question. It is very helpful:

http://cadik.posvete.cz/color_to_gray_evaluation/

It shows 'tons' of different methods to generate grayscale images with different outcomes!

Here's some code in C to convert RGB to grayscale. The usual weighting for RGB-to-grayscale conversion is roughly 0.3R + 0.59G + 0.11B. These weights aren't absolutely critical, so you can play with them. I have made them 0.25R + 0.5G + 0.25B (cheap to compute with bit shifts); it produces a slightly darker image.

NOTE: The following code assumes an xRGB 32-bit pixel format

void rgb_to_gray(const unsigned int *pntrBWImage, unsigned char *I_Out,
                 int width, int height)  // assumes 4*width*height bytes, i.e. 32 bits / 4 bytes per pixel
{
    unsigned int fourBytes;
    unsigned char r, g, b;
    for (int index = 0; index < width * height; index++)
    {
        fourBytes = pntrBWImage[index];        // caches 4 bytes at a time
        r = (unsigned char)(fourBytes >> 16);  // bits 16..23: R
        g = (unsigned char)(fourBytes >> 8);   // bits 8..15:  G
        b = (unsigned char)fourBytes;          // bits 0..7:   B

        I_Out[index] = (r >> 2) + (g >> 1) + (b >> 2);    // 0.25R + 0.5G + 0.25B; runs in 0.00065s on my pc, slightly darker results
        //I_Out[index] = ((unsigned int)(r + g + b)) / 3; // pure average; runs in 0.0011s on my pc
    }
}
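
A hypothetical caller for the function above (the buffer names and dimensions here are illustrative, not part of the original snippet):

    #include <stdlib.h>

    /* Illustrative usage of the rgb_to_gray function above. */
    int main(void)
    {
        int width = 640, height = 480;
        unsigned int  *rgb  = malloc(width * height * sizeof *rgb); /* xRGB input   */
        unsigned char *gray = malloc(width * height);               /* 8-bit output */
        /* ... fill rgb with pixel data ... */
        rgb_to_gray(rgb, gray, width, height);
        free(rgb);
        free(gray);
        return 0;
    }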

Check out the Color FAQ for information on this. These values come from the standardization of the RGB values we use in our displays. Actually, according to the Color FAQ, the values you are using are outdated: they were used for the original NTSC standard, not for modern monitors.

Here's a paper on how these numbers (or similar ones) were derived:

https://web.archive.org/web/20160303201512/http://www.cis.rit.edu/mcsl/research/broadbent/CIE1931_RGB.pdf

What is the source of these values?

The "source" of the coefficients posted are the NTSC specifications which can be seen in Rec601 and Characteristics of Television.

The "ultimate source" are the CIE circa 1931 experiments on human color perception. The spectral response of human vision is not uniform. Experiments led to weighting of tristimulus values based on perception. Our L, M, and S cones1 are sensitive to the light wavelengths we identify as "Red", "Green", and "Blue" (respectively), which is where the tristimulus primary colors are derived.2

The linear-light (3) spectral weightings for sRGB (and Rec. 709) are:

Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y

These are specific to the sRGB and Rec. 709 colorspaces, which are intended to represent computer monitors (sRGB) or HDTV monitors (Rec. 709), and are detailed in the ITU documents for Rec. 709 and also in BT.2380-2 (10/2018).

FOOTNOTES
(1) Cones are the color-detecting cells of the eye's retina.
(2) However, the chosen tristimulus wavelengths are NOT at the "peak" of each cone type; instead, the tristimulus values are chosen such that they stimulate one particular cone type substantially more than another, i.e. separation of stimulus.
(3) You need to linearize your sRGB values before applying the coefficients. I discuss this in another answer here.
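
As a sketch of that linearization step, here is the standard piecewise sRGB transfer function applied channel-wise before the Rec. 709 weights (the pixel value is illustrative):

    #include <math.h>
    #include <stdio.h>

    /* sRGB gamma-encoded value (0..1) -> linear-light value (0..1),
       per the piecewise sRGB transfer function. */
    static double srgb_to_linear(double c)
    {
        return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    }

    int main(void)
    {
        unsigned char r = 180, g = 120, b = 60;   /* illustrative pixel */
        double Y = 0.2126 * srgb_to_linear(r / 255.0)
                 + 0.7152 * srgb_to_linear(g / 255.0)
                 + 0.0722 * srgb_to_linear(b / 255.0);
        printf("linear-light luminance Y = %.4f\n", Y);
        return 0;
    }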

These values vary from person to person, especially for people who are colorblind.

Is all this really necessary? Human perception and CRT vs. LCD rendering will vary, but the R, G, B intensities do not. Why not L = (R + G + B)/3 and set the new RGB to L, L, L?

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow