Question

I am trying to make an HDR rendering pass in my 3D app.

I understand that in order to get the average light of a scene, you need to downsample the render output down to a 1x1 texture. This is what I'm struggling with at the moment.

I have set up a 1x1 render target to which I'm going to draw the previous render output. I first tried drawing the output to that render target using a simple SpriteBatch Draw call with a destination rectangle. It was too much to hope that I'd discovered something nobody else had thought of: the result was not the entire scene downsampled into a 1x1 texture. It looked as if only the top-left pixel was being drawn, no matter how much I played with destination rectangles or the Scale overloads.

Now I'm trying another screen-quad render pass, using a shader technique to sample the scene and render a single pixel into the render target. So, to be fair, the title is a bit misleading: what I'm really trying to do is sample a grid of pixels spread evenly across the surface and average them out. This is where I'm stumped.

I have come across this tutorial: http://www.xnainfo.com/content.php?content=28

In the file that can be downloaded there are several examples of downsampling; the one I like most uses a loop that goes through 16 pixels, averages them out and returns the result.

Nothing I have tried so far has produced viable output. The downsampled texture is rendered in the corner of my screen for debugging purposes.

I have modified the HLSL code to look like this:

pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;

    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + 0.5f;

    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
   float4 color = 0;
   float2 position = input.texCoord;

   for(int x = 0; x < 4; x++)
   {
        for (int y = 0; y < 4; y++)
        {
            color += tex2D(getscene, position + float2(offsets[x], offsets[y]));
        }
   }

   color /= 16;

   return color;
}

This particular line is where I believe I'm making the error:

color += tex2D(getscene, position + float2(offsets[x], offsets[y]));

I have never properly understood how the texCoord values used in tex2D sampling work. When making a motion blur effect, I had to pass in values so infinitesimal I was afraid they'd be rounded off to zero just to produce a normal-looking result, while other times passing in large values like 30 or 50 was necessary to produce effects that occupy maybe a third of the screen.

So anyway, my question is:

Given a screen quad (so, a flat surface), how do I increment or modify the texCoord values so that a grid of pixels, spread out evenly, is sampled across it?

I have tried using:

color += tex2D(getscene, position + float2(offsets[x] * (1/maxX), offsets[y] * (1/maxY)));

where maxX and maxY are the screen resolution, and:

color += tex2D(getscene, position + float2(offsets[x] * x, offsets[y] * y));

...and other "shots in the dark", and every result has ended up the same: the final pixel appears identical to the one in the exact middle of my screen, as if that were the only pixel being sampled.

How to solve that?

Also, how do those texture coordinates work? Where is (0,0)? What's the maximum?

Thanks all in advance.


Solution

I have solved the problem.

I believed that using a dozen render targets, each halving the resolution of the previous one, would be expensive, but I was wrong.

On a mid-range GPU from 2013, an nVidia GTX 560, the cost of re-rendering to 10 render targets was barely noticeable; in concrete numbers, performance dropped from about 230 FPS to roughly 220 FPS.

The solution follows. It assumes you already have your entire scene processed and rendered to a render target, which in my case is "renderOutput".

First, I declare a renderTarget array:

public RenderTarget2D[] HDRsampling;

Next, I calculate how many targets I will need in my Load() method, which is called between the menu and game update loops (a transition state for loading game assets that aren't required in the menu), and initialize them properly:

    int counter = 0;
    int downX = Game1.maxX;
    int downY = Game1.maxY;

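    // Count how many halvings it takes to get from the back-buffer resolution down to 1x1.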
    do
    {   //halve both dimensions, clamping at 1 so one axis hitting 1 early doesn't cut the chain short
        downX = Math.Max(1, downX / 2);
        downY = Math.Max(1, downY / 2);
        counter++;

    } while (downX > 1 || downY > 1);

    HDRsampling = new RenderTarget2D[counter];
    downX = Game1.maxX / 2;
    downY = Game1.maxY / 2;

    for (int i = 0; i < counter; i++)
    {
        HDRsampling[i] = new RenderTarget2D(Game1.graphics.GraphicsDevice, downX, downY);
        downX = Math.Max(1, downX / 2);
        downY = Math.Max(1, downY / 2);
    }
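
A word of caution: the three-argument constructor above gives you SurfaceFormat.Color targets, whose channels clamp to the [0, 1] range, so genuinely HDR values will not survive the chain. If your scene target stores values above 1.0, a floating-point format is probably what you want. A minimal sketch of the construction line, assuming XNA 4.0's HiDef profile and the six-argument RenderTarget2D overload:

HDRsampling[i] = new RenderTarget2D(Game1.graphics.GraphicsDevice, downX, downY,
                                    false,                      // no mipmaps needed, the chain does the downsampling
                                    SurfaceFormat.HdrBlendable, // half-float target so values above 1.0 are preserved
                                    DepthFormat.None);          // a screen-quad pass needs no depth buffer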

And finally, the C# rendering code is as follows:

        if (settings.HDRpass)
        {   //HDR Rendering passes
            //Uses Hardware bilinear downsampling method to obtain 1x1 texture as scene average

            Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[0]);
            Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
            downsampler.Parameters["maxX"].SetValue(HDRsampling[0].Width);
            downsampler.Parameters["maxY"].SetValue(HDRsampling[0].Height);
            downsampler.Parameters["scene"].SetValue(renderOutput);
            downsampler.CurrentTechnique.Passes[0].Apply();
            quad.Render();

            for (int i = 1; i < HDRsampling.Length; i++)
            {   //Downsample the scene texture repeatedly until the last HDRsampling target, which should be a 1x1 pixel
                Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[i]);
                Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
                downsampler.Parameters["maxX"].SetValue(HDRsampling[i].Width);
                downsampler.Parameters["maxY"].SetValue(HDRsampling[i].Height);
                downsampler.Parameters["scene"].SetValue(HDRsampling[i-1]);
                downsampler.CurrentTechnique.Passes[0].Apply();
                quad.Render();

            }
            //assign the 1x1 pixel
            downsample1x1 = HDRsampling[HDRsampling.Length - 1];

            Game1.graphics.GraphicsDevice.SetRenderTarget(extract);
            //switch out rendertarget so we can send the 1x1 sample to the shader.
            bloom.Parameters["downSample1x1"].SetValue(downsample1x1);

        }

This obtains the downSample1x1 texture, which is later used in the final pass of the final shader.
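
For occasional debugging it can also be handy to pull the 1x1 result back to the CPU and check the average by hand. A minimal sketch, assuming the default Color surface format and that the target's contents are still valid at the point where you read them (reading back stalls the GPU, so don't do it every frame):

Color[] average = new Color[1];
downsample1x1.GetData(average);  // copy the single pixel back to CPU memory

// Rec. 709 luma weights give a rough average luminance for the whole scene.
float averageLuminance = 0.2126f * (average[0].R / 255f)
                       + 0.7152f * (average[0].G / 255f)
                       + 0.0722f * (average[0].B / 255f);
System.Diagnostics.Debug.WriteLine("Average luminance: " + averageLuminance);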

The shader code for the actual downsampling is bare-bones simple:

texture2D scene;

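// Linear min/mag filtering does the actual averaging: with the destination exactly
// half the size of the source, each bilinear fetch returns the mean of a 2x2 block
// of source texels.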
sampler getscene = sampler_state
{
    texture = <scene>;
    MinFilter = linear;
    MagFilter = linear;
    MipFilter = point;
    MaxAnisotropy = 1;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

float maxX, maxY;

struct vertexShaderStruct
{
    float3 pos      : POSITION0;
    float2 texCoord : TEXCOORD0;
};

struct pixelShaderStruct
{
    float4 position : POSITION0;
    float2 texCoord : TEXCOORD0;
};

pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;

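    // maxX/maxY hold the destination target's dimensions, so this adds half a
    // destination texel to the texture coordinates, the usual D3D9/XNA correction
    // for the half-pixel offset between pixel centers and texel centers on a
    // full-screen quad.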
    float2 offset = float2 (0.5 / maxX, 0.5/maxY);
    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + offset;

    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
   return tex2D(getscene, input.texCoord);
}

technique Sample
{
    pass P1
    {
        VertexShader = compile vs_2_0 vertShader();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

How you implement your scene's average luminance is up to you; I'm still experimenting with all of that, but I hope this helps somebody out there!

OTHER TIPS

The tutorial you linked is a good demonstration of how to do Tonemapping, but it sounds to me like you haven't performed the repeated downsampling that's required to get to a 1x1 image.

You can't simply take a high-resolution image (1920x1080, say), pick 16 arbitrary pixels, add them up, divide by 16 and call that your luminance.

You need to repeatedly downsample the source image to smaller and smaller textures, usually by half in each dimension at every stage. Each pixel of the resulting downsample is an average of a 2x2 grid of pixels on the previous texture (this is handled by bilinear sampling). Eventually you'll end up with a 1x1 image that is the average colour value for the entire original 1920x1080 image and from that you can calculate the average luminance of the source image.

Without repeated downsampling, your luminance calculation is going to be a very noisy value, since its input is a mere 16 of the ~2M pixels in the original image. To get a correct and smooth luminance, every pixel in the original image needs to contribute.
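
To make that concrete, here is a rough sketch of the size chain that repeated integer halving produces, using a hypothetical helper and a hypothetical 1920x1080 source; it reaches 1x1 after 10 passes:

// using System; using System.Collections.Generic; using Microsoft.Xna.Framework;
static List<Point> DownsampleChain(int width, int height)
{
    var chain = new List<Point>();
    while (width > 1 || height > 1)
    {
        // Halve both dimensions, clamping at 1 so the narrow axis doesn't collapse to 0.
        width = Math.Max(1, width / 2);
        height = Math.Max(1, height / 2);
        chain.Add(new Point(width, height));
    }
    return chain;
}

// DownsampleChain(1920, 1080) yields:
// 960x540, 480x270, 240x135, 120x67, 60x33, 30x16, 15x8, 7x4, 3x2, 1x1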

Licensed under: CC-BY-SA with attribution