Question

So I've built a downsampling algorithm in XNA 3.1 (yes, I'm aware it's horribly outdated; yes, I have reasons for using it) and HLSL. Essentially, it works by applying a Gaussian blur to the original texture and then resizing it with XNA's default nearest-neighbour rescaling. My thinking was that a Gaussian blur would approximate the average of a region of colours, so it would essentially be a cheap way to reduce aliasing.

It works well, and very quickly, but it produces some interesting artefacts: it seems to very slightly stretch the image. This usually isn't noticeable, but some of the things I'm downsampling are sprite sheets, and when they're animated it is very clear that the sprites have not ended up in the correct positions. I'm wondering whether a different resampler (also written in HLSL, for the speed of the GPU) might be a better option, or whether there's an error here that I can fix. I'll post my code below in case anyone can enlighten me.

First is my HLSL Gaussian effects file:

#define RADIUS  7
#define KERNEL_SIZE (RADIUS * 2 + 1)

float weightX[KERNEL_SIZE];
float weightY[KERNEL_SIZE];
float2 offsetH[KERNEL_SIZE];
float2 offsetV[KERNEL_SIZE];

texture colorMapTexture;

sampler TextureSampler : register(s0);

void BlurH(inout float4 color : COLOR0, float2 texCoord : TEXCOORD0)
{
    float4 c = float4(0.0f, 0.0f, 0.0f, 0.0f);

    for (int i = 0; i < KERNEL_SIZE; ++i)
        c += tex2D(TextureSampler, texCoord + offsetH[i]) * weightX[i];

    color = c;
}

void BlurV(inout float4 color : COLOR0, float2 texCoord : TEXCOORD0)
{
    float4 c = float4(0.0f, 0.0f, 0.0f, 0.0f);

    for (int i = 0; i < KERNEL_SIZE; ++i)
        c += tex2D(TextureSampler, texCoord + offsetV[i]) * weightY[i];

    color = c;
}

technique GaussianBlur
{
    pass
    {
        PixelShader = compile ps_2_0 BlurH();
    }
    pass
    {
        PixelShader = compile ps_2_0 BlurV();
    }
}

And here is my code for initialising the Gaussian effect (note that gaussianBound is set to 8, i.e. 1 + the RADIUS defined in the HLSL file):

public static Effect GaussianBlur(float amount, float radx, float rady, Point scale)
{
    Effect rtrn = gaussianblur.Clone(MainGame.graphicsManager.GraphicsDevice);

    if (radx >= gaussianBound)
    {
        radx = gaussianBound - 0.000001F;
    }
    if (rady >= gaussianBound)
    {
        rady = gaussianBound - 0.000001F;
    }
    //If blur is too great, image becomes transparent,
    //so cap how much blur can be used.
    //Reduces quality of very small images.

    Vector2[] offsetsHoriz, offsetsVert;
    float[] kernelx = new float[(int)(radx * 2 + 1)];
    float sigmax = radx / amount;
    float[] kernely = new float[(int)(rady * 2 + 1)];
    float sigmay = rady / amount;
    //Initialise kernels and sigmas (separately to allow for different scale factors in x and y)

    float twoSigmaSquarex = 2.0f * sigmax * sigmax;
    float sigmaRootx = (float)Math.Sqrt(twoSigmaSquarex * Math.PI);
    float twoSigmaSquarey = 2.0f * sigmay * sigmay;
    float sigmaRooty = (float)Math.Sqrt(twoSigmaSquarey * Math.PI);
    float totalx = 0.0f;
    float totaly = 0.0f;
    float distance = 0.0f;
    int index = 0;
    //Initialise gaussian constants, as well as totals for normalisation.

    offsetsHoriz = new Vector2[kernelx.Length];
    offsetsVert = new Vector2[kernely.Length];

    float xOffset = 1.0f / scale.X;
    float yOffset = 1.0f / scale.Y;
    //Set offsets for use in the HLSL shader.

    for (int i = -(int)radx; i <= radx; ++i)
    {
        distance = i * i;
        index = i + (int)radx;
        kernelx[index] = (float)Math.Exp(-distance / twoSigmaSquarex) / sigmaRootx;
        //Set x kernel values with gaussian function.
        totalx += kernelx[index];
        offsetsHoriz[index] = new Vector2(i * xOffset, 0.0f);
        //Set x offsets.
    }

    for (int i = -(int)rady; i <= rady; ++i)
    {
        distance = i * i;
        index = i + (int)rady;
        kernely[index] = (float)Math.Exp(-distance / twoSigmaSquarey) / sigmaRooty;
        //Set y kernel values with gaussian function.
        totaly += kernely[index];
        offsetsVert[index] = new Vector2(0.0f, i * yOffset);
        //Set y offsets.
    }

    for (int i = 0; i < kernelx.Length; ++i)
        kernelx[i] /= totalx;

    for (int i = 0; i < kernely.Length; ++i)
        kernely[i] /= totaly;

    //Normalise kernel values.

    rtrn.Parameters["weightX"].SetValue(kernelx);
    rtrn.Parameters["weightY"].SetValue(kernely);
    rtrn.Parameters["offsetH"].SetValue(offsetsHoriz);
    rtrn.Parameters["offsetV"].SetValue(offsetsVert);
    //Set HLSL values.

    return rtrn;
}
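As a sanity check, the kernel construction above can be mirrored in a short Python sketch (an illustration only; `radius` and `amount` stand in for the C# parameters) to confirm that the normalised weights sum to 1, so the blur neither darkens nor brightens the image:

```python
import math

def gaussian_kernel(radius, amount):
    # sigma = radius / amount, matching the C# code above.
    sigma = radius / amount
    two_sigma_sq = 2.0 * sigma * sigma
    norm = math.sqrt(two_sigma_sq * math.pi)
    # Unnormalised Gaussian weights over [-radius, radius].
    weights = [math.exp(-(i * i) / two_sigma_sq) / norm
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    # Normalise so the weights sum to exactly 1.
    return [w / total for w in weights]

kernel = gaussian_kernel(7, 2.0)
print(len(kernel))    # 15 taps, i.e. KERNEL_SIZE for RADIUS 7
print(sum(kernel))    # 1.0 (up to floating-point error)
```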

Beyond this, my function simply draws to a texture for each pass of the effect, then draws the result to a new texture at a different scale. This looks really nice but, as I said, it produces these artefacts of things not being quite in the right place. Some help here would be appreciated.

(Image: artefacts showing)


Solution

Well, I've discovered something: it has nothing to do with the Gaussian blur. The issue is that I'm scaling down with nearest neighbour, which produces these artefacts because of the loss of data (for example, when something needs to be at, essentially, pixel 5.5, nearest neighbour just puts it at pixel 5, giving a position error). Thanks to everyone who tried to help with this, but it looks like I'm just going to have to rethink my algorithm slightly.
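The position error is easy to see in isolation. A nearest-neighbour downsample maps each destination pixel back to a source pixel with a truncation step, so features drift by up to a pixel and the spacing between samples is uneven, which is exactly what reads as "stretching". A hypothetical sketch of the index mapping (not the XNA sampler itself):

```python
def nearest_neighbour_indices(dst_size, src_size):
    # For each destination pixel, pick the nearest source pixel
    # by truncating the back-mapped coordinate.
    scale = src_size / dst_size
    return [int(x * scale) for x in range(dst_size)]

# Downsampling 16 source pixels to 10: note the uneven steps
# (gaps of 1 or 2 source pixels) - that unevenness is the artefact.
print(nearest_neighbour_indices(10, 16))
# -> [0, 1, 3, 4, 6, 8, 9, 11, 12, 14]
```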

I've fixed it by adding an extra constraint: the resampler only works for integer scale factors. Anything else resamples to the nearest available integer factor, then scales the rest with nearest neighbour. It's pretty much exactly what I had working before, but now a hell of a lot faster thanks to HLSL. I was hoping for an arbitrary scaling algorithm, but this works well enough for my needs. It's not perfect, as I still get offset errors (which are almost impossible to avoid entirely when downsampling, thanks to the loss of data), but they are now clearly less than a pixel, and so not noticeable unless you're looking for them.

Other tips

I have a doubt: the second pass should use the first pass's result. Otherwise you could just combine BlurH and BlurV into a single pass and the result would be the same. I don't see any code that uses the first pass's result, or that transfers it from the first pass to the second.
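The point in this answer can be checked numerically: a separable blur only reproduces the full 2-D Gaussian when the vertical pass reads the output of the horizontal pass. If the second pass samples the original texture instead, the horizontal blur is simply discarded. A small NumPy sketch (edge-clamped, 3-tap kernel for brevity):

```python
import numpy as np

def blur_1d(img, kernel, axis):
    # Convolve a 2-D array along one axis, clamping at the edges
    # (analogous to clamp texture addressing).
    r = len(kernel) // 2
    pad = [(r, r) if a == axis else (0, 0) for a in range(img.ndim)]
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i, w in enumerate(kernel):
        sl = [slice(None)] * img.ndim
        sl[axis] = slice(i, i + img.shape[axis])
        out += w * padded[tuple(sl)]
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernel = [0.25, 0.5, 0.25]

# Correct separable blur: the vertical pass consumes the horizontal result.
chained = blur_1d(blur_1d(img, kernel, axis=1), kernel, axis=0)

# If the second pass samples the original image instead, only the
# vertical blur survives in the final output.
broken = blur_1d(img, kernel, axis=0)

print(np.allclose(chained, broken))  # False: the intermediate matters
```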

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow