Question

Hello, and sorry for the obscure title :} I'll try to explain as best I can.

First of all, I'm new to HLSL, but I understand the pipeline and all that fairy-world stuff. What I'm trying to do is use the GPU for general-purpose computations (GPGPU).

What I don't know is: how can I read* the vertices (that have been transformed by the vertex shader) back into my XNA application? I read something about using the GPU's texture memory, but I can't find anything solid...

Thanks in advance for any info/tip! :-)

*Not sure if this is possible because of the rasterizer and the pixel shader (if any); I mean, in the end it's all about pixels, right?


Solution

As far as I know, this isn't generally possible.

What exactly are you trying to do? There is probably another solution.

EDIT: Taking the comment into account: if all you want to do is general vector calculations on the GPU, try doing them in the pixel shader rather than the vertex shader.

So, for example, say you want to take the dot product of two vectors. First we need to write the data into a texture:

// With SurfaceFormat.Vector4 the texture holds full-precision floats, so the
// data doesn't have to be packed into the 0-1 range that the default Color
// format would require.
Vector4 a = new Vector4(1, 0, 1, 1);
Vector4 b = new Vector4(0, 1, 0, 0);

Texture2D dataTexture = new Texture2D(device, 2, 1, 1,
    TextureUsage.None, SurfaceFormat.Vector4);
dataTexture.SetData<Vector4>(new Vector4[] { a, b });

So now we've got a 2x1 texture with the data in it. Render that texture using a SpriteBatch and an effect:

Effect gpgpu; // the effect shown below, loaded through the content pipeline

// Draw into a Vector4 render target rather than the back buffer, so the
// results can be read back afterwards.
RenderTarget2D target = new RenderTarget2D(device, 2, 1, 1, SurfaceFormat.Vector4);
device.SetRenderTarget(0, target);

gpgpu.CurrentTechnique = gpgpu.Techniques["DotProduct"];
spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
gpgpu.Begin();
gpgpu.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(dataTexture, new Rectangle(0, 0, 2, 1), Color.White);
gpgpu.CurrentTechnique.Passes[0].End();
gpgpu.End();
spriteBatch.End();

device.SetRenderTarget(0, null);

All we need now is the gpgpu effect used above. It's just a standard post-processing shader, looking something like this:

sampler2D DataSampler : register(s0) = sampler_state // s0 is where SpriteBatch binds the texture
{
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 A = tex2D(DataSampler, texCoord);
    float4 B = tex2D(DataSampler, texCoord + float2(0.5, 0)); // 0.5 is the size of 1 pixel: 1 / textureWidth
    float d = dot(A, B);
    return float4(d, 0, 0, 0);
}

technique DotProduct
{
    pass Pass1
    {
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}
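
One design note: the hard-coded 0.5 offset only works because the texture is exactly 2 pixels wide. For wider data textures it's safer to pass the texel size in from C#; a minimal sketch, where TexelWidth is a parameter name invented here, not part of the shader above:

// Assumes the .fx file declares "float TexelWidth;" and samples with
// tex2D(DataSampler, texCoord + float2(TexelWidth, 0)).
gpgpu.Parameters["TexelWidth"].SetValue(1f / dataTexture.Width);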

This will write the dot product of A and B into the first pixel, and the dot product of B and B into the second (thanks to the clamp addressing, the second pixel samples B twice). Then you can read the answers back from the render target, ignoring the useless ones:

Vector4[] v = new Vector4[2];
target.GetTexture().GetData(v); // read from the render target, not the input texture
float dotOfAandB = v[0].X;
float dotOfBandB = v[1].X;

Ta-da! There are a whole load of little issues with trying to do this on a larger scale; comment here and I'll try to help with any you run into :)
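
To give a sense of what larger scale looks like, here's a minimal sketch (same XNA 3.x API as above; the packing scheme and sizes are my own) that dots n pairs of vectors in one pass:

// Pack n (a, b) pairs side by side in one row: a0, b0, a1, b1, ...
int n = 1024;
Vector4[] pairs = new Vector4[n * 2];
// ... fill pairs with the vectors to dot ...

Texture2D input = new Texture2D(device, n * 2, 1, 1,
    TextureUsage.None, SurfaceFormat.Vector4);
input.SetData(pairs);

// Render into an equally sized target, exactly as in the 2x1 case,
// with the texel width now 1f / (n * 2) instead of 0.5.
RenderTarget2D output = new RenderTarget2D(device, n * 2, 1, 1, SurfaceFormat.Vector4);
device.SetRenderTarget(0, output);
// ... same effect + SpriteBatch draw as above ...
device.SetRenderTarget(0, null);

Vector4[] results = new Vector4[n * 2];
output.GetTexture().GetData(results);
// results[2 * i].X is dot(a_i, b_i); the odd pixels are junk to ignore.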

OTHER TIPS

If you turn on the stream output stage, the outputs of your vertex shader are stored in a memory buffer, and those values can later be read back by either the GPU or the CPU as desired. (Note that stream output is a Direct3D 10+ feature, so it isn't available from XNA, which targets Direct3D 9.)
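
For completeness, here's a rough sketch of what stream output looks like from C#. XNA exposes no API for it, so this assumes Direct3D 11 through the SharpDX bindings, with a device, a context, and a geometry shader compiled with a stream-output declaration already set up; the buffer size and draw call are placeholders:

using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Buffer the pipeline streams transformed vertices into.
var soBuffer = new Buffer(device, new BufferDescription
{
    SizeInBytes = 1024 * 16, // room for 1024 float4s, purely illustrative
    BindFlags = BindFlags.StreamOutput | BindFlags.VertexBuffer,
    Usage = ResourceUsage.Default,
});

// CPU-readable destination for the copy-back.
var staging = new Buffer(device, new BufferDescription
{
    SizeInBytes = 1024 * 16,
    Usage = ResourceUsage.Staging,
    CpuAccessFlags = CpuAccessFlags.Read,
});

context.StreamOutput.SetTarget(soBuffer, 0); // route shader output here
// ... bind the stream-output shaders and issue the draw call ...
context.StreamOutput.SetTarget(null, 0);

// Copy into the staging buffer, then map it to read the vertices back.
context.CopyResource(soBuffer, staging);
var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
// ... read the vertex data from box.DataPointer ...
context.UnmapSubresource(staging, 0);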

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow