Question

I have implemented a YUV-to-RGB conversion in a fragment shader written in Nvidia's shader language. (Y, U, and V are stored in separate textures that are combined via multitexturing in my fragment shader.) It works great under OpenGL, but under Direct3D I just can't get the output image to look right. I'm starting to suspect that Direct3D is somehow modifying the Y, U, and V samples before I get a chance to do my YUV conversion. Does anyone know whether Direct3D makes any modifications to the values stored in textures before the fragment shader runs, and if so, how to disable them?


Solution

We figured it out. :) Basically, the problem was that while our YUV-to-RGB equations were correct, we weren't properly sampling the V data, so no amount of futzing with the equations would have helped!

In the end, I would recommend the following strategy for anyone attempting to do this:

1) Set R, G, and B to the value from Y. You should get a grayscale image (as Y contains just luminance).
2) Next, set R, G, and B to U. You should get funny colors!
3) Finally, set R, G, and B to V. Again, you should get funny colors.
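Here is a minimal sketch of that debug pass, written as a Cg-style fragment shader. The sampler names (yTex, uTex, vTex) and the assumption that each plane sits in its texture's red channel are mine, not from the original post; adjust them to match your setup.

    // Debug pass: visualize one plane at a time as grayscale.
    // Swap the sampled texture (yTex, uTex, vTex) to check each plane in turn.
    float4 main(float2 texCoord : TEXCOORD0,
                uniform sampler2D yTex,
                uniform sampler2D uTex,
                uniform sampler2D vTex) : COLOR
    {
        // Step 1: show the Y plane. Use uTex or vTex here for steps 2 and 3.
        float p = tex2D(yTex, texCoord).r;
        return float4(p, p, p, 1.0);
    }

If step 1 doesn't give a clean grayscale image, the problem is in how the planes are uploaded or sampled, not in the conversion math.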

Also, properly normalizing the values is critical. Check out fourcc.org for a discussion of proper YUV normalization.
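For reference, here is a sketch of the full conversion using the video-range (BT.601) normalization described on fourcc.org. As above, the sampler names and the red-channel plane layout are assumptions; this is not the original poster's exact shader.

    // YUV -> RGB conversion (BT.601, video-range inputs), sketched in Cg.
    // Assumes each plane is stored in the red channel of its own texture.
    float4 main(float2 texCoord : TEXCOORD0,
                uniform sampler2D yTex,
                uniform sampler2D uTex,
                uniform sampler2D vTex) : COLOR
    {
        float y = tex2D(yTex, texCoord).r;
        float u = tex2D(uTex, texCoord).r;
        float v = tex2D(vTex, texCoord).r;

        // Normalize: Y covers [16..235]/255, U and V are centered on 128/255.
        y = 1.164 * (y - 16.0 / 255.0);
        u = u - 128.0 / 255.0;
        v = v - 128.0 / 255.0;

        // fourcc.org-style BT.601 coefficients.
        float3 rgb;
        rgb.r = y + 1.596 * v;
        rgb.g = y - 0.391 * u - 0.813 * v;
        rgb.b = y + 2.018 * u;

        return float4(saturate(rgb), 1.0);
    }

Note that the chroma textures are sampled at the same normalized coordinates as Y; if the chroma planes are subsampled, their smaller texture dimensions and the sampler's filtering handle that automatically.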

OTHER TIPS

The only suggestion that comes to mind is that the textures are in an inappropriate format (low-precision or compressed).

Can you describe in what way the output looks wrong? Any chance of a right vs wrong screenshot?

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow