Question

Today I removed my dependence on OpenTK; my application no longer relies on OpenTK for creating an OpenGL context. However, my code no longer functions acceptably when running under Mesa 3D (the software-rendered implementation of OpenGL): it runs, but at about 0.001 FPS (compared to roughly 16+ FPS using OpenTK's context creation), and content that is normally drawn to an FBO is shown on the window, piece by piece, as it is composed. Running my code without Mesa 3D gives normal performance (on Windows 7), but I'm worried it may just be coincidence that it works well. glGetError checks show no errors, which makes me think perhaps I'm doing something wrong in my context creation?

m_controlHandle = m_winForm.Handle; /* HWND */
m_controlDC = Win32.GetDC(m_controlHandle); /* HWND's DC*/
Win32.PixelFormatDescriptor pixelFormat = new Win32.PixelFormatDescriptor();
pixelFormat.Size = (short)Marshal.SizeOf(typeof(Win32.PixelFormatDescriptor));
pixelFormat.Version = 1;
pixelFormat.Flags =
    Win32.PixelFormatDescriptorFlags.DRAW_TO_WINDOW |
    Win32.PixelFormatDescriptorFlags.SUPPORT_OPENGL |
    Win32.PixelFormatDescriptorFlags.DOUBLEBUFFER;
pixelFormat.PixelType = Win32.PixelType.RGBA;
pixelFormat.ColorBits = 32;
pixelFormat.DepthBits = 0; /* yes, I don't use a depth buffer; 2D sprite game */
pixelFormat.LayerType = Win32.PixelFormatLayerType.MAIN_PLANE;
int formatCode = Win32.ChoosePixelFormat(m_controlDC, ref pixelFormat);
if (formatCode == 0)
    throw new Win32Exception(Marshal.GetLastWin32Error());
if (!Win32.SetPixelFormat(m_controlDC, formatCode, ref pixelFormat))
    throw new Win32Exception(Marshal.GetLastWin32Error());
m_openGLContext = Win32.wglCreateContext(m_controlDC);
if (m_openGLContext == IntPtr.Zero)
    throw new Win32Exception(Marshal.GetLastWin32Error());
if (!Win32.wglMakeCurrent(m_controlDC, m_openGLContext))
    throw new Exception("Could not wglMakeCurrent.");

Is this correct? Any suggestions for tracking down what might be causing Mesa 3D to suddenly go nuts?


Solution 2

Ok... I hope no one ever has to go through what I did trying to figure this out, so here's the solution:

Although OpenTK will attempt to use wglChoosePixelFormatARB, that fails on Mesa 3D and OpenTK falls back to ChoosePixelFormat. However, the function that OpenTK calls ChoosePixelFormat actually DllImports wglChoosePixelFormat, NOT ChoosePixelFormat.

Yes, there are two versions of ChoosePixelFormat: one prefixed with wgl and one not prefixed. From OpenGL.org documentation:

On the Win32 platform a number of platform specific function calls are duplicated in the OpenGL ICD mechanism and the GDI. This may cause confusion as they appear to be functionally identical, the only difference being whether wgl precedes the rest of the function name. To ensure correct operation of OpenGL use ChoosePixelformat, DescribePixelformat, GetPixelformat, SetPixelformat, and SwapBuffers, instead of the wgl equivalents, wglChoosePixelformat, wglDescribePixelformat, wglGetPixelformat, wglSetPixelformat, and wglSwapBuffers. In all other cases use the wgl function where available. Using the five wgl functions is only of interest to developers run-time linking to an OpenGL driver. Not using the functions as described may result in a black OpenGL window, or a correctly functioning application in Windows 9x that produces a black OpenGL window on Windows NT/2000.

As I'm attempting to run-time link to an OpenGL driver (Mesa 3D), the five wgl functions are of interest to me. Once I replaced my ChoosePixelFormat, SetPixelFormat, and SwapBuffers with wglChoosePixelFormat, wglSetPixelFormat, and wglSwapBuffers, Mesa 3D worked excellently! Mystery solved.
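For reference, the distinction comes down to which DLL each entry point is imported from: the unprefixed functions live in gdi32.dll and route through GDI, while the wgl-prefixed ones are exported by opengl32.dll and go to the run-time-linked driver. A minimal sketch of the two P/Invoke declarations (signatures simplified, the PixelFormatDescriptor wrapper type assumed from the question's code):

```csharp
// GDI version: sees GDI's (software) pixel formats.
[DllImport("gdi32.dll", SetLastError = true)]
static extern int ChoosePixelFormat(IntPtr hdc, ref PixelFormatDescriptor pfd);

// WGL version: exported by opengl32.dll, dispatched to the OpenGL driver.
[DllImport("opengl32.dll", SetLastError = true)]
static extern int wglChoosePixelFormat(IntPtr hdc, ref PixelFormatDescriptor pfd);
```

The two declarations are otherwise interchangeable at the call site, which is why the mixup is so easy to miss.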

OTHER TIPS

You have a major flaw in your pixel format selection logic if you want to avoid a software implementation. Recall (or learn for the first time) that WGL uses a pattern-matching heuristic that looks for the set of all pixel formats that minimally satisfy your requested parameters. The more requested parameters you leave set to 0, the more ambiguity remains when it comes time to resolve the "best" (closest) match.

If you want to understand why a 0-bit depth buffer combined with a 32-bit color buffer could be a bad idea, you might enumerate all of the pixel formats offered by your display driver and inspect which ones are fully hardware-accelerated (they will not have the PFD_GENERIC_FORMAT or PFD_GENERIC_ACCELERATED flag set). There is software that will do this for you, though I cannot think of any off the top of my head - be prepared to look through a list of hundreds of pixel formats if you actually decide to do this...

Many oddball combinations of parameters are implemented in software (GDI) but not hardware. One of the best bit-depths to try in order to get a hardware format is 32-bit RGBA, 24-bit Depth, 8-bit Stencil. On modern hardware this is almost always what the driver will pick as the "closest" match for most sane combinations of input parameters. The closer you get to picking an exact hardware pixel format, the more likely you are to keep Win32 / WGL from giving you a GDI pixel format.


Based on my own observations:

I have had to manually enumerate pixel formats and override the behavior of ChoosePixelFormat (...) to work-around driver bugs in the distant past; the pattern-matching behavior does not always work in your favor.

In the case of OpenTK, I suspect it actually uses wglChoosePixelFormatARB (...), which is an altogether more sophisticated interface for selecting pixel formats (and necessary in order to support multisample anti-aliasing). wglChoosePixelFormatARB is implemented by the Installable Client Driver (ICD), so it never tries to match input parameters against GDI pixel formats.

I suspect that GDI provides a 32-bit RGBA + 0-bit Z pixel format but your ICD does not. Since the closest match always wins, and ChoosePixelFormat sees GDI pixel formats, you can see why this is an issue.

You should try 32-bit RGBA (24-bit Color, 8-bit Alpha) + 24-bit Z + 8-bit Stencil and see if this improves your performance...
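Applied to the question's snippet, that suggestion only changes the depth/stencil fields, assuming the Win32.PixelFormatDescriptor wrapper exposes a StencilBits field mirroring the native cStencilBits:

```csharp
pixelFormat.ColorBits = 32;   // 24-bit RGB + 8-bit alpha
pixelFormat.DepthBits = 24;   // even if unused, helps match a hardware format
pixelFormat.StencilBits = 8;  // pairs with 24-bit depth in most ICD formats
```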


Update:

There is another issue that might be tripping up some stupider drivers. ColorBits is supposed to be the number of RGB bits in an RGBA pixel format (24 generally). AlphaBits is supposed to be the number of A bits (8 generally). Many drivers will see 32-bit ColorBits combined with 0 AlphaBits and understand the implied behavior (24-bit RGB + 8-bit Padding), but the way you have your code written could be troublesome. Technically, this sort of pixel format would be called RGBX8 (where the X indicates unused bits).

24 ColorBits and 8 AlphaBits might give more portable results.
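In terms of the question's snippet, that would look like the following (field names assumed from the wrapper, matching the native cColorBits/cAlphaBits):

```csharp
pixelFormat.ColorBits = 24;  // R+G+B only, excluding alpha
pixelFormat.AlphaBits = 8;   // alpha bits requested explicitly
```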

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow