Question

The performance of a Direct3D application seems to be significantly better in full screen mode compared to windowed mode. What are the technical reasons behind this?

I guess it has something to do with the fact that a full screen application can gain exclusive control of the display. But why can't the application gain exclusive control of just part of the screen (i.e. a window) and get the same performance benefits?

Solution

Here's the CliffsNotes version of how things work under the hood.

The monitor always needs to be associated with a so-called primary surface in order to display anything; i.e. the video card can only scan out of one surface in video memory.

When an application is fullscreen (and everything was set up correctly to enable flipping), the primary surface is just one of the application's back buffers, and it is flipped to another back buffer every frame. This is the most efficient way of presenting to the screen, but it requires the application to own the entire monitor area (i.e. the entire primary surface).
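
For concreteness, here is a minimal sketch of how a Direct3D 9 application would ask for that kind of exclusive fullscreen flip chain. The window handle, resolution and format below are placeholders, not values the runtime requires:

    #include <d3d9.h>

    // Sketch: requesting an exclusive fullscreen flip chain in Direct3D 9.
    // hWnd is assumed to be a valid top-level window created elsewhere.
    IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed             = FALSE;                 // exclusive mode: the app owns the primary surface
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD; // lets the driver present by flipping
    pp.BackBufferCount      = 2;                     // something to flip to
    pp.BackBufferWidth      = 1920;                  // placeholder display mode
    pp.BackBufferHeight     = 1080;
    pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
    pp.hDeviceWindow        = hWnd;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE; // flip on vblank

    IDirect3DDevice9* device = nullptr;
    d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                       D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
    // Each Present() can now be a flip of the primary surface instead of a
    // copy into a surface owned by someone else.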

When there's no fullscreen application and the DWM is off, the primary surface is owned by the OS, and every windowed application performs a blit from its back buffer to the primary surface. This blit takes some GPU time to complete (as do the blits from the other applications visible on the screen), so it's not as efficient as fullscreen presentation. XP worked that way.
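
For contrast, the windowed counterpart of the setup above looks roughly like this (again only a sketch, with hWnd assumed); on XP, Present() then ends up as a copy into the OS-owned primary surface rather than a flip:

    // Sketch: windowed presentation, XP-era (no DWM).
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;               // the OS keeps ownership of the primary surface
    pp.SwapEffect       = D3DSWAPEFFECT_COPY; // the back buffer is blitted, not flipped
    pp.BackBufferCount  = 1;                  // COPY requires exactly one back buffer
    pp.BackBufferFormat = D3DFMT_UNKNOWN;     // use the current desktop format
    pp.hDeviceWindow    = hWnd;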

When the DWM is composing the screen, things get even more complicated. Here, the DWM owns the primary surface and needs to draw the application windows onto it. To make that possible, every window has an associated surface holding its contents, called the redirection surface (which is what allows the DWM to do window ghosting, glass effects, and all that good stuff). Every time a D3D application presents a frame, it adds a blit to its redirection surface. So several blits need to happen: a blit to the redirection surface by the app, then a blit from the redirection surface to the primary by the DWM, which is, again, some overhead compared to fullscreen.
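
You can detect from an application which of these paths you are on; a minimal sketch using dwmapi, with error handling kept to the bare minimum:

    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    // Sketch: is the DWM composing the desktop? If yes, a windowed D3D app
    // presents into its redirection surface and the DWM blits that into the
    // primary surface; if no (XP, or Vista/7 with Aero off), the app's
    // Present() blits straight into the OS-owned primary surface.
    bool IsDesktopComposited()
    {
        BOOL enabled = FALSE;
        return SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled;
    }

On Windows 7 and earlier, an exclusive fullscreen app effectively gets desktop composition switched off for it, which is why the extra blits disappear in that mode.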

Note that all of this additional work happens on the GPU, so it doesn't affect CPU performance.

Further reading:

http://blogs.msdn.com/greg_schechter/archive/2006/03/19/555087.aspx

http://blogs.msdn.com/greg_schechter/archive/2006/05/02/588934.aspx

http://blogs.msdn.com/greg_schechter/archive/2006/03/05/544314.aspx

OTHER TIPS

There's a bit on MSDN that says full screen mode uses buffer flipping, if set up correctly, as opposed to blitting. It makes sense.
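
In DXGI (Direct3D 10/11) terms, the switch between the two paths is explicit. A hedged sketch, assuming swapChain is an IDXGISwapChain* you obtained when creating the device:

    #include <dxgi.h>

    // Sketch: toggling exclusive fullscreen in DXGI. In the fullscreen state
    // a Present() can flip the primary surface; in the windowed state it goes
    // through a blit/composition path instead.
    void EnterExclusiveFullscreen(IDXGISwapChain* swapChain)
    {
        swapChain->SetFullscreenState(TRUE, nullptr); // nullptr = let DXGI pick the output
    }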

Of course you can (and in a way, do) give an application exclusive control over part of the screen, but what happens to the rest of the screen? You still have to blit, do occlusion checking, etc. for the rest of the windows, and I think that's what causes the performance hit.

I'll add to @aib's answer that the rest of the screen is being managed by the OS. So, if anything else needs to be drawn or updated simultaneously, there has to be a performance hit.

For example, if you have a video playing in Windows Media Player in one window and then start Civilization in another, then when Civ starts doing its fancy graphics it will need to share screen space with everything else (like the video).

Whereas if the DirectX app has the full screen to itself, everything else might still be "updating" or "playing", but it isn't being drawn.

Basically, the video hardware is completely dedicated to the exclusive-mode application.

There is no contention for video resources (pipeline, texture memory, etc.).

In particular, texture upload can be a big bottleneck. The less of it you have to do (because you have all the video memory to yourself), the better.
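
To illustrate that last point, here is a sketch (with a placeholder format and made-up layout) of the cheap pattern: fill a texture once at load time so the pixels only cross the bus once, instead of re-uploading them every frame:

    #include <d3d9.h>
    #include <cstring>

    // Sketch: create a managed D3D9 texture once at load time. The runtime
    // uploads it to video memory and keeps it resident, so the per-frame cost
    // is just binding it, not re-sending the pixels.
    IDirect3DTexture9* CreateStaticTexture(IDirect3DDevice9* device,
                                           const void* pixels, UINT width, UINT height)
    {
        IDirect3DTexture9* tex = nullptr;
        if (FAILED(device->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8,
                                         D3DPOOL_MANAGED, &tex, nullptr)))
            return nullptr;

        D3DLOCKED_RECT lr;
        if (SUCCEEDED(tex->LockRect(0, &lr, nullptr, 0)))
        {
            for (UINT y = 0; y < height; ++y)  // copy row by row: Pitch may differ from width * 4
                memcpy(static_cast<BYTE*>(lr.pBits) + y * lr.Pitch,
                       static_cast<const BYTE*>(pixels) + y * width * 4,
                       width * 4);
            tex->UnlockRect(0);
        }
        return tex;
    }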

Licensed under: CC-BY-SA with attribution