Question

In a real-time graphics application, I believe a frame buffer is the memory that holds the final rasterised image that will be displayed for a single frame.

References to deep frame buffers seem to imply there's some caching going on (vertex and material info), but it's not clear what this data is used for, or how.

What specifically is a deep frame buffer in relation to a standard frame buffer, and what are its uses?

Thank you.


Solution

Google is your friend.

It can mean two things:

  1. You're storing more than just RGBA per pixel. For example, you might be storing normals or other lighting information so you can do re-lighting later.

  2. You're storing more than one color and depth value per pixel. This is useful, for example, to support order-independent transparency.
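The two meanings above can be sketched as data layouts. This is a minimal illustration, not any real API; the class and field names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class GBufferPixel:
    # Meaning 1: extra channels per pixel (normals, etc.) so lighting
    # can be recomputed ("re-lit") after rasterisation.
    rgba: tuple
    normal: tuple
    depth: float

@dataclass
class DeepPixel:
    # Meaning 2: more than one color/depth sample per pixel,
    # kept sorted nearest-first.
    samples: list = field(default_factory=list)  # (depth, rgba) pairs

    def insert(self, depth, rgba):
        self.samples.append((depth, rgba))
        self.samples.sort(key=lambda s: s[0])

p = DeepPixel()
p.insert(2.0, (1, 0, 0, 0.5))
p.insert(1.0, (0, 1, 0, 0.5))
# p.samples now holds both fragments, nearest first
```

Keeping the per-pixel sample list depth-sorted is what makes order-independent transparency possible: fragments can arrive in any order and still be blended correctly.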

Other tips

A z-buffer is similar to a color buffer, which is usually used to store the "image" of a 3D scene, but instead of storing color information (in the form of a 2D array of RGB pixels), it stores the distance from the camera to the object visible through each pixel of the framebuffer.

Traditionally, a z-buffer stores only the distance from the camera to the nearest object in the 3D scene for any given pixel in the frame. The good thing about this technique is that if two images have been rendered along with their z-buffers, they can be recomposed later in a 2D program: pixels from image A that are "in front" of the corresponding pixels from image B are composed on top in the recomposed image. To decide which pixels are in front, we use the information stored in the images' respective z-buffers. For example, imagine we want to compose pixels from images A and B at pixel coordinates (100, 100). If the distance (z value) stored in the z-buffer at coordinates (100, 100) is 9.13 for image A and 5.64 for image B, then in the recomposed image C, at pixel coordinates (100, 100), we take the pixel from image B (because it corresponds to a surface in the 3D scene that is in front of the object visible through that pixel in image A).
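This z-buffer compositing rule can be written down directly. A minimal sketch (the function name and flat-list image layout are illustrative assumptions):

```python
def composite_with_z(color_a, z_a, color_b, z_b):
    """Per pixel, keep the color whose z (distance from camera) is smaller,
    i.e. the surface nearer to the camera wins."""
    out_color, out_z = [], []
    for ca, za, cb, zb in zip(color_a, z_a, color_b, z_b):
        if za <= zb:
            out_color.append(ca)
            out_z.append(za)
        else:
            out_color.append(cb)
            out_z.append(zb)
    return out_color, out_z

# The example from the text: at pixel (100, 100), image A stores z = 9.13
# and image B stores z = 5.64, so the recomposed image takes B's pixel.
color, z = composite_with_z(["A"], [9.13], ["B"], [5.64])
# color == ["B"], z == [5.64]
```

Note that the output also keeps a z-buffer, so the recomposed image can itself be composited against further renders.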

Now this works great when objects are opaque, but not when they are transparent. So when objects are transparent (such as when we render volumes, clouds, or layers of transparent surfaces), we need to store more than one z value per pixel. Also note that opacity changes as the density of the volumetric object or the number of transparent layers increases. In short, a deep image or deep buffer is technically just like a z-buffer, but rather than storing a single depth (z) value per pixel, it stores several, along with the opacity of the object at each of these depth values.

Once we have stored this information, it is possible in post-production to properly (that is, accurately) recompose two or more images together with transparency. For instance, if you render two clouds and these clouds overlap in depth, their visibility will be properly recomposed as if they had been rendered together in the same scene.
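For a single pixel, that recomposition amounts to merging the two images' sample lists by depth and accumulating them front to back with the standard "over" operator. A sketch under the assumption that each sample is a `(z, color, alpha)` tuple with premultiplied color (the function name is invented for illustration):

```python
def deep_over(samples_a, samples_b):
    """Merge two per-pixel deep sample lists by depth and flatten them
    front to back with the "over" operator.

    Each sample is (z, color, alpha), color premultiplied by alpha.
    Returns the final (color, alpha) for the pixel.
    """
    merged = sorted(samples_a + samples_b, key=lambda s: s[0])
    color, alpha = 0.0, 0.0
    for _, c, a in merged:
        # Each sample is attenuated by the accumulated opacity in front of it.
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return color, alpha

# Two overlapping "clouds" whose samples interleave in depth: sorting by z
# before blending gives the same result as rendering them in one scene.
cloud_a = [(1.0, 0.4, 0.5), (3.0, 0.2, 0.25)]
cloud_b = [(2.0, 0.3, 0.5)]
c, a = deep_over(cloud_a, cloud_b)
# c == 0.6, a == 0.8125
```

The key point is the sort: a flat image with one z per pixel cannot interleave samples from two renders, which is exactly why the deep representation is needed here.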

Why would we use such a technique at all? Often because rendering scenes containing volumetric elements is generally slow. It is therefore useful to render them separately from the other objects in the scene, so that if you need to make tweaks to the solid objects, you do not need to re-render the volumetric elements.

This technique was mostly popularised by Pixar, in the renderer they develop and sell (PRMan). Avatar (Weta Digital, NZ) was one of the first films to make heavy use of deep compositing.

See: http://renderman.pixar.com/resources/current/rps/deepCompositing.html

The cons of this technique: deep images are very heavy. They require storing many depth values per pixel (and these values are stored as floats). It is not uncommon for such images to range from a few hundred megabytes to a couple of gigabytes, depending on the image resolution and the scene's depth complexity. Also, while you can recompose volumetric objects properly, they will not cast shadows on each other, which you would get if you rendered the objects together in the same scene. This makes scene management slightly more complex than usual, but it is generally dealt with properly.

A lot of this information can be found on scratchapixel.com (for future reference).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow