Question

Usually in computer science when I have something from a to b the interval is [a, b). Is this true when rasterizing geometric primitives?

For example, when I have a line that starts at position (0, 0) and ends at position (0, 10), will the line contain point (0, 10) when using parallel projection with 1 GPU unit mapped to 1 pixel on screen?

EDIT: Same question in the same conditions, but for textures:

If I have a 2x2 texture mapped on a quad from (0, 0) to (2, 2) using (0, 0) to (1, 1) texture coordinates, will it be "pixel perfect", with one texel mapped to one pixel on screen, or will the texture be scaled? If the interval is [0, 2] the quad will be 3x3 and the texture will have to be scaled...

LATER EDIT: This might help: http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690%28v=vs.85%29.aspx


Solution

First of all, this depends entirely on the particular rasterization framework used (e.g. OpenGL, Direct3D, GDI, ...), so I'll base this answer on the question being tagged opengl.

This is not such an easy question, because the actual window coordinates of a drawn primitive (or rather of its fragments) are usually not integers but floating-point or fixed-point values, resulting from a series of (possibly inexact) transformations, not all of which can be customized to identity using shaders (especially the fixed-function viewport transformation from normalized device coordinates to window coordinates). So even if you configure your transformation pipeline to specify the vertex coordinates directly in window space, don't expect your resulting fragments' window coordinates to be perfect integers in all cases. Take a look at this question and its answers for more insight into OpenGL's transformation pipeline, if needed.
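For reference, the fixed-function viewport transform along one axis, as configured by glViewport(x, y, w, h), maps NDC [-1, 1] to window coordinates like this (a minimal 1D sketch, ignoring depth; the function name is illustrative):

```c
#include <assert.h>

/* Viewport transform of one axis: maps an NDC coordinate in
 * [-1, 1] to window coordinates [vp_origin, vp_origin + vp_size],
 * modelling the fixed-function stage set up by glViewport(). */
static double ndc_to_window(double ndc, double vp_origin, double vp_size)
{
    return vp_origin + (ndc + 1.0) * 0.5 * vp_size;
}
```

Note that the inputs to this stage already carry the rounding errors of all preceding transformations, which is why the resulting window coordinates are rarely exact integers.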

Then it depends on the primitive type. For a polygon (triangle/quad), the rasterizer checks for each fragment whether the fragment's center lies within the polygon boundary as defined by the polygon vertices' window coordinates. So if we have a rectangle (well, two triangles, but let's just take it as a rectangle) spanning window coordinates (0,0) to (2,2), it covers a 2x2 region: only the fragments (0,0), (1,0), (0,1) and (1,1), whose centers range from (0.5,0.5) to (1.5,1.5), lie within the rectangle.
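As a sketch of that coverage rule (simplified: real OpenGL additionally has tie-breaking rules for centers lying exactly on an edge), we can just count the pixel centers inside an axis-aligned rectangle:

```c
#include <assert.h>

/* Count pixels whose centers (x+0.5, y+0.5) fall inside the
 * axis-aligned rectangle [x0,x1) x [y0,y1). This is a simplified
 * model of the polygon coverage rule; the half-open test stands
 * in for OpenGL's edge tie-breaking. */
static int covered_pixels(double x0, double y0, double x1, double y1,
                          int win_w, int win_h)
{
    int count = 0;
    for (int y = 0; y < win_h; ++y)
        for (int x = 0; x < win_w; ++x) {
            double cx = x + 0.5, cy = y + 0.5;
            if (cx >= x0 && cx < x1 && cy >= y0 && cy < y1)
                ++count;
        }
    return count;
}
```

For the rectangle from (0,0) to (2,2) this yields exactly 4 covered pixels, i.e. the 2x2 region.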

For lines it is a bit more complicated, using the so-called "diamond exit rule". I won't discuss this here in detail, but for horizontal or vertical lines it essentially means that the last pixel is exclusive, too. In fact this also means that integer window coordinates are the worst choice for lines: due to rounding issues in the transformation pipeline it is hard to decide which pixel such a fragment belongs to, since the "decision threshold" for lines lies at the integer fragment boundaries, rather than at the fragment center as it does for polygons.
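For an exactly vertical line with half-pixel endpoints, the diamond-exit rule boils down to a simple half-open test, which the following sketch models (this is not the full specification, which also covers slanted lines and tie-breaking):

```c
#include <assert.h>

/* Simplified model of the diamond-exit rule for an exactly
 * vertical line at half-pixel x: pixel row y is produced when
 * its center ordinate y + 0.5 lies in [y0, y1). The endpoint
 * pixel is exclusive because the line ends inside its diamond
 * without exiting it. */
static int vertical_line_covers_row(double y0, double y1, int y)
{
    double cy = y + 0.5;
    return cy >= y0 && cy < y1;
}
```

So a line from (0.5, 0.5) to (0.5, 10.5) produces rows 0 through 9, but not row 10, matching the half-open interval from the question.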

But when considering texturing, another problem comes into play, namely interpolation. While a quad from (0,0) to (2,2) would cover a 2x2 pixel region, the values of the varyings (the vertex attributes interpolated across the primitive, like the color or texture coordinates) are interpolated from the actual window coordinates. Thus the pixel (0,0) (corresponding to the fragment with center coordinate (0.5,0.5)) won't have the exact values of the lower-left quad vertex, but values interpolated half a fragment into the interior (and likewise for the other corners).

What also matters then is how OpenGL filters textures. When using linear filtering, the exact texel color is returned for texture coordinates at texel centers (i.e. (i+0.5)/size), while integer divisors of the texture size (i.e. i/size) result in a half-way blend between neighbouring texel colors. When using nearest-neighbour sampling instead (which is advisable when trying to do something pixel- or texel-accurate), the floating-point texture coordinate is rounded down (a floor operation), so the "decision threshold" between one texel and its neighbour lies at the texel borders (i.e. at integer divisors of the texture size). So sampling at texel centers is advisable both with linear filtering (which is in turn not advisable when working pixel-exact) and with nearest filtering, since that reduces the chance of "flipping" from one texel to another due to inexactness and rounding errors in the texture coordinate interpolation.


So let's look at your particular example. We have a quad with coordinates

(0,2) - (2,2)
  |       |
(0,0) - (2,0)

and texture coordinates

(0,1) - (1,1)
  |       |
(0,0) - (1,0)

So if those positions are already given in window/viewport space, this results in the fragments with centers

(0.5,1.5) (1.5,1.5)
(0.5,0.5) (1.5,0.5)

covered, i.e. the 2x2 pixel square from pixel (0,0) to (1,1). The texture coordinates of those fragments after interpolation are

(0.25,0.75) (0.75,0.75)
(0.25,0.25) (0.75,0.25)

And for a 2x2 texture those are indeed at the texel centers. So everything plays out nicely. There could be rounding and precision errors resulting in e.g. coordinates of 1.999 or texture coordinates of 0.255, but this is still not a problem, since we're far from the points where we would "snap over" to the neighbouring pixels or texels (assuming we use nearest filtering; but even with linear filtering you wouldn't usually notice a difference from the exact texel color).
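The interpolation above can be double-checked with a small sketch: linearly interpolating the texture coordinate from the quad corners to the fragment centers lands exactly on the texel centers (i+0.5)/size of a 2x2 texture (the function name is illustrative, specialized to this example):

```c
#include <assert.h>

/* Linear interpolation of one texture coordinate axis across
 * the example quad: window span [0, 2] maps to texcoord span
 * [0, 1], so at a fragment center c the interpolated value is
 * (c - 0) / (2 - 0) * (1 - 0) + 0 = c / 2. */
static double interp_texcoord(double center)
{
    return (center - 0.0) / (2.0 - 0.0) * (1.0 - 0.0) + 0.0;
}
```

The fragment centers 0.5 and 1.5 yield texture coordinates 0.25 and 0.75, which for size = 2 are exactly the texel centers (0+0.5)/2 and (1+0.5)/2.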

For the line example, though, it is hard to say, due to precision, rounding and implementation issues, whether it will run from (0,0) to (0,9), or from (-1,0) to (-1,9) (and thus be clipped away), or even come out skewed. You should rather use (0.5,0.5) and (0.5,10.5) as endpoints, which will definitely result in a line from (0,0) to (0,9).


So to sum up, OpenGL is not really made for completely pixel exact operations, but with some care it can be achieved. But to achieve the best results, you should first configure your transformations to specify the vertex positions directly in window coordinates, e.g.

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1); //or an equivalent orthographic matrix when using shaders

Then for polygons use integer positions, and integer divisors of the texture size as texture coordinates (with GL_NEAREST filtering). But for lines use half-pixel positions (i.e. i+0.5) and texel centers as texture coordinates (i.e. (i+0.5)/size). This should give you pixel- and texel-exact rasterization of your primitives and the half-open intervals described in your question. But always keep in mind that in this case the corner pixels of a rasterized polygon don't match its vertex corners (they are shifted half a pixel inward). For texturing this plays out nicely with the filtering rules, but for other attributes, like colors, it means the lower-left pixel of a rectangle won't have exactly (whatever "exactly" means in this limited-precision context anyway) the color of the lower-left vertex. For lines, though, they will indeed match (as far as possible).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow