Question

I have a problem joining two antialiased lines when using a blending mode: I get a dip at the point where they join. By blending mode I mean that I draw my antialiased line by calculating the ratio of line colour vs background colour, so when the ratio for a pixel is, for instance, 70%, the new pixel is 0.7*line colour + 0.3*background colour. My antialiasing function for lines is basically made from an error function (though I suppose the same problem arises for most antialiasing functions), like this:

0.5+0.5erf(-x)

So when two lines meet, one drawn after the other, you get a dip: the joint of the two lines drops to 75% of the intensity it should have, because at that point 50% of the background was kept at the end of the first line, and then 50% of that 50% (i.e. 25%) remained after the second line was drawn, when 0% should be left:

1 - (0.5erfc(-x) * 0.5erfc(x))

I can only assume that it's a common problem in drawing antialiased raster graphics with joined lines, so it must have a common solution, but I have no idea what it is. Thanks!

Also: Just to be clear on how the lines are drawn, across their width the lines have a Gaussian profile (e^(-x*x)) and both ends are rounded off using raised error functions. You can see an example of what a 10 px long horizontal line looks like by entering '0.5erfc(-x-5) * 0.5erfc(x-5) * e^(-y*y)' in WolframAlpha.
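
To put numbers on that dip, here is a minimal Python sketch (using math.erfc purely as a stand-in for the real rendering code, with line colour 1 on a background of 0) that composites the two rounded line ends one after the other:

    import math

    def end_coverage(x):
        # Coverage of one rounded line end at position x along the line,
        # as in the question: 0.5*erfc(x) falls from 1 to 0 around x = 0.
        return 0.5 * math.erfc(x)

    def blend(coverage, line, background):
        # The "blending mode" from the question: mix line colour and background.
        return coverage * line + (1.0 - coverage) * background

    line_colour, background = 1.0, 0.0
    for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
        after_first = blend(end_coverage(x), line_colour, background)    # line 1 ends at x = 0
        after_both = blend(end_coverage(-x), line_colour, after_first)   # line 2 starts at x = 0
        print(x, after_both)    # at x = 0 this prints 0.75: the dip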

Solution 4

Eventually I found the answer to this problem. There's no sensible way to do it by drawing one line after the other directly onto the main image, because you don't want the lines to be blended together; you want them to be added together, and then have the resulting sum of lines blended into the main image.

However, that's unwieldy if you have to draw all these lines to a separate buffer and then blend that whole buffer onto the main buffer, which is what I considered and dismissed as unsuitable before asking this question. Thankfully I've since completely changed my approach: instead of having one buffer on which to draw element after element, I use a per-pixel approach, for the sake of parallelisation (with OpenCL), where each pixel is calculated directly by going through a list of elements to draw. So instead of having to use extra buffers, I simply need a small array that can hold a few extra pixel values, and in my list of elements to draw I have elements that serve as brackets. So, for instance, instead of having:

image => (blend) line1 => (blend) line2 => (blend) line3

I can have:

image => (blend) [0 => (add) line1 => (add) line2 => (add) line3]

This is done by replacing the single pixel value with an array of values, one for each depth level of brackets. In this case v[0] would hold the pixel from image, then v[1] would start at 0 and each line would be added to it, and when all the lines have been added, the closing of the bracket would blend v[1] into v[0], leaving the correct resulting pixel value there.
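
For illustration, a minimal single-channel sketch of that bracket mechanism in Python; the element names and the list structure here are made up (line colour assumed to be 1.0), and in practice this logic would sit in the per-pixel OpenCL code:

    def render_pixel(background, elements):
        # 'elements' is a flat display list of ('open',), ('close',) and
        # ('line', coverage) entries -- hypothetical names, for illustration only.
        v = [background]                # v[0] holds the pixel from the main image
        for e in elements:
            if e[0] == 'open':
                v.append(0.0)           # opening bracket: fresh additive accumulator
            elif e[0] == 'line':
                v[-1] += e[1]           # lines are added, not blended
            elif e[0] == 'close':
                cov = min(v.pop(), 1.0) # clamp the summed coverage
                v[-1] = cov * 1.0 + (1.0 - cov) * v[-1]  # blend the group into the level below
        return v[0]

    # two rounded line ends meeting at this pixel: their halves sum to full coverage
    print(render_pixel(0.0, [('open',), ('line', 0.5), ('line', 0.5), ('close',)]))  # 1.0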

So that's it. It's pretty simple; it's only a problem if you don't allow yourself to use the equivalent of a group of layers in Photoshop.

OTHER TIPS

Drawing good-looking continuous lines composed of blended segments is in general not going to be possible if you think of them as independent segments. Just consider the case of one line followed by a second segment drawn either at the same angle or at a 90 degree angle: the pixel colors of the first line depend on the angle at which it joins the next line.

What you need to think in terms of instead is segments with angled ends.

To draw them, look for literature on miter or bevel line joins (miter is probably easier).
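
For reference, a rough Python sketch of where the miter point lands at a joint (the helper names are mine, and it assumes the segments don't double back on themselves): the miter direction is the normalised sum of the two edge normals, and the miter length grows as the corner sharpens.

    import math

    def normalise(x, y):
        n = math.hypot(x, y)
        return (x / n, y / n)

    def miter_point(p_prev, p_join, p_next, half_width):
        # Unit direction of each segment and its left-hand normal.
        d1 = normalise(p_join[0] - p_prev[0], p_join[1] - p_prev[1])
        d2 = normalise(p_next[0] - p_join[0], p_next[1] - p_join[1])
        n1 = (-d1[1], d1[0])
        n2 = (-d2[1], d2[0])
        # Miter direction is the normalised sum of the two normals; this blows up
        # if the segments double back on themselves, which would need a guard.
        m = normalise(n1[0] + n2[0], n1[1] + n2[1])
        length = half_width / (m[0] * n1[0] + m[1] * n1[1])  # grows as the corner sharpens
        return (p_join[0] + m[0] * length, p_join[1] + m[1] * length)

    # 90 degree corner with half-width 1: the miter point sits sqrt(2) from the joint
    print(miter_point((0, 0), (1, 0), (1, 1), 1.0))  # approximately (0.0, 1.0)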

If you draw the adjoining lines with blending, this is a practically impossible problem.

A good way to think about this is as a distance function to an ideal shape: a pixel's intensity is some function of its distance to the shape. With two ideal lines, that distance would just be the minimum of the two distances.

Unfortunately this means that you need the distance to every line that might influence a pixel. This is what some text rasterizers do.
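
As a sketch of this idea (Python, my own naming, assuming a plain unsigned distance to each segment): take the minimum over all segments first and only then map distance to intensity, so a joint can never dip.

    import math

    def dist_to_segment(px, py, ax, ay, bx, by):
        # Unsigned distance from point (px, py) to the segment (a, b).
        abx, aby = bx - ax, by - ay
        t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
        t = max(0.0, min(1.0, t))
        return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

    def pixel_intensity(px, py, segments, half_width):
        # Map distance to intensity only once, after taking the minimum over all
        # segments, so the joint can never drop below the rest of the line.
        d = min(dist_to_segment(px, py, *s) for s in segments)
        return 0.5 * math.erfc(d - half_width)

    segments = [(0, 0, 10, 0), (10, 0, 10, 10)]   # two joined lines
    print(pixel_intensity(10, 0, segments, 1.0))  # same value as any on-centre pixel: no dip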

Alternatively, you just don't weight lines at all: they are either on or off per sample, and you let supersampling take care of the rest. That is what software vector renderers for formats like Flash or SVG do.
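
A minimal sketch along those lines (Python, assuming a fixed 4x4 sample grid): each sample is a hard in/out test against every shape, and the pixel value is just the fraction of covered samples, so overlapping line ends union together instead of blending.

    def pixel_coverage(px, py, shapes, grid=4):
        # 'shapes' are hard in/out predicates taking (x, y) -> bool, no weighting.
        hits = 0
        for i in range(grid):
            for j in range(grid):
                x = px + (i + 0.5) / grid   # sample at the centre of each sub-cell
                y = py + (j + 0.5) / grid
                if any(inside(x, y) for inside in shapes):
                    hits += 1
        return hits / (grid * grid)

    # two axis-aligned, 2-unit-wide "lines" meeting at x = 10
    line1 = lambda x, y: 0 <= x <= 10 and -1 <= y <= 1
    line2 = lambda x, y: 10 <= x <= 20 and -1 <= y <= 1
    print(pixel_coverage(9.5, -0.5, [line1, line2]))  # 1.0: no seam at the joint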

thang's idea may be a good starting point: in short, control the center of the "brush" instead of its edges. Ideally, with nice round endpoints, this approach would give you nice round corners.

The truth, however, won't be this nice. The problem is that you first alpha blend one line onto your target surface, and then you alpha blend the second line onto that surface, which already has a line "burned in". The end result is a fatter blob at the corner where two translucent pixels are blitted over each other (you can observe this effect in practice if, for example, you try to draw connected line segments in GIMP).

I think this cannot be worked around with this simple one-line-at-a-time approach in this setting (so you would need to go in the direction other answers proposed, using polyline algorithms or supersampling). However, depending on your goals, you may have a viable solution.

That solution is to pre-render your graphic object to a separate surface that has an alpha channel. On this surface you can combine the alphas of the individual lines (for example by taking the larger of the target pixel's alpha and the plotted pixel's alpha), which gives you the intended result (no fat blobs at the corners).
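
A sketch of that combination (Python, single-channel alpha; the helper names are mine): draw the whole object into a separate alpha buffer using max() instead of blending, then do one normal alpha blend onto the target.

    def draw_into_alpha(alpha_buf, line_alphas):
        # Combine into the temporary surface by taking the larger alpha, so two
        # translucent line ends overlapping at a corner can't exceed either one.
        for i, a in enumerate(line_alphas):
            alpha_buf[i] = max(alpha_buf[i], a)

    def blit(target, alpha_buf, line_colour):
        # One proper alpha blend of the finished object onto the main image.
        for i, a in enumerate(alpha_buf):
            target[i] = a * line_colour + (1.0 - a) * target[i]

    target = [0.2] * 3                          # some flat background
    alpha = [0.0] * 3                           # the separate surface, starts empty
    draw_into_alpha(alpha, [0.6, 0.6, 0.0])     # first translucent line
    draw_into_alpha(alpha, [0.0, 0.6, 0.6])     # second line overlaps it at pixel 1
    blit(target, alpha, 1.0)
    print(alpha, target)   # pixel 1 keeps alpha 0.6 instead of the 0.84 that
                           # blending line over line would have burned in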

The drawback is that you need a separate surface which you have to blit onto your target when the object is complete: this costs both extra memory and processing time.

You may be able to work around this if you only need to render onto a flat (single-colour) target: then you don't necessarily need to perform proper alpha blending, and can do the alpha-combining calculations in place. This is workable whenever the background is easy to calculate (such as a coordinate grid), that is, whenever you can easily recover the original background value for a pixel and combine against that. (It also works if you keep the background you render over in a separate surface, but then you have another surface in memory again, so probably nothing is gained.)

If your problem is of some other nature, it may also be workable to keep these separately rendered surfaces around: essentially, you pre-render your objects made of lines and later only use them as textures or tiles.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow