Question

I have this algorithm that calculates the Mandelbrot value of a point (x0, y0) (x0 and y0 are somewhere between -1 and 1, I think; not very important). This works fine as long as scale doesn't get too big, but at higher values of scale the returned values become very inaccurate and my graphical output starts to break up. How do I predict at what value of scale this occurs?

    public static byte CalculateMandelbrot(double x0, double y0,double scale)
    {
        x0 /= scale;
        y0 /= scale;
        double y = 0;
        double x = 0;
        byte i = 0;
        while (x * x + y * y < 4)
        {
            double tx = x;
            x = x * x - y * y + x0;
            y = 2 * tx * y + y0;
            i++;
            if (i == 0xFF) break;
        }

        return i;
    }

Solution

A double has 53 bits of precision, which amounts to about 15-16 significant decimal digits.
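
A quick way to see that limit in C# (a minimal sketch; the output follows directly from IEEE 754 rounding of a 53-bit significand):

    using System;

    class DoublePrecisionDemo
    {
        static void Main()
        {
            // 2^-53 falls below the last bit of the 53-bit significand of 1.0,
            // so adding it is lost to rounding; 2^-52 (about 2.2e-16) survives.
            Console.WriteLine(1.0 + Math.Pow(2, -53) == 1.0); // True
            Console.WriteLine(1.0 + Math.Pow(2, -52) == 1.0); // False
        }
    }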

If you zoom in on your fractal 10^13 times and render a picture of 1000x1000 pixels, the precision is about the same as the screen resolution: 10^13 zoom times 10^3 pixels is 10^16, so the smallest change representable in a double corresponds to a step of about one pixel on the screen.
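
To put a number on that threshold, here is a rough sketch assuming a 1000-pixel-wide image and the question's convention of dividing coordinates of magnitude around 1 by scale; it compares the coordinate distance between adjacent pixels with the rounding granularity of the magnitude-1 values that appear during the iteration (the ULP of a double near 1.0):

    using System;

    class ScaleLimit
    {
        // Distance from x to the next larger representable double (its ULP).
        static double Ulp(double x)
        {
            long bits = BitConverter.DoubleToInt64Bits(x);
            return BitConverter.Int64BitsToDouble(bits + 1) - x;
        }

        static void Main()
        {
            const int widthInPixels = 1000;  // assumed image width
            double ulpNearOne = Ulp(1.0);    // about 2.2e-16

            // The visible region spans roughly 2.0 / scale, so one pixel covers
            // (2.0 / scale) / widthInPixels in coordinate space. Trouble starts
            // once that step shrinks to the size of a double's ULP.
            double criticalScale = 2.0 / (widthInPixels * ulpNearOne);
            Console.WriteLine($"Precision runs out around scale ~ {criticalScale:E1}"); // roughly 9e12
        }
    }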

But you will get into trouble before that, because you iterate the Mandelbrot formula up to a few hundred times on the same number. Each iteration adds a roundoff error (probably several) of about 1/10^16. It is possible (although tedious) to predict when this becomes noticeable.
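
One way to make that prediction empirically rather than analytically is to run the same point through the iteration in double and in a higher-precision type side by side, and report when they first disagree by more than a pixel. The sketch below uses decimal as the reference; the test point and the assumed pixel size are arbitrary illustrations:

    using System;

    class RoundoffDrift
    {
        static void Main()
        {
            // Arbitrary deep-zoom test point; decimal acts as the reference.
            double x0 = -0.743643887037151, y0 = 0.131825904205330;
            decimal dx0 = -0.743643887037151m, dy0 = 0.131825904205330m;
            double pixelSize = 1e-12; // assumed coordinate width of one pixel

            double x = 0, y = 0;
            decimal xr = 0m, yr = 0m;
            for (int i = 0; i < 255; i++)
            {
                double tx = x;
                x = x * x - y * y + x0;
                y = 2 * tx * y + y0;

                decimal txr = xr;
                xr = xr * xr - yr * yr + dx0;
                yr = 2m * txr * yr + dy0;

                if (Math.Abs(x - (double)xr) > pixelSize || Math.Abs(y - (double)yr) > pixelSize)
                {
                    Console.WriteLine($"double drifts past one pixel after {i + 1} iterations");
                    return;
                }
                if (x * x + y * y >= 4) break; // escaped before any visible drift appeared
            }
            Console.WriteLine("no drift larger than a pixel for this point");
        }
    }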

The x87 FPU internally uses more bits (80-bit extended precision) than a standard double, which can reduce the above effect.

OTHER TIPS

This is the classic decimal-versus-double pitfall. Try using decimal for all the variables and see whether the artifacts go away.

From the C# Reference page:

Compared to floating-point types, the decimal type has a greater precision and a smaller range
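
Applied to the question's function, that suggestion might look like the sketch below (untested; decimal carries roughly 28-29 significant digits, so it pushes the same problem out to much deeper zooms, at a considerable speed cost because decimal arithmetic is done in software):

    public static byte CalculateMandelbrotDecimal(decimal x0, decimal y0, decimal scale)
    {
        x0 /= scale;
        y0 /= scale;
        decimal y = 0m;
        decimal x = 0m;
        byte i = 0;
        while (x * x + y * y < 4m)
        {
            decimal tx = x;
            x = x * x - y * y + x0;
            y = 2m * tx * y + y0;
            i++;
            if (i == 0xFF) break;
        }

        return i;
    }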

There are also arbitrary-precision implementations, such as a BigFloat class.
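
BigFloat is not part of the base class library, but the same idea can be sketched with System.Numerics.BigInteger used as fixed-point numbers; all names below are illustrative, and precision is then limited only by the chosen number of fraction bits (and by speed):

    using System.Numerics;

    public static class FixedPointMandelbrot
    {
        const int FractionBits = 128; // precision knob: fraction bits per value
        static readonly BigInteger One = BigInteger.One << FractionBits;
        static readonly BigInteger Four = 4 * One;

        // Multiply two fixed-point values and rescale back to FractionBits.
        static BigInteger Mul(BigInteger a, BigInteger b) => (a * b) >> FractionBits;

        // x0 and y0 are expected as fixed-point values, built from the pixel
        // index and zoom level using integer arithmetic only, so the double
        // rounding from the question never enters the calculation.
        public static byte Calculate(BigInteger x0, BigInteger y0)
        {
            BigInteger x = 0, y = 0;
            byte i = 0;
            while (Mul(x, x) + Mul(y, y) < Four)
            {
                BigInteger tx = x;
                x = Mul(x, x) - Mul(y, y) + x0;
                y = 2 * Mul(tx, y) + y0;
                i++;
                if (i == 0xFF) break;
            }
            return i;
        }
    }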

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow