Problem

I'm rendering Buddhabrot fractals and I'm looking for some optimisations/speedups, and I was wondering if it could be worthwhile trying to do z = z^2 + c using bitwise operators. I've already simplified it down a bit.

   double zi2 = z.i*z.i;   /* cache the squares and the cross term */
   double zr2 = z.r*z.r;
   double zir = z.i*z.r;
   while (iterations < MAX_BUDDHA_ITERATIONS && zi2 + zr2 < 4) {

         /* z = z^2 + c using the cached values:
            imag = 2*z.i*z.r + c.i,  real = z.r^2 - z.i^2 + c.r */
         z.i = c.i;
         z.i += zir;
         z.i += zir;
         z.r = zr2 - zi2 + c.r;
         zi2 = z.i*z.i;   /* refresh the cache for the next escape test */
         zr2 = z.r*z.r;
         zir = z.i*z.r;
         iterations++;
   }

Solution

z^2 + c can be encapsulated in the fused multiply-accumulate operation. This is available as a single instruction on some processors and is becoming available on others; where it is not available in hardware, it is usually optimized or optimizable. For instance, C99 defines the fma family of functions to provide it. So I'd say that what you want is probably happening already and, if it's not, there's a very readable way to guarantee that it is.
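Since fma computes x*y + z with a single rounding, the whole update can be written directly in terms of it. Here is a minimal sketch in C99, assuming a struct with the question's r/i layout (the names cplx and buddha_step are mine, not from the question; fma itself is the standard <math.h> function):

   #include <math.h>

   struct cplx { double r, i; };

   /* One z = z^2 + c step:
      real = z.r^2 - z.i^2 + c.r,  imag = 2*z.r*z.i + c.i */
   static void buddha_step(struct cplx *z, struct cplx c)
   {
       double new_r = fma(z->r, z->r, fma(-z->i, z->i, c.r));
       double new_i = fma(2.0 * z->r, z->i, c.i);
       z->r = new_r;
       z->i = new_i;
   }

With GCC or Clang, compiling with -O2 and a target that has FMA (e.g. -march=native on recent x86) typically lowers these calls to single hardware instructions; otherwise link with -lm for the library fallback.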

In general, you should be highly suspicious any time your subconscious whispers that it would be faster to replace readable, maintainable code with a less readable, less maintainable, harder-to-debug solution X that you have just dreamed up. Readability and maintainability are extremely important not just for writing code well, but for sharing it and for reasoning about its correctness; computers are fast, and compilers are pretty decent.

Other tips

The compiler does not do it with bitwise operations... It's the CPU and its ALU that use bitwise operations, and they happen in parallel for all bits of a word, of course; modern processors even execute multiple machine-code instructions (like multiply) at a time.
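To make that concrete: in portable C you cannot even touch a double's bits without first copying them into an integer, so any hand-rolled "bitwise multiply" starts with a detour and then has to re-implement what the FPU already does in one instruction. A small illustration, assuming the common 64-bit IEEE-754 double (the program and its names are mine, not from the thread):

   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   int main(void)
   {
       double x = 2.0;
       uint64_t bits;
       memcpy(&bits, &x, sizeof bits);   /* the only portable type-pun */
       printf("%f = 0x%016llx\n", x, (unsigned long long)bits);
       /* layout: sign(1) | exponent(11) | mantissa(52); the ALU already
          processes all 64 bits in parallel, so there is nothing left for
          source-level bit tricks to win back. */
       return 0;
   }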

What you are asking makes no sense... Well, if you are programming an FPGA, it might make some sense, but I assume you are not...
