Question

Does anyone know if the multiply operator is faster than using the Math.Pow method? For example:

n * n * n

vs

Math.Pow(n, 3)

Solution

Basically, you should benchmark to see.

Educated Guesswork (unreliable):

In case it's not optimized to the same thing by some compiler...

It's very likely that x * x * x is faster than Math.Pow(x, 3). Math.Pow has to deal with the problem in its general case, handling fractional powers and other edge cases, while x * x * x compiles down to just a couple of multiply instructions.
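
If you'd rather measure than guess, here is a minimal sketch using Stopwatch (the iteration count and the res accumulator are arbitrary choices; the accumulator just keeps the JIT from optimizing the loops away):

    using System;
    using System.Diagnostics;

    class CubeBench {
        static void Main() {
            const int N = 100000000;
            double res = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 1; i <= N; i++) {
                double x = i;
                res += x * x * x;          // two multiply instructions
            }
            Console.WriteLine("x*x*x:    " + sw.ElapsedMilliseconds + " ms  " + res);

            res = 0;
            sw = Stopwatch.StartNew();
            for (int i = 1; i <= N; i++) {
                res += Math.Pow(i, 3);     // general-purpose library call
            }
            Console.WriteLine("Math.Pow: " + sw.ElapsedMilliseconds + " ms  " + res);
        }
    }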

OTHER TIPS

I just reinstalled Windows, so Visual Studio is not installed and the code is ugly:

using System;
using System.Diagnostics;

public static class test {

    public static void Main(string[] args) {
        MyTest();
        PowTest();
    }

    static void PowTest() {
        var sw = Stopwatch.StartNew();
        double res = 0;
        for (int i = 0; i < 333333333; i++) {
            res = Math.Pow(i, 30);
        }
        Console.WriteLine("Math.Pow: " + sw.ElapsedMilliseconds + " ms:  " + res);
    }

    static void MyTest() {
        var sw = Stopwatch.StartNew();
        double res = 0;
        for (int i = 0; i < 333333333; i++) {
            res = MyPow(i, 30);
        }
        Console.WriteLine("MyPow: " + sw.ElapsedMilliseconds + " ms:  " + res);
    }

    // Exponentiation by squaring: O(log exp) multiplications
    // instead of the exp - 1 a naive loop would need.
    static double MyPow(double num, int exp) {
        double result = 1.0;
        while (exp > 0) {
            if (exp % 2 == 1)
                result *= num;  // fold in the current bit of the exponent
            exp >>= 1;
            num *= num;         // square for the next bit
        }
        return result;
    }
}

The results:
csc /o test.cs

test.exe

MyPow: 6224 ms:  4.8569351667866E+255  
Math.Pow: 43350 ms:  4.8569351667866E+255 

Exponentiation by squaring (see https://stackoverflow.com/questions/101439/the-most-efficient-way-to-implement-an-integer-based-power-function-powint-int) is much faster than Math.Pow in my test (my CPU is a Pentium T3200 at 2 GHz). For example, exp = 30 is 11110 in binary, so the loop runs five times and performs at most 9 multiplications, instead of the 29 a naive repeated-multiplication loop would need.

EDIT: The .NET version is 3.5 SP1, the OS is Vista SP1, and the power plan is High Performance.

A few rules of thumb from 10+ years of optimization in image processing & scientific computing:

Optimizations at an algorithmic level beat any amount of optimization at a low level. Despite the conventional wisdom of "write the obvious, then optimize," this must be done at the start, not after.

Hand-coded math operations (especially SIMD SSE+ variants) will generally outperform the fully error-checked, generalized built-in ones.

Any operation where the compiler knows beforehand what needs to be done is optimized by the compiler. These include: 1. memory operations such as Array.Copy(), and 2. for loops over arrays where the loop bound is the array length, as in for (..; i < array.Length; ..) (see the sketch after this list).

Always set unrealistic goals (if you want to).
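
To illustrate rule 3, a minimal sketch of both cases, assuming using System; is in scope (whether the bounds check is actually elided depends on the JIT version):

    static double Sum(double[] values) {
        double total = 0;
        // Testing i against values.Length lets the JIT prove every
        // values[i] access is in range and elide the bounds check.
        for (int i = 0; i < values.Length; i++)
            total += values[i];
        return total;
    }

    static void Duplicate(byte[] src, byte[] dst) {
        // Array.Copy is a bulk memory operation the runtime can
        // implement far faster than an element-by-element loop.
        Array.Copy(src, dst, src.Length);
    }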

As it happens, I tested exactly this yesterday, and then saw your question now.

On my machine, a Core 2 Duo running one test thread, repeated multiplication is faster up to an exponent of 9. At an exponent of 10, Math.Pow(b, e) is faster.

However, even at an exponent of 2, the results are often not identical: there are rounding errors.

Some algorithms are highly sensitive to rounding errors. I literally had to run over a million random tests before I discovered this.
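
A quick sketch of the kind of test that exposes this (the seed and value range are arbitrary; how many mismatches you see, if any, depends on the runtime and CPU):

    using System;

    class RoundingTest {
        static void Main() {
            var rng = new Random(12345);
            int mismatches = 0;
            for (int i = 0; i < 1000000; i++) {
                double x = rng.NextDouble() * 1000;
                double a = x * x;           // repeated multiplication
                double b = Math.Pow(x, 2);  // library call
                if (a != b)
                    mismatches++;           // differs in the last bits
            }
            Console.WriteLine(mismatches + " of 1000000 results differ");
        }
    }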

I checked, and Math.Pow() is defined to take two doubles. This means that it can't simply do repeated multiplication, but has to use a more general approach. If there were a Math.Pow(double, int) overload, it could probably be more efficient.
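
A sketch of what such an overload might look like — PowInt is a hypothetical helper, not part of the framework — using the same exponentiation-by-squaring idea shown in the benchmark above, extended to negative exponents:

    // Hypothetical Math.Pow(double, int) replacement; not in the BCL.
    static double PowInt(double num, int exp) {
        if (exp < 0)
            return 1.0 / PowInt(num, -exp);  // (ignores the int.MinValue edge case)

        double result = 1.0;
        while (exp > 0) {
            if ((exp & 1) == 1)
                result *= num;  // fold in the current bit of the exponent
            exp >>= 1;
            num *= num;         // square for the next bit
        }
        return result;
    }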

That being said, the performance difference here is almost certainly trivial, so you should use whichever form is clearer. Micro-optimizations like this are almost always pointless, can be introduced at virtually any time, and should be left to the end of the development process. At that point you can check whether the software is too slow, find the hot spots, and spend your micro-optimization effort where it will actually make a difference.

This is so micro that you should probably benchmark it on your specific platforms; I don't think the results for a Pentium Pro will necessarily be the same as for an ARM or a Pentium II.

All in all, it's most likely to be totally irrelevant.

Let's use the convention x^n. Let's assume n is always an integer.

For small values of n, plain multiplication will be faster, because Math.Pow (likely, depending on the implementation) uses fancy algorithms that allow n to be non-integral and/or negative.

For large values of n, Math.Pow will likely be faster, but if your library isn't very smart, it will use the same general algorithm, which is not ideal when you know that n is always an integer. In that case you can code up exponentiation by squaring or some other fancy algorithm yourself.

Of course, modern computers are very fast, and you should probably stick to the simplest, easiest-to-read, least-likely-to-be-buggy method until you've benchmarked your program and are sure you'll get a significant speedup from a different algorithm.

I disagree that hand-built functions are always faster. The cosine functions are way faster and more accurate than anything I could write. As for pow(), I did a quick test to see how slow Math.pow() is in JavaScript, because Mehrdad cautioned against guesswork:

    var x = 1, i3, n;
    // the "homemade" version: multiply Math.cos(i3) into x 9000 times
    for (i3 = 0; i3 < 50000; ++i3) {
        for (n = 0; n < 9000; n++) {
            x = x * Math.cos(i3);
        }
    }

here are the results:

Each function was run 50,000 times:

time for 50000 Math.cos(i) calls = 8 ms 
time for 50000 Math.pow(Math.cos(i),9000) calls = 21 ms 
time for 50000 Math.pow(Math.cos(i),9000000) calls = 16 ms 
time for 50000 homemade for loop calls 1065 ms

If you don't agree, try the program at http://www.m0ose.com/javascripts/speedtests/powSpeedTest.html

Math.Pow(x, y) is typically calculated internally as Math.Exp(Math.Log(x) * y). Every power computation thus requires finding a natural log, a multiplication, and raising e to a power.
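
You can check the identity directly (a sketch; it only holds for x > 0, and whether your runtime actually implements Pow this way is an implementation detail):

    using System;

    class ExpLogIdentity {
        static void Main() {
            double x = 2.5, y = 7.3;
            double viaPow    = Math.Pow(x, y);
            double viaExpLog = Math.Exp(Math.Log(x) * y);  // x^y = e^(y ln x)
            // The two agree to within floating-point rounding.
            Console.WriteLine(viaPow + "  vs  " + viaExpLog);
        }
    }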

As I mentioned in my previous answer, only at an exponent of 10 does Math.Pow() become faster, but accuracy will be compromised by a long series of multiplications.

Licensed under: CC-BY-SA with attribution