Question

Consider these two implementations of a function:

double MyFoo::foo(std::vector<double> & v){
    double f1 = v.at(1);
    double f2 = v.at(2);
    double f3 = v.at(3);
    double f4 = v.at(4);
    double f5 = v.at(5);
    double f6 = v.at(6);
    double f7 = v.at(7);

    double ret = sin(f1) + sin(f2) + sin(f3) + sin(f4) + sin(f5) + sin(f6) + sin(f7);
    return ret;
}

and

double MyFoo::foo(std::vector<double> & v){
    double ret = sin(v.at(1)) + sin(v.at(2)) + sin(v.at(3)) + sin(v.at(4))
               + sin(v.at(5)) + sin(v.at(6)) + sin(v.at(7));
    return ret;
}

Is there any noticeable difference in execution time between these two functions? Do the local variable assignments introduce computational overhead, or does the compiler take care of the seemingly redundant locals?

P.S. The choice of sin() is completely arbitrary; my question is about the local variables, not about the operations happening inside.

Solution

Without actually building both versions and measuring them, I would GUESS that there's no difference at all. sin() tends to take quite some time anyway, so any minor optimisation around the use of local variables is likely to be marginal by comparison.

I'd expect most "good" compilers (gcc, Microsoft, etc.) to optimise away local variables that are used this trivially and produce exactly the same code for both versions.

But like I said, without actually trying both methods it's hard to say for sure. [I'd also remove the call to sin, since it's likely to "hide" any minor difference between the two variants; just adding the double values directly would be a better test.]
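
As an illustration of that suggestion, here is a minimal sketch of the stripped-down comparison. The function names and the free-function form are mine, not from the question; the point is simply that the two shapes can be compiled side by side (for instance with g++ -O2 -S, or in Compiler Explorer) and the generated assembly compared directly:

#include <vector>

// Variant A: intermediate locals, mirroring the first version in the question.
double sum_with_locals(const std::vector<double>& v) {
    double f1 = v.at(1);
    double f2 = v.at(2);
    double f3 = v.at(3);
    double f4 = v.at(4);
    double f5 = v.at(5);
    double f6 = v.at(6);
    double f7 = v.at(7);
    return f1 + f2 + f3 + f4 + f5 + f6 + f7;
}

// Variant B: the same sum with the element accesses written inline.
double sum_inline(const std::vector<double>& v) {
    return v.at(1) + v.at(2) + v.at(3) + v.at(4)
         + v.at(5) + v.at(6) + v.at(7);
}

With optimisation enabled (-O2 or higher) I'd expect both to compile to essentially identical instructions; if they don't, the difference is right there in the assembly to inspect.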

If the performance of this function in particular is of the essence, then write a benchmark. But before you start "messing" with functions, make sure you know which functions actually consume most of the time in your code: there's not much point in shaving two clock cycles off a function that runs a few dozen times when the whole execution takes several hours (that is, many billions of cycles).
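
If it does come to benchmarking, a rough sketch could look like the following. The free-function stand-ins, the iteration count and the input data are arbitrary choices of mine, not anything from the question, and a serious measurement would need more care (warm-up, repeated runs, a proper benchmarking harness):

#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

// Free-function stand-ins for the two member functions in the question.
static double foo_locals(const std::vector<double>& v) {
    double f1 = v.at(1), f2 = v.at(2), f3 = v.at(3), f4 = v.at(4);
    double f5 = v.at(5), f6 = v.at(6), f7 = v.at(7);
    return std::sin(f1) + std::sin(f2) + std::sin(f3) + std::sin(f4)
         + std::sin(f5) + std::sin(f6) + std::sin(f7);
}

static double foo_inline(const std::vector<double>& v) {
    return std::sin(v.at(1)) + std::sin(v.at(2)) + std::sin(v.at(3))
         + std::sin(v.at(4)) + std::sin(v.at(5)) + std::sin(v.at(6))
         + std::sin(v.at(7));
}

// Time repeated calls to one variant and return the elapsed seconds.
template <typename F>
static double time_it(F f, const std::vector<double>& v, int iterations) {
    volatile double sink = 0.0;  // keeps the results "used" so the loop isn't optimised away
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = sink + f(v);
    auto stop = std::chrono::steady_clock::now();
    (void)sink;
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    std::vector<double> v(8, 0.5);  // at least 8 elements so v.at(7) is valid
    const int iterations = 10000000;
    std::printf("with locals: %.3f s\n", time_it(foo_locals, v, iterations));
    std::printf("inline:      %.3f s\n", time_it(foo_inline, v, iterations));
}

The volatile accumulator is just one simple way of stopping the compiler from deleting the benchmark loop entirely. With that in place, any real difference between the two variants would show up as a consistent gap between the two timings; in practice I'd expect the numbers to be indistinguishable from run-to-run noise.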

Licensed under: CC-BY-SA with attribution