Question

Is there any performance difference between a for loop and a while loop, or is it all about semantics?

Solution

Short answer: no, they are exactly the same.

I guess it could, in theory, depend on the compiler; a really broken one might do something slightly different, but I'd be surprised.

Just for fun, here are two variants that compile down to exactly the same assembly code for me, using x86 gcc 4.3.3 as shipped with Ubuntu. You can check the assembly produced for the final binary with objdump on Linux.

#include <stdio.h>

int main()
{
#if 1
    int i = 10;
    do { printf("%d\n", i); } while(--i);
#else
    int i = 10;
    for (; i; --i) printf("%d\n", i);
#endif
}

EDIT: Here is an "oranges with oranges" while loop example that also compiles down to the same thing:

    while(i) { printf("%d\n", i); --i; }

OTHER TIPS

If your for and while loops do the same things, the machine code generated by the compiler should be (nearly) the same.

For instance, in some testing I did a few years ago,

for (int i = 0; i < 10; i++)
{
...
}

and

int i = 0;
do
{
  ...
  i++;
}
while (i < 10);

would generate exactly the same code, or (as Neil pointed out in the comments) code with one extra jmp, which won't make a big enough difference in performance to worry about.

There is no semantic difference, so there need not be any difference in the compiled code. But it does depend on the compiler, so I tried g++ 4.3.2, CC 5.5, and xlc6.

g++ and CC were identical; xlc WAS NOT.

The difference in xlc was in the initial loop entry.

extern int doit( int );
void loop1( ) {
  for ( int ii = 0; ii < 10; ii++ ) {
    doit( ii );
  }
}

void loop2() {
  int ii = 0; 
  while ( ii < 10 ) {
    doit( ii );
    ii++;
  }
}

XLC OUTPUT

.loop2:                                 # 0x00000000 (H.10.NO_SYMBOL)
    mfspr   r0,LR
    stu     SP,-80(SP)
    st      r0,88(SP)
    cal     r3,0(r0)
    st      r3,64(SP)
    l       r3,64(SP)  ### DIFFERENCE ###
    cmpi    0,r3,10
    bc      BO_IF_NOT,CR0_LT,__L40
...
.loop1:                                 # 0x0000006c (H.10.NO_SYMBOL+0x6c)
    mfspr   r0,LR
    stu     SP,-80(SP)
    st      r0,88(SP)
    cal     r3,0(r0)
    cmpi    0,r3,10    ### DIFFERENCE ###
    st      r3,64(SP)
    bc      BO_IF_NOT,CR0_LT,__La8
...

The scope of the variable in the test of the while loop is wider than the scope of variables declared in the header of the for loop.
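
As a minimal sketch of that scoping difference (the function name here is made up for illustration):

void scope_demo(void)
{
    for (int i = 0; i < 10; ++i) {
        /* i exists only inside this for loop */
    }
    /* i is out of scope here */

    int j = 0;          /* j has to be declared outside the while loop... */
    while (j < 10) {
        ++j;
    }
    /* ...so j is still alive and visible here */
}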

Therefore, if keeping a variable alive longer has performance implications as a side effect, then there will be performance implications in choosing between a while and a for loop (and in not wrapping the while in {} to limit the scope of its variables).

An example might be a concurrent collection which counts the number of iterators referring to it, and if more than one iterator exists, it applies locking to prevent concurrent modification, but as an optimisation elides the locking if only one iterator refers to it. If you then had two for loops in a function using differently named iterators on the same container, the fast path would be taken, but with two while loops the slow path would be taken. Similarly there may be performance implications if the objects are large (more cache traffic), or use system resources. But I can't think of a real example that I've ever seen where it would make a difference.

Compilers that optimize using loop unrolling will probably only do so in the for-loop case.
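
For readers unfamiliar with the term, here is a hand-unrolled sketch of what loop unrolling means; whether a given compiler actually unrolls a particular loop form is compiler-specific:

/* Original loop */
int sum8(const int *a)
{
    int sum = 0;
    for (int i = 0; i < 8; ++i)
        sum += a[i];
    return sum;
}

/* The same loop unrolled by a factor of four, as an optimizer might emit */
int sum8_unrolled(const int *a)
{
    int sum = 0;
    for (int i = 0; i < 8; i += 4) {
        sum += a[i];
        sum += a[i + 1];
        sum += a[i + 2];
        sum += a[i + 3];
    }
    return sum;
}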

Both are equivalent. It's a matter of semantics.

The only difference may lie in the do ... while construct, where you postpone the evaluation of the condition until after the body and thus may save one evaluation.

i = 1; do { ... i--; } while( i > 0 ); 

as opposed to

for( i = 1; i > 0; --i )
{
    ...
}

I write compilers. We compile all "structured" control flow (if, while, for, switch, do...while) into conditional and unconditional branches. Then we analyze the control-flow graph. Since a C compiler has to deal with general goto anyway, it is easiest to reduce everything to branch and conditional-branch instructions, then be sure to handle that case well. (A C compiler has to do a good job not just on handwritten code but also on automatically generated code, which may have many, many goto statements.)
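
As a hand-written illustration of that lowering (not the output of any particular compiler), the structured loop below and its branch-and-label form behave identically; the doit() declaration mirrors the earlier answer:

extern int doit(int);

/* Structured form */
void loop_structured(void)
{
    for (int i = 0; i < 10; ++i) {
        doit(i);
    }
}

/* Roughly the conditional/unconditional branch form it reduces to */
void loop_lowered(void)
{
    int i = 0;
test:
    if (!(i < 10)) goto done;   /* conditional branch out of the loop */
    doit(i);
    ++i;
    goto test;                  /* unconditional branch back to the test */
done:
    return;
}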

No. If they're doing equivalent things, they'll compile to the same code - as you say, it's about semantics. Choose the one that best represents what you're trying to express.

Ideally it should be the same, but eventually it depends on your compiler/interpreter. To be sure, you must measure or examine the generated assembly code.

Proof that there may be a difference: These lines produce different assembly code using cc65.

for (; i < 1000; ++i);   
while (i < 1000) ++i;

On the Atmel ATmega, while() is faster than for(). Why this is so is explained in AVR035: Efficient C Coding for AVR.

P.S. The original platform was not mentioned in the question.

continue behaves differently in for and while loops: in a for loop it still runs the increment expression before re-testing the condition; in a while loop it jumps straight back to the condition, so a manual increment at the end of the body gets skipped.
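
Here is a small sketch of that difference; the while version is deliberately buggy and spins forever once i reaches 2, because continue jumps over the manual increment:

#include <stdio.h>

void for_version(void)
{
    for (int i = 0; i < 5; ++i) {
        if (i == 2) continue;    /* skips the printf, but ++i in the header still runs */
        printf("for: %d\n", i);
    }
}

void while_version(void)
{
    int i = 0;
    while (i < 5) {
        if (i == 2) continue;    /* jumps to the test; the ++i below is skipped */
        printf("while: %d\n", i);
        ++i;                     /* never reached once i == 2: infinite loop */
    }
}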

To add another answer: In my experience, optimizing software is like a big, bushy beard being shaved off a man.

  • First you lop it off in big chunks with scissors (prune whole limbs off the call tree).
  • Then you make it short with an electric clipper (tweak algorithms).
  • Finally you shave it with a razor to get rid of the last little bit (low-level optimization).

The last is where the difference between for() and while() might, but probably won't, make a difference.

P.S. The programmers I know (who are all very good, and I suspect are a representative sample) basically go at it from the other direction.

They are the same as far as performance goes. I tend to use while when waiting for a state change (such as waiting for a buffer to be filled) and for when processing a number of discrete objects (such as going through each item in a collection).
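
As an illustrative sketch of those two habits (buffer_ready and buffer are hypothetical names, not part of the original answer):

#include <stdbool.h>
#include <stddef.h>

extern volatile bool buffer_ready;   /* hypothetical flag set elsewhere, e.g. by an ISR */
extern int buffer[64];               /* hypothetical buffer being filled */

int sum_buffer_when_ready(void)
{
    /* while: waiting for a state change */
    while (!buffer_ready) {
        /* spin (or sleep/yield) until the buffer has been filled */
    }

    /* for: processing a known number of discrete items */
    int sum = 0;
    for (size_t i = 0; i < 64; ++i) {
        sum += buffer[i];
    }
    return sum;
}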

There is a difference in some cases.

If you are at the point where that difference matters, you either need to pick a better algorithm or begin coding in assembly language. Trust me, coding in assembly is preferable to fixing your compiler version.

Is while() faster/slower than for()? Let's review a few things about optimization:

  • Compiler writers work very hard to shave cycles by emitting fewer jump, compare, increment, and other such instructions.

  • Call instructions, on the other hand, consume many more cycles, but the compiler is nearly powerless to do anything to remove them.

  • As programmers, we write lots of function calls, some because we mean to, some because we're lazy, and some because the compiler slips them in without being obvious.

  • Most of the time, it doesn't matter, because the hardware is so fast and our jobs are so small that the computer is like a beagle that wolfs down her food and begs for more.

  • Sometimes, however, the job is big enough that performance is an issue.

  • What do we do then? Where's the bigger payoff?

    • Getting the compiler to shave a few cycles off loops & such?
    • Finding function calls that don't really need to be done so much?
  • The compiler can't do the latter. Only we the programmers can.

  • We need to learn or be taught how to do this. It doesn't come naturally. We are congenitally inclined to make wrong guesses and then bet on them. Getting better algorithms is a start, but only a start. Our teachers need to teach this, if indeed they know how.

  • Profilers are a start. I do this.

The apocryphal quote from Willie Sutton, when asked "Why do you rob banks?":
"Because that's where the money is."

If you want to save cycles, find out where they are.

Probably only coding style.

  • for if you know the number of iterations.
  • while if you do not know the number of iterations (see the sketch below).
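
A short sketch of that convention:

#include <stdio.h>

void style_demo(void)
{
    /* for: the number of iterations is known up front */
    for (int i = 0; i < 10; ++i) {
        printf("%d\n", i);
    }

    /* while: the number of iterations is not known up front */
    int c;
    while ((c = getchar()) != EOF) {
        putchar(c);
    }
}
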
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow