Question

A common pattern I've encountered in my office is that when an IO operation fails, we wait a short time and retry a couple of times in the hope that it suddenly starts working.

Example of what I'm talking about:

#include <windows.h>   // WriteFile, Sleep
#include <cstdint>     // uint32_t

bool WriteAFile()
{
    uint32_t writeAttempts = 0;
    do
    {
        if (WriteFile(/*...*/))
        {
            break;                 // write succeeded, stop retrying
        }
        Sleep(50);                 // brief pause before trying again
        writeAttempts++;
    } while (writeAttempts < 3);
    return writeAttempts < 3;      // true unless all three attempts failed
}

I imagine this behaviour originally popped up to prevent failures when working with files that are temporarily locked by another process... which makes some sense... however, I fail to see how it applies to other operations.

Does repeating IO operations in this fashion increase your chances of writing 'good' data to the disk? Can it be used as a workaround for dying drives? Are there any other legitimate uses for this kind of behaviour?

PS: While I've marked this as a C++ Windows example, I'm interested to hear if there are any compelling reasons to do this with other languages/platforms as well!


Solution

As usual...

It Depends

Offhand, there are a few reasons why this might be valid/desirable:

  • you're writing to a removable drive that may not be instantly ready
  • you're writing to a network drive that might disappear and reappear a few milliseconds later (a temporary network glitch; see the sketch after this list)
  • something completely different...
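To illustrate when retrying is reasonable, here is a minimal sketch that retries WriteFile only when GetLastError() reports an error that is plausibly transient (drive not ready, sharing violation, momentary network drop). The function name WriteWithRetry, the particular error codes treated as transient, and the attempt/delay counts are illustrative assumptions, not a definitive list.

// Sketch: retry only on errors that are plausibly transient.
// The set of "transient" codes below is an assumption -- tune it
// for your environment.
#include <windows.h>
#include <cstdint>

static bool IsTransientError(DWORD err)
{
    switch (err)
    {
    case ERROR_NOT_READY:          // removable drive not ready yet
    case ERROR_SHARING_VIOLATION:  // another process briefly holds the file
    case ERROR_LOCK_VIOLATION:     // a byte-range lock is briefly held
    case ERROR_NETNAME_DELETED:    // network share dropped momentarily
    case ERROR_UNEXP_NET_ERR:      // generic transient network failure
        return true;
    default:
        return false;              // anything else is treated as permanent
    }
}

bool WriteWithRetry(HANDLE file, const void* data, DWORD size)
{
    const uint32_t kMaxAttempts = 3;
    for (uint32_t attempt = 0; attempt < kMaxAttempts; ++attempt)
    {
        DWORD written = 0;
        if (WriteFile(file, data, size, &written, nullptr) && written == size)
        {
            return true;           // write succeeded
        }
        if (!IsTransientError(GetLastError()))
        {
            return false;          // permanent error: retrying won't help
        }
        Sleep(50);                 // brief pause before the next attempt
    }
    return false;                  // still failing after all attempts
}

The difference from the blind retry in the question is that a permanent error (invalid handle, bad parameter, failing disk) returns immediately instead of burning 150 ms before giving up.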

One thing is almost certain: that kind of retry mechanism was not coded accidentally or casually!

I suggest tracking down the original author and asking why he or she did that... there might be a very good reason, or the original reason may have long since become obsolete.
