Question

I am working on a project for a device that must constantly write information to a storage device. The device will need to be able to lose power but accurately retain the information that it collects up until the time that power is lost.

I've been looking for answers about what happens if power is lost on a system like this. Are there any issues with losing power without closing the file? Is data corruption a possibility?

Thank you


Solution

The whole subject of "safely storing data when power may be cut" is hard to solve generically; the right solution depends on the type of data, the rate at which it is stored, and so on.

To retain information while power is off, the data needs to be stored in non-volatile memory (flash, EEPROM, or battery-backed RAM). That part is a hardware decision.

Can you "lose data written to a file"? Yes, it's entirely possible that the file will not be written correctly if power to the storage device is lost while the system is in the middle of a write.
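One standard software-side mitigation for a torn write is the write-to-temporary-then-rename pattern: flush the new contents to a temporary file, fsync it, then atomically rename it over the old file, so a power cut leaves you with either the complete old version or the complete new version. A minimal sketch (the function name `atomic_write` is my own, not from the question):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Replace `path` with `data` so that a crash mid-operation leaves
    either the old contents or the new contents, never a half-write."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the data through OS buffers to the device
        os.replace(tmp, path)      # atomic rename on POSIX filesystems
        # fsync the directory so the rename itself is durable
        dfd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        try:
            os.unlink(tmp)         # clean up the temp file on failure
        except OSError:
            pass
        raise
```

Note that this only protects a whole-file snapshot; for a continuously growing log, the append-and-fsync approach discussed below fits better.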

The answer to this really depends on how much freedom you have to build or customise the hardware to cope with this situation. Systems designed for high reliability have a way to detect a power cut and can keep running for several seconds (sometimes much longer) afterwards; when the power cut happens, the system goes into a "save all data and shut down nicely" mode. Typically this is done with an uninterruptible power supply (UPS), which has an alarm mechanism signalling that external power is gone; when the system receives that signal, it starts an emergency shutdown.

If you don't have any way to connect a UPS and shut down in an orderly fashion, other features, such as a journaling filesystem, can give you a good set of data, but they can't guarantee complete data. You also need to design your file format so that "cut-off data" doesn't ruin the whole file. The classic example is a zip file, which stores its "directory" (the list of contents) at the very end: you can have 99.9% of the file intact, yet the missing 0.1% is exactly what you need to decode all the content.
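The fix for the zip-file problem is to make each record self-describing rather than relying on metadata at the end of the file. One common scheme (a sketch, not from the answer: the names `encode_record`/`decode_records` and the length-plus-CRC layout are illustrative) prefixes every record with its length and a checksum, so a reader can recover everything before a truncated or corrupt tail:

```python
import struct
import zlib

def encode_record(payload: bytes) -> bytes:
    # Layout: [4-byte little-endian length][4-byte CRC32][payload].
    # Each record carries everything needed to validate itself.
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

def decode_records(blob: bytes):
    """Decode records up to the first truncated or corrupt one."""
    records, off = [], 0
    while off + 8 <= len(blob):
        length, crc = struct.unpack_from("<II", blob, off)
        payload = blob[off + 8 : off + 8 + length]
        if len(payload) < length or zlib.crc32(payload) != crc:
            break  # truncated or corrupt tail: stop cleanly, keep the rest
        records.append(payload)
        off += 8 + length
    return records
```

With this layout, a power cut mid-write costs you at most the final record, never the whole file.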

Other tips

Yes, data corruption is definitely a possibility.

However there are a few guidelines to minimize it in a purely software way:

  • Use a journalling filesystem and put it in its maximum journal mode (e.g. for ext3/ext4, use data=journal, no less).
  • Avoid software buffers. If you don't have a choice, flush them ASAP.
  • Synchronize the filesystem ASAP (either through the sync/syncfs/fsync system calls, or using the sync mount option).
  • Never overwrite existing data, just append new data to existing files.
  • Be prepared to deal with incomplete data records.

This way, even if you lose data, it will only be the last few bytes written, and the filesystem as a whole won't be corrupted.

You'll notice that I assumed a Unix-like OS. As far as I know, Windows doesn't give you enough control to enforce those kinds of constraints on the filesystem.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow