Question

I'm working with a program that uses qdbm to maintain a key-value store (qdbm is linked into the program). In certain cases, the process puts a value in the qdbm database and then restarts itself by calling an external init script (via system()). It appears that sometimes a value written to the qdbm database doesn't actually stick, and I'm wondering if it could be due to the data not getting flushed to disk before the process is killed via SIGTERM.

Since qdbm does its writes using the write() system call (rather than, say, the buffered fwrite() library function), I would think the data lands in kernel buffers and the Linux kernel will eventually flush everything to disk (the system doesn't get restarted, just the process). Also, close() does get called on the file descriptor before the process is killed.

So, is my understanding correct, or do I need to add some fdatasync() or similar calls in there somewhere? Links to authoritative references on the semantics here would also be appreciated.

Solution

Normally, data that the application has already handed to the kernel with write() is not affected by the application exiting or being killed in any way. Exiting or being killed implicitly closes all file descriptors, so there is no difference from an explicit close(): the kernel flushes its buffers to disk afterwards on its own schedule. So no fdatasync() or similar calls are necessary for your scenario.

There are two exceptions to this:

  • if the application uses user-space buffering (i.e., it does not call the write() system call directly, but caches data in a user-space buffer, as fwrite() does), those buffers may not get flushed unless a proper user-space close (fclose()) or flush (fflush()) is executed; being killed by SIGKILL will definitely lose the contents of those buffers,

  • if the kernel itself dies (power loss, kernel crash, etc.), data still sitting in kernel buffers may never have reached the disk and will then be lost; only fsync()/fdatasync() guards against that.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow