As its name suggests, the flock utility wraps the flock(2) system call, whose documentation states that it associates the lock with the open file description the descriptor refers to. An open file description is removed when the last descriptor referring to it is closed, and descriptors are themselves transient resources that the kernel reclaims when the process exits in any fashion, including kill -9. A forcible shutdown of the machine wipes the entire state of the running system, so locks associated with open files cannot survive that scenario either. Therefore, as long as the script exits, by whatever means, its lock is released and other instances are free to run.
One realistic possibility of blocking is a bug that causes the script to hang indefinitely, which will indeed prevent other instances from running. If this is expected to occur in practice, it can be handled by writing the script's PID ($$) into the lock file immediately after acquiring the lock. A process waiting for the lock would specify a timeout using flock's -w option; if the timeout expires, it reads the PID of the hung lock owner from the lock file, kills the owner, and retries the locking procedure.
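The scheme can be sketched as follows. The lock file path and the 10-second timeout are arbitrary choices, and note the caveat that the stored PID may have been recycled by an unrelated process by the time the waiter sends the signal:

```shell
lockfile=/tmp/myscript.lock

# Open the lock file on a dedicated descriptor.
exec 200>"$lockfile"

# Wait up to 10 seconds for the lock; on timeout, assume the
# current owner is hung, kill it, and retry.
while ! flock -w 10 200; do
    owner=$(cat "$lockfile")
    [ -n "$owner" ] && kill "$owner" 2>/dev/null
done

# Lock held: publish our PID so a future waiter can find us.
echo $$ > "$lockfile"

# ... critical section runs here, protected by the lock ...
```

In the uncontended case the loop body never runs: flock succeeds immediately and the script proceeds to record its PID.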
A theoretical problem is the hardcoded file descriptor number 200: if the script already has descriptor 200 open, or is spawned from a program that leaves it open without the close-on-exec flag, the redirection will clobber an existing descriptor. This is highly unlikely to occur in practice.
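If even that remote risk is unacceptable, bash 4.1 and later can allocate an unused descriptor automatically with the {varname} redirection form, so no number is hardcoded at all (path and variable name below are arbitrary):

```shell
lockfile=/tmp/myscript.lock

# {lockfd} asks bash to pick a free descriptor (numbered 10 or
# above) and store its number in the variable lockfd.
exec {lockfd}>"$lockfile"

# Use the dynamically chosen descriptor with flock as usual.
flock -n "$lockfd" && echo "locked on fd $lockfd"
```

The rest of the script refers to "$lockfd" wherever it would have written the literal 200.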