It is now possible to create file locks using the filelock package (GitHub).
To use this with parSapply() you would need to edit your loop so that when the file is locked the process does not simply fail, but instead retries or calls Sys.sleep() for a short amount of time before trying again. However, I am not certain how this retry loop will affect your performance.
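As a sketch of that retry pattern, filelock's lock() accepts a timeout in milliseconds and returns NULL if the lock could not be acquired in time, so a worker can back off and try again. The function and file names below are hypothetical placeholders, not part of your code:

```r
library(filelock)

# Hypothetical helper: append one row of results to a shared CSV,
# retrying briefly whenever another worker holds the lock.
append_row_locked <- function(row, results_file, max_tries = 10) {
  lock_path <- paste0(results_file, ".lock")
  for (i in seq_len(max_tries)) {
    # timeout is in milliseconds; lock() returns NULL if it expires
    lck <- lock(lock_path, exclusive = TRUE, timeout = 1000)
    if (!is.null(lck)) {
      on.exit(unlock(lck), add = TRUE)
      write.table(row, results_file, append = TRUE, sep = ",",
                  col.names = !file.exists(results_file),
                  row.names = FALSE)
      return(TRUE)
    }
    Sys.sleep(runif(1, 0.05, 0.25))  # brief random back-off before retrying
  }
  FALSE  # give up after max_tries attempts
}
```

Note that every worker blocks while one of them writes, which is exactly the serialization that can hurt throughput.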
Instead, I recommend creating per-worker files to hold your data, which eliminates the need for a lock file without reducing your performance. Afterwards you can weave these files together into your final results file.
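A minimal sketch of that approach, keying each worker's file on its process ID so no two workers ever write to the same file (the placeholder computation and directory are assumptions for illustration):

```r
library(parallel)

out_dir <- tempdir()  # in practice, a directory all workers can reach

cl <- makeCluster(2)
parSapply(cl, 1:100, function(i, out_dir) {
  res <- data.frame(input = i, output = i^2)  # placeholder computation
  # One file per worker process: no lock needed, no contention
  path <- file.path(out_dir, sprintf("worker_%d.rds", Sys.getpid()))
  chunk <- if (file.exists(path)) rbind(readRDS(path), res) else res
  saveRDS(chunk, path)
  i
}, out_dir = out_dir)
stopCluster(cl)

# Weave the per-worker files into the final results table
files <- list.files(out_dir, pattern = "^worker_.*\\.rds$", full.names = TRUE)
final <- do.call(rbind, lapply(files, readRDS))
```

Each worker processes its tasks serially, so appending to its own file is safe, and the final rbind() happens once, on the master.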
If size is an issue, you can use disk.frame to work with data that is larger than your system's RAM.
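A minimal sketch, assuming the disk.frame package is installed; it splits a data frame into on-disk chunks that are processed by background workers, so the full dataset never has to fit in memory at once:

```r
library(disk.frame)
setup_disk.frame(workers = 2)  # backend workers that process chunks

# Convert an in-memory data frame to chunked on-disk storage;
# for data already on disk, csv_to_disk.frame() reads CSVs directly.
big <- as.disk.frame(data.frame(x = 1:1e5, y = rnorm(1e5)))

head(collect(big))  # collect() brings the data back into memory
```

For your case, the woven results file could be loaded this way and then summarized chunk by chunk.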