Question

I'm writing a service that watches a directory at a fixed location; if the directory exists and a certain process has not run in n minutes, the service deletes that directory's files and all sub-directory files.

The catch is that this directory is constantly being written to. As soon as the service detects a build-process trigger, files are copied into that directory, and the deletion must not interfere with files that are still being copied in.

I came up with a design that I think is mostly safe, but it's expensive, and I have a feeling there is a cleverer way to do it. I'm looking to make this process cheaper and faster, if possible, and safer, if I'm overlooking any important caveats that aren't self-evident. This isn't the full code, just the deletion logic for the files.

My main goal is to make this cheaper without compromising robustness, and to cover any obvious cases that could break it that aren't addressed in the diagram below.
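The question doesn't include the actual code, so as a point of reference, the check-then-delete logic described above might look roughly like this. The path and threshold names here are hypothetical placeholders, and "has not run in n minutes" is approximated by the newest modification time found under the directory:

```python
import os
import shutil
import time

BUILD_DIR = "/var/builds/output"   # hypothetical path; the real location is unspecified
STALE_MINUTES = 10                 # stands in for the "n minutes" threshold


def last_activity(path):
    """Return the most recent mtime found anywhere under `path`."""
    latest = os.path.getmtime(path)
    for root, dirs, files in os.walk(path):
        for name in files + dirs:
            try:
                latest = max(latest, os.path.getmtime(os.path.join(root, name)))
            except OSError:
                pass  # entry vanished between listing and stat; ignore it
    return latest


def delete_if_stale(path=BUILD_DIR, stale_minutes=STALE_MINUTES):
    """Delete the directory's contents only if nothing has touched it recently."""
    if not os.path.isdir(path):
        return False
    if time.time() - last_activity(path) < stale_minutes * 60:
        return False  # still being written to; leave it alone
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            shutil.rmtree(full, ignore_errors=True)
        else:
            os.remove(full)
    return True
```

Note the inherent race in this shape: a build can start between the staleness check and the delete, which is exactly the hazard the question is about.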

Note: Although I appreciate software suggestions, I'm looking to make this on my own.

Current suggestions:

  1. Use an alternative (currently unspecified) to the file system for queuing -kevin cline

  2. Rename build directory so that it won't be interacted with when a build event triggers -avnr

(Diagram of the current deletion logic; image not shown.)

Solution

You could rename the directory to a new name and create a new empty directory under the original name, with both steps performed under the same lock (e.g., by locking the parent directory, though that depends on your specific file system). Then release the lock and empty the renamed directory at your leisure. IMHO, using this method you should be able to skip renaming each individual file.
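A minimal sketch of that rename-then-recreate approach might look like the following. The answer leaves the locking mechanism open (it depends on the file system), so a process-local `threading.Lock` stands in here for whatever real lock the system would use (a lock file, `flock` on the parent directory, etc.); the function name and naming scheme for the doomed directory are likewise my own:

```python
import os
import shutil
import threading
import uuid

# Placeholder for the real lock; cross-process setups would need a lock
# file or an OS-level advisory lock instead.
_swap_lock = threading.Lock()


def swap_and_clear(path):
    """Replace `path` with a fresh empty directory, then delete the old
    contents outside the critical section."""
    doomed = path + ".deleting-" + uuid.uuid4().hex
    with _swap_lock:
        os.rename(path, doomed)  # atomic on POSIX within one filesystem
        os.makedirs(path)        # writers that arrive now see an empty dir
    # The expensive recursive delete happens after the lock is released,
    # so a build trigger is blocked only for the two cheap steps above.
    shutil.rmtree(doomed, ignore_errors=True)
```

The key property is that the critical section contains only two cheap metadata operations, so the window in which a build trigger could collide with the cleanup is tiny, and the slow recursive delete runs against a directory no writer can reach by name.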

Licensed under: CC-BY-SA with attribution