Question

I'm working on a process that will eventually run on a CentOS (latest) virtual machine; I'm developing on Ubuntu 12.04 LTS...

So, I have incron set to monitor my drop folder with IN_CLOSE_WRITE, so that when a file is written into it a rather resource-intensive script (ImageMagick image processing) is run on the file. This all works fine, unless too many files are dropped at once. The script, as I said, is resource intensive, and if more than four or so instances run concurrently my development machine is brought to its knees (the eventual virtual machine will be beefier, but I foresee cases where perhaps HUNDREDS of files are dropped at once!).

The dangerous incrontab entry:

/path/to/dropfolder IN_CLOSE_WRITE bash /path/to/resourceintensivescript.sh $@/$#

So the question is: how do I limit the number of jobs spawned by incrond? I tried using GNU parallel but couldn't figure out how to make it work...

For example:

/path/to/dropfolder IN_CLOSE_WRITE parallel --gnu -j 4 bash /path/to/resourceintensivescript.sh $@/$#

seems to do nothing :/

and:

/path/to/dropfolder IN_CLOSE_WRITE;IN_NO_LOOP bash /path/to/resourceintensivescript.sh $@/$#

ends up missing files :P

Ideas on how to deal with this?


Solution

A very basic way to do this is to simply count the matching processes with ps and grep... something like:

    processName=myprocess

    if [ "$(ps -ef | grep -v grep | grep "${processName}" | wc -l)" -lt 4 ]
    then
      # fewer than 4 instances running: safe to start another
      bash /path/to/resourceintensivescript.sh "$1"
    fi
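To see the counting check in action outside incron, here is a self-contained sketch that uses `sleep 30` as a stand-in for the resource-intensive script (the process name and the limit of 4 are just placeholders):

```shell
#!/bin/bash
# Start three dummy "workers" in the background.
sleep 30 & sleep 30 & sleep 30 &

# Count them the same way the snippet above does.
count=$(ps -ef | grep -v grep | grep "sleep 30" | wc -l)
echo "running instances: ${count}"

if [ "${count}" -lt 4 ]
then
  echo "under the limit, safe to start another"
fi

# Clean up the dummy workers.
kill $(jobs -p) 2>/dev/null
```

Note that `grep -v grep` is needed because the `grep "sleep 30"` process itself would otherwise show up in the ps listing and inflate the count.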

And with a retry loop, so the job waits its turn instead of being skipped:

processName=myprocess

while true
do
  if [ "$(ps -ef | grep -v grep | grep "${processName}" | wc -l)" -lt 4 ]
  then
    # fewer than 4 instances running: start the job and stop waiting
    bash /path/to/resourceintensivescript.sh "$1"
    break
  fi
  sleep 5
done
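Putting the loop to work: incron could call a small wrapper instead of the heavy script directly, so each event waits its turn. The wrapper name (`throttle.sh`), the limit of 4, and the 5-second poll interval are all assumptions to adjust:

```shell
#!/bin/bash
# throttle.sh -- hypothetical wrapper; the incrontab entry would become:
#   /path/to/dropfolder IN_CLOSE_WRITE bash /path/to/throttle.sh $@/$#
# so "$1" is the full path of the file that triggered the event.
processName=resourceintensivescript.sh
maxJobs=4

while true
do
  # Count running instances of the heavy script.
  running=$(ps -ef | grep -v grep | grep "${processName}" | wc -l)
  if [ "${running}" -lt "${maxJobs}" ]
  then
    bash "/path/to/${processName}" "$1"
    break
  fi
  sleep 5
done
```

Be aware this check-then-start is racy: several wrappers can pass the test at the same instant, so treat the limit as approximate rather than exact.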

Other tips

You can use the sem utility that comes with GNU parallel. A plain parallel invocation can't help here: incrond starts a fresh parallel process for every event, so the separate -j 4 limits never coordinate with each other, and with no job list on stdin each one exits without running anything. sem, by contrast, takes a named semaphore (--id) shared across invocations, so jobs beyond the limit are queued rather than dropped:

/path/to/dropfolder IN_CLOSE_WRITE sem --gnu --id myjobname -j 4 /path/to/resourceintensivescript.sh $@/$#
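As a further sketch (standard sem options, though untested here): if a later batch step needs to know when the queue has drained after a big drop, sem --wait blocks until every job queued under the same --id has finished:

```shell
# Run from a cron job or batch script, not from incrontab:
sem --gnu --id myjobname --wait
```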
Licensed under: CC-BY-SA with attribution