Question

When issuing a new build that updates the worker code, how do I restart the Celery workers gracefully?

Edit: What I intend to do is something like this (a sketch follows the list):

  • Worker is running, probably uploading a 100 MB file to S3
  • A new build arrives
  • The worker code has changed
  • The build script fires a signal to the worker(s)
  • New workers are started with the new code
  • Worker(s) that received the signal exit after finishing their current job
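A minimal sketch of that flow, assuming the old workers were started with pidfiles (the paths and node name below are hypothetical) and relying on the fact that Celery treats TERM as a warm shutdown: the worker stops accepting new tasks and exits once the ones in progress (such as that 100 MB S3 upload) have finished.

import glob
import os
import signal
import subprocess

# Hypothetical pidfile location; point this at wherever your workers
# actually write their pidfiles.
OLD_PIDFILES = '/var/run/celery/*.pid'

# 1. Ask the old workers to exit after their current task completes.
#    TERM triggers Celery's warm shutdown.
for pidfile in glob.glob(OLD_PIDFILES):
    with open(pidfile) as f:
        os.kill(int(f.read().strip()), signal.SIGTERM)

# 2. Start fresh workers that import the new code. The node name and
#    pidfile path are placeholders.
subprocess.check_call([
    'celery', 'multi', 'start', 'w_new', '-A', 'proj', '-l', 'info',
    '--pidfile=/var/run/celery/%n_new.pid',
])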

Solution

The newly recommended way of restarting a worker is documented here: http://docs.celeryproject.org/en/latest/userguide/workers.html#restarting-the-worker

$ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
$ celery multi restart 1 --pidfile=/var/run/celery/%n.pid
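Here %n expands to the node name, so each node gets its own pidfile; make sure the /var/run/celery directory exists and is writable by the user the workers run as.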

According to http://ask.github.com/celery/userguide/workers.html#restarting-the-worker you can restart a worker by sending it a HUP signal:

 ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | xargs kill -HUP
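On HUP the worker replaces itself with a new instance started with the same arguments, so it picks up the new code. Per the docs, this only works when the worker is running in the background as a daemon, and since the worker is responsible for restarting itself it is considered fragile for production use.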

Other tips


If you're going the kill route, pgrep to the rescue:

kill -9 `pgrep -f celeryd`

Mind you, this is not a long-running task, and I don't care if it terminates brutally; I'm just reloading new code during dev. I'd go the restart-service route if it were more sensitive.

You can do:

celery multi restart w1 -A your_project -l info  # restart workers


You should look at Celery's autoreloading.
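A minimal sketch, assuming Celery 3.x, where autoreloading was an experimental feature exposed as the worker's --autoreload flag and the CELERYD_AUTORELOAD setting (the feature was removed in Celery 4):

# celeryconfig.py: development only. Restarts worker processes when
# the imported task modules change on disk.
CELERYD_AUTORELOAD = True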

What should happen to long-running tasks? I like it this way: long-running tasks should finish their job. Don't interrupt them; only new tasks should get the new code.

But this is not possible at the moment: https://groups.google.com/d/msg/celery-users/uTalKMszT2Q/-MHleIY7WaIJ

I have repeatedly tested the -HUP solution using an automated script, but find that about 5% of the time, the worker stops picking up new jobs after being restarted.

A more reliable solution is:

stop <celery_service>
start <celery_service>

which I have used hundreds of times now without any issues.

From within Python, you can run:

import subprocess

# Assumes Upstart-style `stop`/`start` commands (as above); on SysV-style
# systems the equivalent would be `service celery_service stop`.
service_name = 'celery_service'
for command in ('stop', 'start'):
    subprocess.check_call([command, service_name])

Might be late to the party. I use:

sudo systemctl stop celery
sudo systemctl start celery
sudo systemctl status celery
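A single sudo systemctl restart celery does the stop/start pair in one step.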
