Ensure that queued tasks are not lost by enabling task_reject_on_worker_lost for Celery 4
In Celery 3, the resource_manager queue loses a currently running _queue_reserved_task if the resource manager is restarted with
sudo systemctl restart pulp_resource_manager.
The task is lost from the queue, but its TaskStatus record incorrectly remains in the waiting state, even though the task will never run.
Note that if you instead run
sudo pkill -9 -f resource_manager and then
sudo systemctl start pulp_resource_manager, the task is not lost.
sudo systemctl stop pulp_workers
pulp-admin rpm repo sync run --repo-id zoo
qpid-stat -q  # observe that the queue depth of the resource_manager queue is 1
sudo systemctl restart pulp_resource_manager
qpid-stat -q  # observe that the queue depth of the resource_manager queue is 0
pulp-admin tasks list -s waiting  # observe that the lost task is listed as 'waiting', but it will never run because it is gone
We need to make sure that this doesn't happen in Celery 4. There's a config option, task_reject_on_worker_lost, that should prevent this:
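A minimal sketch of the relevant Celery 4 settings (the config module name is hypothetical; note that task_reject_on_worker_lost only takes effect when task_acks_late is also enabled):

```python
# celeryconfig.py -- sketch, not Pulp's actual configuration

# Acknowledge a message only after the task finishes, rather than when
# it is received, so the broker still holds the message while it runs.
task_acks_late = True

# If the worker process executing a task dies (e.g. it is killed during
# a restart), reject and requeue the message instead of losing it.
# This setting has no effect unless task_acks_late is enabled.
task_reject_on_worker_lost = True
```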
Also, we need to apply this fix to both Pulp 2 AND Pulp 3.
#8 Updated by daviddavis over 4 years ago
So I was able to reproduce the behavior in standalone celery where messages are persisted on warm shutdown even if task_reject_on_worker_lost is not set. It turns out that if you run
pkill -f celery instead of
kill $CHILD_PROCESS_ID, the message gets persisted.
This is why we're seeing messages persisted when shutting down pulp_resource_manager via systemctl: it kills (or warm-shuts-down) both processes. I have no idea why this is. I can open an upstream celery issue, but this behavior sounds pretty much the same as some existing bugs.
That said, the message persisting is not a problem for us. We're concerned about messages being lost; if a message gets persisted, it simply runs the next time pulp starts up. We're also not concerned about double execution here, since pulp_workers are not running.
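The persistence behavior above follows from late acknowledgement at the broker: a message that was delivered but never acked is returned to the queue when the consumer's connection drops, so it runs again after restart. A broker-free sketch of that semantics (all names here are illustrative, not Pulp or Celery APIs):

```python
from collections import deque

class Broker:
    """Toy model of at-least-once delivery with late acknowledgement."""
    def __init__(self):
        self.ready = deque()   # messages waiting to be delivered
        self.unacked = {}      # delivered but not yet acknowledged

    def publish(self, msg):
        self.ready.append(msg)

    def deliver(self):
        msg = self.ready.popleft()
        self.unacked[id(msg)] = msg  # held until the consumer acks
        return msg

    def ack(self, msg):
        del self.unacked[id(msg)]

    def requeue_lost(self):
        # Consumer connection dropped (worker killed before acking):
        # all unacked messages go back to the front of the queue.
        for msg in self.unacked.values():
            self.ready.appendleft(msg)
        self.unacked.clear()

broker = Broker()
broker.publish("sync zoo repo")
msg = broker.deliver()     # worker picks up the task; no ack yet (acks_late)
broker.requeue_lost()      # worker is killed before acking -> message requeued
assert broker.deliver() == "sync zoo repo"  # task runs on the next startup
```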
#13 Updated by daviddavis over 4 years ago
I would recommend using the following workflow for testing, as it's a bit more precise: it kills only the child worker process. Using
sudo systemctl restart pulp_resource_manager will kill both the child and the parent, which can leave the message in the queue and thus produce a false positive.
sudo systemctl stop pulp_workers  # may need to wait 30 seconds for this to die
pulp-admin rpm repo sync run --repo-id zoo --bg
qpid-stat -q  # observe that the queue depth of the resource_manager queue is 1
ps auxf | grep resource_manager  # grab the child process id (e.g. 12345)
sudo kill 12345
qpid-stat -q  # observe that the queue depth of the resource_manager queue is still 1
sudo systemctl restart pulp_resource_manager
sudo systemctl start pulp_workers  # may need to wait 30 seconds for this to start and pick up the task
pulp-admin tasks list -s waiting  # should be empty