Issue #1967

Duplicate instances of pulp_celerybeat on the same host will cause some tasks to be dispatched multiple times

Added by dalley almost 8 years ago. Updated almost 5 years ago.

Status: CLOSED - WONTFIX
Priority: Normal
Assignee: -
Category: -
Sprint/Milestone: -
Start date:
Due date:
Estimated time:
Severity: 1. Low
Version:
Platform Release:
OS:
Triaged: Yes
Groomed: No
Sprint Candidate: No
Tags: Pulp 2
Sprint:
Quarter:

Description

1. Run "pstart"
2. From the same machine, run "celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler"
3. From a new terminal window, create a scheduled sync using the command "pulp-admin rpm repo sync schedules create --repo-id zoo -s 2016-06-02T14:49:30+00:00", but modify the date and time so that the schedule falls in the near future relative to your current time. You may also use a different repo if desired.
4. Wait until the scheduled time from step 3 has passed
5. Execute the command "pulp-admin tasks list --all"
6. Observe that the sync and publish tasks were executed twice, instead of once

Background

The scheduler uses the following logic to prevent multiple instances from actively dispatching tasks:
https://github.com/pulp/pulp/blob/a63536ac2384626f58939e3a41d7f2f31bd50750/server/pulp/server/async/scheduler.py#L245

celerybeat_name = constants.SCHEDULER_WORKER_NAME + "@" + platform.node()

....

# Updating the current lock if lock is on this instance of celerybeat
result = CeleryBeatLock.objects(celerybeat_name=celerybeat_name).\
    update(set__timestamp=datetime.utcnow())

# If current instance has lock and updated lock_timestamp, call super
if result == 1:
    _logger.debug(_('Lock updated by %(celerybeat_name)s')
                  % {'celerybeat_name': celerybeat_name})
    ret = self.call_tick(self, celerybeat_name)

If two instances of pulp_celerybeat share the same name (i.e. when they run on the same host, since platform.node() returns the hostname), each is able to update the same CeleryBeatLock document without violating the uniqueness constraint. Each instance therefore believes it has acquired the lock, and both commence dispatching tasks.
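
For illustration, here is a minimal, self-contained sketch of that race (not the Pulp code: the constant value and the dict standing in for the CeleryBeatLock collection are stand-ins):

import platform
from datetime import datetime

# Stand-ins for constants.SCHEDULER_WORKER_NAME and the CeleryBeatLock
# collection, which in Pulp is a MongoEngine document keyed by celerybeat_name.
SCHEDULER_WORKER_NAME = 'scheduler'
celerybeat_lock = {}  # celerybeat_name -> last refresh timestamp


def refresh_lock(celerybeat_name):
    # Mimics CeleryBeatLock.objects(celerybeat_name=...).update(set__timestamp=...):
    # returns 1 if a lock document with this name exists and was refreshed, else 0.
    if celerybeat_name in celerybeat_lock:
        celerybeat_lock[celerybeat_name] = datetime.utcnow()
        return 1
    return 0


# Two pulp_celerybeat processes on the same host build identical names ...
name_one = SCHEDULER_WORKER_NAME + '@' + platform.node()
name_two = SCHEDULER_WORKER_NAME + '@' + platform.node()

# ... so once either process has created the lock document,
celerybeat_lock[name_one] = datetime.utcnow()

# both see result == 1 and both go on to dispatch the scheduled tasks.
assert refresh_lock(name_one) == 1
assert refresh_lock(name_two) == 1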

The correct behavior here would be to ensure that regardless of how many are running, only one scheduler is able to dispatch tasks.
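
One generic way to get that property, sketched purely for illustration (this is not Pulp's implementation; the in-memory dict and owner token are assumptions): key the lock on a single well-known id and let a process refresh it only if it already owns it, using a token that is unique per process rather than per host.

import platform
import uuid
from datetime import datetime

# Illustrative in-memory lock table; a real fix would need an atomic
# compare-and-set in the database (e.g. a conditional update/upsert).
LOCK_ID = 'celerybeat'   # one well-known lock, not one per hostname
lock_table = {}          # LOCK_ID -> (owner_token, timestamp)

# Unique per scheduler process, even when two run on the same host.
OWNER_TOKEN = '%s/%s' % (platform.node(), uuid.uuid4())


def try_hold_lock():
    # Acquire the lock if it is free, or refresh it only if this process owns it;
    # any additional scheduler, same host or not, gets False and must not dispatch.
    owner, _timestamp = lock_table.get(LOCK_ID, (None, None))
    if owner is None or owner == OWNER_TOKEN:
        lock_table[LOCK_ID] = (OWNER_TOKEN, datetime.utcnow())
        return True
    return False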

dkliban pointed out that users shouldn't run two schedulers on the same machine anyway, but that some users are very likely to do so regardless, which is why this bug was filed.


Related issues

Related to Pulp - Story #2632: As a developer I want to reevaluate worker issues to see if they have been resolved by moving from Celery3 to Celery4 (CLOSED - WONTFIX)

#1

Updated by mhrivnak almost 8 years ago

If we just add the PID to the uniqueness formula, how far would that get us toward resolving this issue?
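
A rough sketch of what that might look like (hypothetical naming only, not a patch): appending os.getpid() makes the name unique per process, so the timestamp update filtered by celerybeat_name can no longer match a lock document written by a sibling scheduler on the same host.

import os
import platform

SCHEDULER_WORKER_NAME = 'scheduler'  # stand-in for the value in pulp's constants

# Current scheme: identical for every scheduler process on one host.
current_name = SCHEDULER_WORKER_NAME + '@' + platform.node()

# PID-qualified scheme: distinct per process on the same host.
pid_qualified_name = '%s@%s:%d' % (SCHEDULER_WORKER_NAME, platform.node(), os.getpid())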

#2

Updated by amacdona@redhat.com almost 8 years ago

  • Severity changed from 2. Medium to 1. Low
  • Triaged changed from No to Yes
#3

Updated by bizhang about 7 years ago

  • Related to Story #2632: As a developer I want to reevaluate worker issues to see if they have been resolved by moving from Celery3 to Celery4 added
#4

Updated by bmbouter almost 5 years ago

  • Status changed from NEW to CLOSED - WONTFIX
#5

Updated by bmbouter almost 5 years ago

Pulp 2 is approaching maintenance mode, and this Pulp 2 ticket is not being actively worked on. As such, it is being closed as WONTFIX. Pulp 2 is still accepting contributions though, so if you want to contribute a fix for this ticket, please reopen or comment on it. If you don't have permissions to reopen this ticket, or you want to discuss an issue, please reach out via the developer mailing list.

#6

Updated by bmbouter almost 5 years ago

  • Tags Pulp 2 added
