Story #898
As a user I can run multiple pulp_resource_managers concurrently with all of them actively participating
Description
This story has some database schema changes, a small coding part, and a good deal of testing. The FCFS ordering should be tested first, before any of the coding changes happen.
Current ReservedResource Design
The ReservedResource collection needs to have its database schema adjusted so that we can interact with it in a race-condition-free way. Currently the mongoengine model defines:
<task_id, worker_name, resource_id>
The only restriction is that task_id needs to be unique. Generally one record is created for each task id.
Schema Changes
Having multiple records per resource prevents us from adding the indexes necessary to make writing atomic code for this collection feasible. Consider this schema:
from mongoengine import Document, ListField, StringField

class ReservedResource(Document):
    resource_id = StringField(primary_key=True, unique=True)  # ensures each resource can only have one reservation across all workers
    worker_name = StringField(required=True)  # ensures that each worker can only receive one reservation at a time
    task_ids = ListField()  # notice, this is now a list
    # For backward compatibility
    _ns = StringField(default='reserved_resources')
    meta = {'collection': 'reserved_resources', 'allow_inheritance': False}
The existing indexes would all be removed.
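With resource_id as the primary key, extending an existing reservation becomes a single atomic update. A minimal sketch, assuming the model above; the helper name is illustrative and mongoengine's atomic add_to_set operator is used:

def add_task_to_reservation(resource_id, task_id):
    # Atomically push task_id onto the task_ids list of an existing
    # reservation. No upsert is allowed, so a return of False means
    # no reservation currently exists for this resource.
    updated = ReservedResource.objects(resource_id=resource_id).update_one(
        add_to_set__task_ids=task_id)
    return updated == 1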
Coding Deliverables
- Rewrite the reservation creation code so that it has roughly the following logic:
while True:
    # assume we need to update an existing reservation
    find an existing worker by querying for an existing ReservedResource by the resource_id
    if there is a worker found:
        attempt to add the task_id that needs a reservation to that worker's existing reservation, but do not allow upsert
        assert that the number of documents updated is either 0 or 1; if not, raise a loud exception
        if 1:
            query for the worker_name that goes with the reservation just updated
    else if no worker was found, or if no document was updated:
        # find an available worker, relying on the uniqueness constraint to ensure an atomic operation
        query for the list of workers
        for each worker in the list:
            try:
                create a new ReservedResource(worker_name, resource_id, task_ids=[task_id]).save()
            except ConstraintViolation:
                continue
    if a ReservedResource was updated or created:
        break
    else:
        sleep(0.25)
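A runnable sketch of that loop, assuming the ReservedResource model above, a Worker document with a name field, and mongoengine's NotUniqueError as the constraint-violation exception; the function name and query details are illustrative rather than the final implementation:

import time

from mongoengine import NotUniqueError


def get_worker_for_reservation(resource_id, task_id):
    while True:
        # assume we need to update an existing reservation; upsert is not allowed
        updated = ReservedResource.objects(resource_id=resource_id).update_one(
            add_to_set__task_ids=task_id)
        if updated > 1:
            raise RuntimeError('multiple reservations updated for %s' % resource_id)
        if updated == 1:
            # return the worker name that goes with the reservation just updated
            return ReservedResource.objects.get(resource_id=resource_id).worker_name

        # no existing reservation; find an available worker, relying on the
        # resource_id uniqueness constraint to keep the create atomic
        for worker in Worker.objects():
            try:
                ReservedResource(resource_id=resource_id,
                                 worker_name=worker.name,
                                 task_ids=[task_id]).save()
                return worker.name
            except NotUniqueError:
                continue  # this save lost a race; try the next worker
        # nothing could be reserved this pass; back off briefly and retry
        time.sleep(0.25)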
Testing
This needs to be tested to ensure two main things:
1. That a highly concurrent environment calling reserve resource and release resource maintains correctness. This should be done with a simple concurrency driver left to run in a bunch of processes in a test environment for a few days; a minimal driver sketch follows this list.
2. That the FCFS ordering is strictly preserved even when failures occur.
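A minimal sketch of such a driver, assuming hypothetical reserve_resource and release_resource helpers that wrap the reservation code above; the module name, call signatures, and resource pool are assumptions made for illustration:

import random
import time
from multiprocessing import Process

# Hypothetical helpers wrapping the reservation code; the real entry points may differ.
from concurrency_helpers import reserve_resource, release_resource

RESOURCES = ['repo-%d' % i for i in range(5)]


def hammer(process_number):
    # Repeatedly reserve and release a small pool of resources so that many
    # processes contend for the same ReservedResource documents.
    while True:
        resource_id = random.choice(RESOURCES)
        task_id = '%s-%s' % (process_number, time.time())
        reserve_resource(task_id, resource_id)
        time.sleep(random.uniform(0, 0.05))
        release_resource(task_id)


if __name__ == '__main__':
    processes = [Process(target=hammer, args=(n,)) for n in range(20)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()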
Updated by bmbouter over 9 years ago
- Subject changed from [WIP] As a user I can run multiple pulp_resource_managers concurrently with all of them actively participating to As a user I can run multiple pulp_resource_managers concurrently with all of them actively participating
- Description updated (diff)
Updated by bmbouter over 9 years ago
- Sprint Candidate set to Yes
- Tags deleted (Sprint Candidate)
Updated by sbhawsin over 9 years ago
- Status changed from NEW to ASSIGNED
- Assignee set to sbhawsin
- Platform Release set to master
Updated by bmbouter over 9 years ago
- Blocks Issue #1114: If an instance of pulp_resource_manager dies unexpectedly, Pulp incorrectly tries to "cancel all tasks in its queue" added
Updated by mhrivnak about 9 years ago
- Status changed from ASSIGNED to NEW
- Assignee deleted (bmbouter)
- Platform Release deleted (master)
- Tags Easy Fix added
Updated by sbhawsin almost 9 years ago
- Status changed from NEW to ASSIGNED
- Assignee set to sbhawsin
- Tags deleted (Easy Fix)
Updated by bmbouter over 8 years ago
- Status changed from ASSIGNED to NEW
- Assignee deleted (sbhawsin)
Updated by mhrivnak over 8 years ago
- Status changed from NEW to ASSIGNED
- Assignee set to dkliban@redhat.com
Updated by dalley over 8 years ago
Implementation Plan:
====================
The resource manager entry point - server.async.app - will be modified such that it checks whether an instance of the pulp_resource_manager already exists. If one does not exist, it will start normally. If one does exist, it will enter a loop in which it periodically checks whether the pre-existing resource manager has died, using the same CELERY_CHECK_INTERVAL constant as the Celery process monitor. Should that happen, it will exit the loop and take over for the dead resource manager.
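A rough sketch of that check loop, assuming the active resource manager records a heartbeat in a Worker document with name and last_heartbeat fields; the field names, query, and constant value are assumptions:

import time
from datetime import datetime, timedelta

CELERY_CHECK_INTERVAL = 30  # assumed value; reuse the process monitor's constant


def wait_until_existing_resource_manager_dies():
    # Loop until no other resource manager has heartbeated recently, then
    # return so this process can take over.
    while True:
        cutoff = datetime.utcnow() - timedelta(seconds=CELERY_CHECK_INTERVAL)
        alive = Worker.objects(name__startswith='resource_manager',
                               last_heartbeat__gte=cutoff).count()
        if alive == 0:
            return
        time.sleep(CELERY_CHECK_INTERVAL)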
Updated by dkliban@redhat.com over 8 years ago
The workers collection needs to have a uniqueness constraint added so that 'pulp_resource_manager' can only have one entry in it. This may require adding a new field to the document. The entry point for 'pulp_resource_manager' will then attempt to write to the 'workers' collection. When the write is successful, the celery process will continue loading. However, if an exception about the uniqueness constraint is raised, the pulp_resource_manager will sleep and try again in 30 seconds. This retry logic needs to be added to the correct celery signal handler. Celery emits multiple signals throughout its life cycle. It is necessary to have this handled as early as possible.
Updated by dalley over 8 years ago
- Assignee changed from dkliban@redhat.com to dalley
Updated by bmbouter over 8 years ago
Having one entry for each resource_manager would be good because those entries drive the /status/ API.
Would the code added to the entry point (server.async.app) run at import time? It would be good if no code ran at import time, although this design may require it. We should think about where in server.async.app the halting if-statement could go to avoid having it run at import time.
Updated by dkliban@redhat.com over 8 years ago
A new model called ResourceManagerLock needs to be added. This model should be very similar to the CelerybeatLock model we already have.
As long as celery doesn't mind that a signal handler takes a long time to return, we should be able to add the following logic to the celeryd_after_setup signal handler:
While the ResourceManagerLock has not been acquired, try to acquire it. If successful, return, and the resource_manager runs as intended. If the lock cannot be acquired, update the Worker collection in Mongo for this resource_manager and then sleep for 60 seconds. Try to acquire the lock again in 60 seconds and continue to do so indefinitely until the lock is acquired and the signal handler finally returns.
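A minimal sketch of that handler, assuming a ResourceManagerLock document modelled loosely after CelerybeatLock, mongoengine's NotUniqueError as the failed-acquisition signal, and an assumed Worker document for the heartbeat update; the field names and interval wiring are illustrative:

import time
from datetime import datetime

from celery.signals import celeryd_after_setup
from mongoengine import Document, NotUniqueError, StringField


class ResourceManagerLock(Document):
    # a unique field means only one lock document can ever be inserted
    name = StringField(default='resource_manager_lock', unique=True)
    lock_owner = StringField(required=True)


@celeryd_after_setup.connect
def acquire_resource_manager_lock(sender, instance, **kwargs):
    # Celery does not start consuming tasks until this handler returns, so
    # blocking here keeps a duplicate resource manager idle.
    while True:
        try:
            ResourceManagerLock(lock_owner=sender).save()
            return  # lock acquired; this resource manager runs as intended
        except NotUniqueError:
            # another resource manager holds the lock; record a heartbeat so
            # the /status/ API still sees this instance, then wait and retry
            Worker.objects(name=sender).update_one(
                set__last_heartbeat=datetime.utcnow(), upsert=True)
            time.sleep(60)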
Added by dalley over 8 years ago
Revision 0128ae2c
Added mechanism to prevent two resource managers from being active at once - duplicate resource managers will wait until the original dies off. closes #898
Updated by dalley over 8 years ago
- Status changed from POST to MODIFIED
- % Done changed from 0 to 100
Applied in changeset pulp|0128ae2c99d9d57c4f73bcde52b43e1c81891736.
Updated by semyers over 8 years ago
- Status changed from MODIFIED to CLOSED - CURRENTRELEASE