Story #2409

As an API user, a call to update a Publisher generates a Task

Added by ttereshc almost 3 years ago. Updated 6 months ago.

Status:
MODIFIED
Priority:
Normal
Category:
-
Sprint/Milestone:
Start date:
Due date:
% Done:

100%

Platform Release:
Blocks Release:
Backwards Incompatible:
No
Groomed:
Yes
Sprint Candidate:
Yes
Tags:
QA Contact:
Complexity:
Smash Test:
Verified:
No
Verification Required:
No
Sprint:
Sprint 21

Description

We need to create an update task which can live in pulp.app.tasks.publisher.

Task name in Pulp 2: `pulp.server.tasks.repository.distributor_update`.
Task implementation in Pulp 2: https://github.com/pulp/pulp/blob/3.0-dev/server/pulp/server/controllers/distributor.py#L184
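The shape such a task might take can be sketched as follows. This is a minimal, illustrative version only: the `Publisher` class, its `config` attribute, and the `update` function are hypothetical stand-ins, not the actual Pulp 3 models or API.

```python
class Publisher:
    """Stand-in for a Pulp 3 Publisher model (illustrative only)."""
    def __init__(self, name, **config):
        self.name = name
        self.config = config

    def save(self):
        pass  # a real model would persist to the database here


def update(publisher, data):
    """Task body: apply already-validated field changes and save.

    A function of roughly this shape could live in pulp.app.tasks.publisher
    and be dispatched by the viewset; here it is a plain function so the
    sketch is self-contained.
    """
    for field, value in data.items():
        publisher.config[field] = value
    publisher.save()
    return publisher


pub = Publisher("demo", relative_path="old")
update(pub, {"relative_path": "new"})
print(pub.config["relative_path"])  # new
```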


Checklist


Related issues

Related to Pulp - Task #2380: Create a redmine task for each 2.y celery task to be converted to 3.0 CLOSED - CURRENTRELEASE

Associated revisions

Revision e81d0a63 View on GitHub
Added by amacdona@redhat.com over 2 years ago

Dispatch a task to update publishers.

closes #2409

History

#1 Updated by ttereshc almost 3 years ago

  • Related to Task #2380: Create a redmine task for each 2.y celery task to be converted to 3.0 added

#2 Updated by ttereshc almost 3 years ago

  • Tags Pulp 3 added

#3 Updated by mhrivnak almost 3 years ago

  • Groomed changed from No to Yes
  • Sprint Candidate changed from No to Yes

#4 Updated by mhrivnak almost 3 years ago

This may be trickier than it sounds. I think we still want DRF to figure out what changes to make based on the request body, but we want to defer that. There's a logical, and potentially important, difference between:

1. use DRF to figure out what changes to make
2. queue a task to make those changes later

and

1. queue a task with the response data
2. later, use DRF to figure out what changes to make, and make those changes immediately

The difference lies in the potential for the object to change between the time the request is made and the time the task actually runs.

As such, this may be difficult to do without also considering the API integration. It might make sense to do both at the same time.
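The ordering difference above can be shown with a toy example (the dicts and the "snapshot" approach are illustrative stand-ins, not DRF itself). If the full merged result is computed at request time and applied later, a concurrent change made before the task runs gets clobbered; merging at task-run time preserves it.

```python
# Shared object state at the time the update request arrives.
config = {"url": "http://old", "retries": 3}

# The request asks to change only one field.
request_body = {"retries": 5}

# Approach A: compute the merged result now, apply it when the task runs.
snapshot = {**config, **request_body}

# Meanwhile, another actor changes the object before the task runs.
config["url"] = "http://new"

# A applies the stale snapshot, silently reverting the concurrent change:
result_a = snapshot                    # url is back to 'http://old'

# Approach B: queue the raw request body and merge at task-run time:
result_b = {**config, **request_body}  # url stays 'http://new'

print(result_a["url"], result_b["url"])
```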

#5 Updated by mhrivnak over 2 years ago

Given the challenge with the above, I wonder if we should allow updating of these configs via the REST API synchronously, and just expect anything that uses the config to be ok that it changes at any time. The tasks that use these configs will naturally want to load them just once anyway.

#6 Updated by jortel@redhat.com over 2 years ago

mhrivnak wrote:

Given the challenge with the above, I wonder if we should allow updating of these configs via the REST API synchronously, and just expect anything that uses the config to be ok that it changes at any time. The tasks that use these configs will naturally want to load them just once anyway.

Makes sense to me. I cannot think of any reason that updates to the importer/publisher resource would need to be long running or otherwise benefit from being asynchronous. Even in the case of DELETE. If an importer is deleted while there are queued sync tasks, it seems reasonable for those tasks to fail when resources cannot be fetched from the DB. As a user, I would expect that. On the flip side, if the DELETE is queued behind several sync tasks, the system gets tied up syncing repositories that are intended to be deleted.

#7 Updated by bmbouter over 2 years ago

After some good discussion we determined this will be done asynchronously. Info on that discussion can be seen here: https://www.redhat.com/archives/pulp-dev/2017-May/msg00014.html

I also believe we determined the user supplied data will be validated in the API using the serializer prior to dispatching the task. The update will apply when the task runs, and any issues that arise will be captured by the normal task failure handlers.
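The validate-first, apply-later flow described here can be sketched as follows. `PublisherSerializer`, `queued`, and `update_view` are toy stand-ins for the DRF serializer, task queue, and viewset, named here only for illustration.

```python
class PublisherSerializer:
    """Toy stand-in for a DRF serializer (not the real Pulp serializer)."""
    fields = {"relative_path": str, "retries": int}

    def __init__(self, data):
        self.initial = data
        self.validated_data = {}

    def is_valid(self):
        # Reject unknown fields and wrong types; collect the rest.
        for key, value in self.initial.items():
            expected = self.fields.get(key)
            if expected is None or not isinstance(value, expected):
                return False
            self.validated_data[key] = value
        return True


queued = []  # stand-in for the task queue


def update_view(request_data):
    """Validate synchronously in the API layer, then dispatch the task."""
    serializer = PublisherSerializer(request_data)
    if not serializer.is_valid():
        raise ValueError("400 Bad Request")
    # Only pre-validated data is handed to the deferred task.
    queued.append(serializer.validated_data)
    return "202 Accepted"


status = update_view({"retries": 5})
print(status, queued)  # 202 Accepted [{'retries': 5}]
```

Any failure while the task itself runs would then surface through the normal task failure handling, as the comment above describes.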

#8 Updated by bmbouter over 2 years ago

  • Tracker changed from Task to Story
  • Subject changed from Convert celery task repository.distributor_update to Pulp 3 to As an API user, a call to update a Publisher generates a Task
  • Description updated (diff)

Rewriting based on similarity to issue 2756 and discussion on the mailing list: https://www.redhat.com/archives/pulp-dev/2017-May/msg00014.html

#9 Updated by mhrivnak over 2 years ago

  • Sprint/Milestone set to 39

#10 Updated by amacdona@redhat.com over 2 years ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to amacdona@redhat.com

#11 Updated by mhrivnak over 2 years ago

  • Sprint/Milestone changed from 39 to 40

#12 Updated by mhrivnak over 2 years ago

  • Status changed from ASSIGNED to POST

#13 Updated by amacdona@redhat.com over 2 years ago

  • Status changed from POST to MODIFIED
  • % Done changed from 0 to 100

#14 Updated by bmbouter over 1 year ago

  • Sprint set to Sprint 21

#15 Updated by bmbouter over 1 year ago

  • Sprint/Milestone deleted (40)

#16 Updated by daviddavis 6 months ago

  • Sprint/Milestone set to 3.0

#17 Updated by bmbouter 6 months ago

  • Tags deleted (Pulp 3)
