Story #7791


As a user I can re-upload artifacts if the file has gone missing or corrupted

Added by ipanova@redhat.com over 3 years ago. Updated about 2 years ago.

Status: CLOSED - DUPLICATE
Priority: Normal
Assignee: -
Category: -
Sprint/Milestone: -
Start date:
Due date:
% Done: 0%
Estimated time:
Platform Release:
Groomed: No
Sprint Candidate: No
Tags:
Sprint:
Quarter:

Description

Ticket moved to GitHub: pulp/pulpcore#1941 (https://github.com/pulp/pulpcore/issues/1941)


If an artifact has gone missing or become corrupted (bit rot), there is no way to re-upload it back to the filesystem:

  1. upload an artifact
  2. rm /var/lib/pulp/artifacts/<some_artifact> or corrupt it
  3. upload the same artifact
  4. the request fails with a 400 error: { "non_field_errors": [ "sha384 checksum must be unique." ] }
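
For reference, a minimal reproduction sketch against the artifacts API; the base URL, credentials and file name below are assumptions for illustration:

# Reproduction sketch; assumes a local Pulp API at this base URL with
# admin/password credentials and some file to upload (example.tar).
import requests

ARTIFACTS = "http://localhost:24817/pulp/api/v3/artifacts/"
AUTH = ("admin", "password")

with open("example.tar", "rb") as f:
    first = requests.post(ARTIFACTS, files={"file": f}, auth=AUTH)
print(first.status_code)   # 201: artifact created, file stored under /var/lib/pulp

# ... remove or corrupt the stored file under /var/lib/pulp/artifacts/ ...

with open("example.tar", "rb") as f:
    second = requests.post(ARTIFACTS, files={"file": f}, auth=AUTH)
print(second.status_code)  # 400
print(second.json())       # {"non_field_errors": ["sha384 checksum must be unique."]}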

While we have a /repair/ endpoint, it will not work for operations where the artifact has no RemoteArtifact (i.e. there is no remote source to re-download the file from).
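
Kicking off the existing repair task looks roughly like this; the global /repair/ path follows the mention above, and the verify_checksums field is an assumption about the request body:

# Sketch of triggering the repair task (same assumed base URL and credentials
# as in the reproduction above); verify_checksums is an assumed body field.
import requests

resp = requests.post(
    "http://localhost:24817/pulp/api/v3/repair/",
    json={"verify_checksums": True},
    auth=("admin", "password"),
)
print(resp.json())  # contains a task href to poll until the repair finishes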

Problem statements:

1. If a file is missing, it is impossible to upload a new one

  • when saving the artifact, add a try/except and look for the existing one (see the sketch after this list)
  • verify whether storage_path points to an existing file; if not, update it with the newly uploaded bits
  • issue a 400 due to the duplicated artifact, but in addition return the href of the existing artifact

2. If a file is corrupted, it is impossible to re-upload and replace it with a valid one

  • Option 1: running repair can find corrupted files. It should remove the corrupted file to get back to the case outlined in 1.
    • Repair can be run against a specific repo version; the functionality could potentially be extended to repair a specific artifact/content.
  • Option 2: recalculate the checksum on every upload attempt.
    • This might be a lot of overhead for a rare failure.
  • Option 3: introduce a flag which will be specified at upload time, e.g. --repair, --force, or --validate-checksum. It will replace the broken bits if the checksum of the newly uploaded file matches a checksum in the DB. The recalculation of the checksum then happens on demand rather than for every upload attempt.
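
A rough sketch of the save-time handling described in point 1, assuming Django-style models and local filesystem storage; the function and variable names are illustrative, not pulpcore code:

# Illustrative sketch only; save_or_restore, new_artifact and uploaded_path
# are made-up names, not pulpcore code.
import os

from django.core.files import File
from django.db import IntegrityError, transaction


def save_or_restore(new_artifact, uploaded_path):
    try:
        with transaction.atomic():
            new_artifact.save()
        return new_artifact
    except IntegrityError:
        # A row with the same digest already exists. Looking it up by digest
        # guarantees the DB checksum matches the newly uploaded file; the DB
        # checksum is only compared, never updated.
        existing = type(new_artifact).objects.get(sha256=new_artifact.sha256)
        # Assumes local filesystem storage: if the stored file is gone,
        # restore it from the newly uploaded bits.
        if not os.path.exists(existing.file.path):
            with open(uploaded_path, "rb") as f:
                existing.file.save(existing.file.name, File(f), save=True)
        # Report the existing artifact, e.g. return its href alongside the
        # 400, or succeed outright (the exact behaviour is what this ticket debates).
        return existing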

Related issues

Related to Pulp - Story #7114: Improve Artifact upload experience (CLOSED - DUPLICATE)

Copied from Container Support - Story #7790: As a user I can re-upload artifacts if the file has gone missing or corrupted (CLOSED - DUPLICATE)

#1

Updated by ipanova@redhat.com over 3 years ago

  • Copied from Story #7790: As a user I can re-upload artifacts if the file has gone missing or corrupted added
#2

Updated by ipanova@redhat.com over 3 years ago

  • Description updated (diff)
#3

Updated by ttereshc over 3 years ago

  1. If a file is missing it is impossible to upload a new one
    • when saving artifact add try/except, look for existing one
    • verify whether storage_path is an existing location if not update it with the newly uploaded bits

I would explicitly mention here that the checksums should match, the one in the DB and the checksum of the newly uploaded file (the checksum in the DB should not be updated, just used for comparison).

  1. If a file is corrupted it is impossible to re-upload and replace it with a valid one

How about option 3 (a more explicit version of option 2): introduce a flag which will be specified at upload time, e.g. --repair, --force, or --validate-checksum.
It will replace the broken bits if the checksum of the newly uploaded file matches a checksum in the DB.
The recalculation of the checksum will happen on demand this way and not for every upload attempt.
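
Purely to make the flag idea concrete from the client side: none of these flags exist today, so the repair field below is hypothetical (same assumed base URL and credentials as in the description):

# Hypothetical illustration of the proposed flag; the "repair" form field does
# not exist in the current API and is shown only to make the idea concrete.
import requests

with open("example.tar", "rb") as f:
    resp = requests.post(
        "http://localhost:24817/pulp/api/v3/artifacts/",
        files={"file": f},
        data={"repair": "true"},  # hypothetical (--repair / --force / --validate-checksum)
        auth=("admin", "password"),
    )
# With the flag set, the server would recompute the checksum and, on a match
# with the DB record, replace the broken bits instead of returning a 400.
print(resp.status_code)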

#4

Updated by ipanova@redhat.com over 3 years ago

ttereshc wrote:

  1. If a file is missing it is impossible to upload a new one
    • when saving artifact add try/except, look for existing one
    • verify whether storage_path is an existing location if not update it with the newly uploaded bits

I would explicitly mention here that the checksums should match, the one in the DB and the checksum of the newly uploaded file (the checksum in the DB should not be updated, just used for comparison).

I was imagining that artifact._init_and_validate would calculate all the checksums of the upload along the way, and when saving it, in the case of an existing artifact, we would handle the IntegrityError.

  1. If a file is corrupted it is impossible to re-upload and replace it with a valid one

How about option 3 (a more explicit version of option 2): introduce a flag which will be specified at upload time, e.g. --repair, --force, or --validate-checksum.
It will replace the broken bits if the checksum of the newly uploaded file matches a checksum in the DB.
The recalculation of the checksum will happen on demand this way and not for every upload attempt.

This is a good compromise for the user experience improvement.

#5

Updated by ttereshc over 3 years ago

ipanova@redhat.com wrote:

ttereshc wrote:

  1. If a file is missing it is impossible to upload a new one
    • when saving artifact add try/except, look for existing one
    • verify whether storage_path is an existing location if not update it with the newly uploaded bits

I would explicitly mention here that the checksums should match, the one in the DB and the checksum of the newly uploaded file (the checksum in the DB should not be updated, just used for comparison).

I was imagining that artifact._init_and_validate would calculate all the checksums of the upload along the way, and when saving it, in the case of an existing artifact, we would handle the IntegrityError.

Right, it's a part of the try/except you mention, ok. Disregard my comment then.

#6

Updated by ttereshc over 3 years ago

After some offline discussion, it turned out that this ticket implies that the upload of an existing artifact (which is good and NOT corrupted or missing) will also return the existing artifact and won't fail.
This directly relates to #7114.
If it's done this way, we ought to be consistent across all resources. At least, content upload needs to be adjusted as well.

We also need to be sure that it's only affecting upload workflows and not the artifact creation during sync.

#7

Updated by ttereshc over 3 years ago

  • Related to Story #7114: Improve Artifact upload experience added
#8

Updated by daviddavis over 3 years ago

ttereshc wrote:

After some offline discussion, it turned out that this ticket implies that the upload of an existing artifact (which is good and NOT corrupted or missing) will also return the existing artifact and won't fail.
This directly relates to #7114.
If it's done this way, we ought to be consistent across all resources. At least, content upload needs to be adjusted as well.

We also need to be sure that it's only affecting upload workflows and not the artifact creation during sync.

#7114 asks that the pulp href be returned if the artifact already exists. It does not ask that the endpoint should succeed and no longer fail.

Also, IMO this proposal would break semantic versioning. Changing the response when a user uploads the exact same artifact twice would be a backwards incompatible change.

#9

Updated by daviddavis over 3 years ago

How about option 3 (a more explicit version of option 2): introduce a flag which will be specified at upload time, e.g. --repair, --force, or --validate-checksum.

To me this makes the most sense. It's backwards compatible and is clear to the user what's happening. Fixing an artifact by calling the create endpoint without a special param seems a bit too strange/magical/clever IMO.

#10

Updated by ipanova@redhat.com about 3 years ago

  • Description updated (diff)
#11

Updated by ipanova@redhat.com about 3 years ago

daviddavis wrote:

How about option 3 (a more explicit version of option 2): introduce a flag which will be specified at upload time, e.g. --repair, --force, or --validate-checksum.

To me this makes the most sense. It's backwards compatible and is clear to the user what's happening. Fixing an artifact by calling the create endpoint without a special param seems a bit too strange/magical/clever IMO.

The caveat of this option is that if a client such as podman is trying to get content into Pulp, there is no way to specify this option on the client side, since the Pulp upload layer is in the middle of the process. I am not advocating adjusting the whole proposal based on this specific use case, but we'd need to either hardcode the 'force' flag in the plugin code during the upload or come up with something else to enable the container plugin to deal with the corrupted content.

#12

Updated by daviddavis about 3 years ago

ipanova@redhat.com wrote:

The caveat of this option is that if a client such as podman is trying to get content into Pulp, there is no way to specify this option on the client side, since the Pulp upload layer is in the middle of the process. I am not advocating adjusting the whole proposal based on this specific use case, but we'd need to either hardcode the 'force' flag in the plugin code during the upload or come up with something else to enable the container plugin to deal with the corrupted content.

That's a good point. What about introducing a new endpoint (smart_update?)?

Also, just to call it out: another option is to bump the pulpcore major version. We have some other changes in our queue that would benefit from being able to make some backwards incompatible changes to the API (e.g. https://pulp.plan.io/issues/7762)

#13

Updated by pulpbot about 2 years ago

  • Description updated (diff)
  • Status changed from NEW to CLOSED - DUPLICATE
