Content with duplicate repo_key_fields can be added to a repo version
Steps to reproduce:
1. Create two Content with the same repo_key_fields
2. Add them to a repository with one single call
You'll end up with both in the repository, which is not valid. The problem is that the code does not check the set of content being added to a repo version for duplicate repo keys. I think the solution is to fail if content with duplicate repo_key_fields is being added to a repo version.
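The proposed failure behavior could be sketched like this (a minimal standalone sketch; the helper name and the dict-based content units are hypothetical, not pulpcore API):

```python
def validate_no_duplicate_repo_keys(content_units, key_fields):
    """Raise ValueError if two units collide on the given repo_key_fields."""
    seen = set()
    for unit in content_units:
        # Build the repo key as a tuple of the unit's key-field values.
        key = tuple(unit[f] for f in key_fields)
        if key in seen:
            raise ValueError(f"Duplicate content for repo_key_fields {key!r}")
        seen.add(key)

# Two units with the same relative_path would be rejected up front,
# instead of both landing in the repository version.
```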
Another option for where to implement it: given we have #3541, we could offer the entire repo_key check as a generic implementation that plugins may mix in, derive from, or otherwise use in their own code. This would simplify the add_content implementation in RepositoryVersion.
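Such a generic, reusable check might look roughly like the following (purely illustrative; the mixin and the FileContent class here are hypothetical stand-ins, assuming each content model declares its own repo_key_fields):

```python
class RepoKeyMixin:
    """Hypothetical mixin: derive a unit's repo key from declared key fields."""

    repo_key_fields = ()

    def repo_key(self):
        # The repo key is the tuple of this unit's key-field values.
        return tuple(getattr(self, f) for f in self.repo_key_fields)


class FileContent(RepoKeyMixin):
    """Toy content type keyed only on relative_path, like pulp_file."""

    repo_key_fields = ("relative_path",)

    def __init__(self, relative_path, digest):
        self.relative_path = relative_path
        self.digest = digest
```

With a shared repo_key() like this, the duplicate check in add_content reduces to comparing tuples, regardless of which plugin defined the content type.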
- Status changed from ASSIGNED to NEW
- Assignee deleted
- Sprint deleted
I started working on this, but I want to pick it up after the 3.0 branch point (or release). I believe this is reasonable for a few reasons:
- It's a bugfix, so it can land after the branch point
- It's not a blocker
- It's not a very common situation
- It should come with a test, which takes a bit more time to create
- There is other higher-priority work at the moment
I'm also removing it from the sprint for the reasons above.
#17 Updated by fabricio.aguiar about 2 months ago
It seems the code is already prepared to replace content based on repo_key_fields. See remove_duplicates (https://github.com/pulp/pulpcore/blob/master/pulpcore/plugin/repo_version_utils.py#L16) and test_second_unit_replaces_the_first (https://github.com/pulp/pulp_file/blob/master/pulp_file/tests/functional/api/test_crud_content_unit.py#L264).
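The "second unit replaces the first" behavior can be sketched as a last-writer-wins dedup over the repo key (a simplified standalone sketch; the function name mirrors pulpcore's remove_duplicates but the dict-based units are an assumption, not its real signature):

```python
def remove_duplicates(units, key_fields):
    """Keep only the most recently added unit for each repo key."""
    kept = {}
    for unit in units:
        # Later units overwrite earlier ones that share the same key,
        # so the second unit replaces the first.
        kept[tuple(unit[f] for f in key_fields)] = unit
    return list(kept.values())
```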
In : repo = FileRepository.objects.first()
In : repo.content.all()
Out: <QuerySet [<Content (pulp_type=file.file): pk=735cbb5a-494c-4f88-9e10-2f85a2a1fb82>, <Content (pulp_type=file.file): pk=208f273c-2035-4863-834d-ecdd98789f55>]>
In : [o.content for o in RepositoryVersion.objects.all()]
Out: [<QuerySet >, <QuerySet [<Content (pulp_type=file.file): pk=208f273c-2035-4863-834d-ecdd98789f55>]>, <QuerySet [<Content (pulp_type=file.file): pk=735cbb5a-494c-4f88-9e10-2f85a2a1fb82>]>]