
Refactor #5701

Updated by dalley over 1 year ago

The current implementation of the "Remove Duplicates" functionality is probably inefficient. It looks like this:

<pre><code class="python">
query_for_repo_duplicates_by_type = defaultdict(lambda: Q())
for item in repository_version.added():
    detail_item = item.cast()

    if detail_item.repo_key_fields == ():
        continue

    unit_q_dict = {
        field: getattr(detail_item, field) for field in detail_item.repo_key_fields
    }
    item_query = Q(**unit_q_dict) & ~Q(pk=detail_item.pk)
    query_for_repo_duplicates_by_type[detail_item._meta.model] |= item_query

for model in query_for_repo_duplicates_by_type:
    _logger.debug(_("Removing duplicates for type: {}".format(model)))
    qs = model.objects.filter(query_for_repo_duplicates_by_type[model])
    # ... the matched duplicates are then removed from the repository version
</code></pre>

While I haven't measured the exact impact, calling item.cast() individually for each item is probably quite expensive. One of the following would likely improve the situation:

Proposal 1:

# Sort the added items into groups based on their pulp_type, which is present on the master Content model.
# Look up the detail content models that the pulp_type strings represent.
# Query the detail content models directly, in bulk, given a list of PKs, instead of calling cast() on each item individually.
# Within each type group, check for duplicates.
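The grouping step above (steps 1 and 3) could be sketched roughly like this; note that group_pks_by_type is a hypothetical helper and the dict-shaped items merely stand in for master Content rows, which already carry pulp_type without a per-item cast():

<pre><code class="python">
from collections import defaultdict


def group_pks_by_type(items):
    """Group content PKs by their pulp_type string (hypothetical helper).

    Each item is a plain dict standing in for a master Content row; the
    pulp_type field is read directly, so no per-item cast() is needed.
    """
    pks_by_type = defaultdict(list)
    for item in items:
        pks_by_type[item["pulp_type"]].append(item["pk"])
    return dict(pks_by_type)
</code></pre>

With Django, each group could then be fetched in a single query, e.g. detail_model.objects.filter(pk__in=pks), replacing N individual cast() calls with one query per content type.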

Proposal 2:

Alternatively, each repository could list all of the content types it supports. That would allow us to skip step 2 above (and possibly step 1 as well), and it would add an extra layer of protection: it would ensure you can't have e.g. file content in an RPM repository, which we can't easily or centrally guarantee otherwise.
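A rough sketch of what declaring supported types could look like; the class names and the CONTENT_TYPES attribute are illustrative assumptions, not existing pulpcore API:

<pre><code class="python">
class Repository:
    """Base repository; subclasses declare the pulp_type strings they accept."""

    CONTENT_TYPES = ()  # hypothetical attribute, not existing pulpcore API

    def validate_content_type(self, pulp_type):
        """Reject content whose type this repository does not support."""
        if pulp_type not in self.CONTENT_TYPES:
            raise ValueError(
                "content type {!r} is not supported by {}".format(
                    pulp_type, type(self).__name__
                )
            )


class RpmRepository(Repository):
    CONTENT_TYPES = ("rpm.package", "rpm.advisory")
</code></pre>

A check like this would run once per incoming content unit, centrally, which is the "extra layer of protection" described above.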