Issue #8565

Uploading container returns 500

Added by spredzy 6 months ago. Updated 5 months ago.

Status: CLOSED - CURRENTRELEASE
Priority: Normal
Severity: 2. Medium
Triaged: No
Groomed: No
Sprint Candidate: No
Sprint: Sprint 94

Description

Depending on the number of blobs a container consists of, pulp-container will return a 500.

A simple upload - Everything works fine

#> podman push --tls-verify=false 192.168.121.209/alpine:latest
Getting image source signatures
Copying blob f4666769fca7 done
Copying config b14afc6dfb done
Writing manifest to image destination
Storing signatures

A container with more layers - a 500 is returned


#> podman push --tls-verify=false 192.168.121.209/platformtest:latest
Getting image source signatures
Copying blob 37842838092c done
Copying blob 98a5965029a0 [--------------------------------------] 8.0b / 47.1MiB
Copying blob 50644c29ef5a done
Copying blob 912e10e7963c done
Copying blob 4150c4f2e6df done
Error: error copying image to the remote destination: Error writing blob: Error initiating layer upload to /v2/platformtest/blobs/uploads/ in 192.168.121.209: received unexpected HTTP status: 500 Internal Server Error

Logs present on the pulp server:

Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: pulp [None]: django.request:ERROR: Internal Server Error: /v2/platformtest/blobs/uploads/                                                
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: Traceback (most recent call last):                                                                                                       
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:   File "/usr/lib/python3.6/site-packages/pulp_container/app/registry_api.py", line 318, in get_dr_push                                   
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:     distribution = models.ContainerDistribution.objects.get(base_path=path)                                                              
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:   File "/usr/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method                                        
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:     return getattr(self.get_queryset(), name)(*args, **kwargs)                                                                           
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:   File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 408, in get                                                    
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:     self.model._meta.object_name                                                                                                         
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: pulp_container.app.models.ContainerDistribution.DoesNotExist: ContainerDistribution matching query does not exist.                       
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: During handling of the above exception, another exception occurred:                                                                      
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: Traceback (most recent call last):                                                                                                       
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:   File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute                                              
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]:     return self.cursor.execute(sql, params)                                                                                              
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: psycopg2.IntegrityError: duplicate key value violates unique constraint "core_repository_name_key"                                       
Apr 15 08:43:23 localhost.localdomain gunicorn[37060]: DETAIL:  Key (name)=(platformtest) already exists. 
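The traceback shows a classic check-then-create race: each concurrent blob upload first runs `ContainerDistribution.objects.get(base_path=path)`, misses, and then tries to create the backing repository, so whichever worker loses the race hits the unique constraint and the view returns a 500. A minimal, Django-free simulation of that pattern (the `Store` class and all names here are illustrative, not pulp_container code):

```python
# Simulation of the check-then-create race seen in the traceback.
# Two concurrent gunicorn workers both miss the initial lookup, then
# both try to create a repository row with the same unique name.

class IntegrityError(Exception):
    """Stand-in for psycopg2.IntegrityError (duplicate key)."""

class Store:
    """Toy table with a unique 'name' column."""
    def __init__(self):
        self._rows = {}

    def get(self, name):
        if name not in self._rows:
            raise KeyError(name)  # analogous to DoesNotExist
        return self._rows[name]

    def create(self, name):
        if name in self._rows:    # the unique constraint
            raise IntegrityError(f"Key (name)=({name}) already exists.")
        self._rows[name] = {"name": name}
        return self._rows[name]

def naive_get_or_create(store, name, missed_lookup):
    # Both workers performed the lookup BEFORE either created the row,
    # so both believe the repository is missing and both call create().
    if missed_lookup:
        return store.create(name)
    return store.get(name)

store = Store()
naive_get_or_create(store, "platformtest", missed_lookup=True)      # worker 1: ok
try:
    naive_get_or_create(store, "platformtest", missed_lookup=True)  # worker 2
except IntegrityError as exc:
    print(f"500 Internal Server Error: {exc}")
```

A single-blob push rarely triggers this because only one upload request races for the creation; a multi-layer push fires several blob-upload requests at once against a not-yet-existing repository.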

Associated revisions

Revision 1bc6e0fe View on GitHub
Added by mdellweg 6 months ago

Fix a race while creating push repositories

This bug is highly reproducible when pushing multilayer images to a not yet existing repository.

fixes #8565 https://pulp.plan.io/issues/8565

History

#1 Updated by mdellweg 6 months ago

My take: it is a race condition between several API processes attempting to create the distribution and repository at the same time. We should be able to catch the IntegrityError and re-fetch the objects from the database.
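The retry shape described here (treat the IntegrityError as "another worker created it first" and fall back to fetching) is the standard fix for this race; in Django it is essentially what `QuerySet.get_or_create` does internally. A self-contained sketch under those assumptions - the names are illustrative, and the actual fix lives in pulp_container (commit 1bc6e0fe):

```python
# Sketch of the proposed fix: losing the create race is not an error.

class IntegrityError(Exception):
    """Stand-in for psycopg2.IntegrityError (duplicate key)."""

class DoesNotExist(Exception):
    """Stand-in for Model.DoesNotExist."""

class RepoTable:
    """Toy table with a unique 'name' column."""
    def __init__(self):
        self._rows = {}

    def get(self, name):
        try:
            return self._rows[name]
        except KeyError:
            raise DoesNotExist(name)

    def create(self, name):
        if name in self._rows:
            raise IntegrityError(name)
        self._rows[name] = {"name": name}
        return self._rows[name]

def get_or_create_repo(table, name):
    """Race-safe get-or-create: catch the duplicate key and re-fetch."""
    try:
        return table.get(name)
    except DoesNotExist:
        pass
    try:
        return table.create(name)
    except IntegrityError:
        # A concurrent worker created the row between our get() and
        # create(); it exists now, so re-fetch instead of raising a 500.
        return table.get(name)

table = RepoTable()
table.create("platformtest")  # simulate the other worker winning the race
repo = get_or_create_repo(table, "platformtest")
print(repo["name"])           # no 500: the existing row is returned
```

In the real code the create would also need to run in its own transaction (or a savepoint), since PostgreSQL aborts the current transaction after an IntegrityError.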

#2 Updated by mdellweg 6 months ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to mdellweg
  • Sprint set to Sprint 94

#3 Updated by pulpbot 6 months ago

  • Status changed from ASSIGNED to POST

#4 Updated by mdellweg 6 months ago

  • Status changed from POST to MODIFIED

#5 Updated by ipanova@redhat.com 6 months ago

  • Sprint/Milestone set to 2.5.2

#6 Updated by ipanova@redhat.com 6 months ago

  • Sprint/Milestone deleted (2.5.2)

#7 Updated by ipanova@redhat.com 6 months ago

  • Sprint/Milestone set to 2.6.0

#8 Updated by pulpbot 5 months ago

  • Status changed from MODIFIED to CLOSED - CURRENTRELEASE
