
Issue #5173 (closed)

Should handle "429" response code appropriately e.g. from Quay.io

Added by twaugh over 5 years ago. Updated about 5 years ago.

Status: CLOSED - CURRENTRELEASE
Priority: Normal
Assignee:
Start date:
Due date:
Estimated time:
Severity: 2. Medium
Version - Nectar:
Platform Release: 2.21.0
Target Release - Nectar:
OS:
Triaged: Yes
Groomed: No
Sprint Candidate: No
Tags: Pulp 2
Sprint: Sprint 56
Quarter:
Description

When syncing from a registry which applies rate-limiting to requests, Pulp fails to handle the 429 response code ("Too many requests") and the sync task fails.

A better behaviour would be to back off on seeing a 429, then retry.

In the particular instance of Quay.io, request limiting is applied per client IP address per second.
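The back-off-and-retry behaviour suggested above can be sketched as follows. This is an illustration only, not the actual Pulp/nectar code path; the `fetch` callable and helper name are hypothetical, and injecting `fetch`/`sleep` just keeps the sketch testable:

```python
import time


def get_with_backoff(fetch, url, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fetch(url) and retry with exponential back-off on HTTP 429.

    Any other status is returned (or raised) to the caller unchanged.
    """
    for attempt in range(max_attempts):
        resp = fetch(url)
        if resp.status_code != 429:
            return resp
        # Quay.io limits requests per client IP per second, so even a
        # short exponential back-off (1s, 2s, 4s, ...) is usually enough.
        sleep(base_delay * 2 ** attempt)
    raise IOError("Too Many Requests: gave up after %d attempts" % max_attempts)
```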

Related: DELIVERY-7214

2019-07-23 03:34:59 +0000 [ERROR   ] Pulp task [0d11937c-f11a-4ba5-b082-245febe02e1c] failed: PLP0000: Too Many Requests:
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 367, in trace_task
      R = retval = fun(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/pulp/server/async/tasks.py", line 529, in __call__
      return super(Task, self).__call__(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/pulp/server/async/tasks.py", line 107, in __call__
      return super(PulpTask, self).__call__(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 622, in __protected_call__
      return self.run(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/pulp/server/controllers/repository.py", line 769, in sync
      sync_report = sync_repo(transfer_repo, conduit, call_config)
    File "/usr/lib/python2.7/site-packages/pulp/server/async/tasks.py", line 737, in wrap_f
      return f(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/pulp_docker/plugins/importers/importer.py", line 85, in sync_repo
      return self.sync_step.process_lifecycle()
    File "/usr/lib/python2.7/site-packages/pulp/plugins/util/publish_step.py", line 572, in process_lifecycle
      super(PluginStep, self).process_lifecycle()
    File "/usr/lib/python2.7/site-packages/pulp/plugins/util/publish_step.py", line 163, in process_lifecycle
      step.process()
    File "/usr/lib/python2.7/site-packages/pulp/plugins/util/publish_step.py", line 256, in process
      self._process_block()
    File "/usr/lib/python2.7/site-packages/pulp/plugins/util/publish_step.py", line 303, in _process_block
      self.process_main()
    File "/usr/lib/python2.7/site-packages/pulp_docker/plugins/importers/sync.py", line 224, in process_main
      available_tags = self.parent.index_repository.get_tags()
    File "/usr/lib/python2.7/site-packages/pulp_docker/plugins/registry.py", line 473, in get_tags
      headers, tags = self._get_path(link)
    File "/usr/lib/python2.7/site-packages/pulp_docker/plugins/registry.py", line 528, in _get_path
      self._raise_path_error(report)
    File "/usr/lib/python2.7/site-packages/pulp_docker/plugins/registry.py", line 550, in _raise_path_error
      raise IOError(report.error_msg)
  IOError: Too Many Requests


Actions #1

Updated by ipanova@redhat.com over 5 years ago

  • Description updated (diff)
Actions #2

Updated by ipanova@redhat.com over 5 years ago

  • Tags Pulp 2 added
Actions #3

Updated by ipanova@redhat.com over 5 years ago

If hitting a 429, check the Retry-After response header, which specifies how long to wait before retrying.
If the registry does not say how long to wait, use exponential backoff: http://docs.celeryproject.org/en/latest/userguide/tasks.html#Task.retry_backoff
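That suggestion can be sketched as a small helper (a hypothetical name, not the code that eventually landed): honour Retry-After when the registry sends it, otherwise fall back to exponential back-off:

```python
def retry_delay(response, attempt, base=1.0, cap=60.0):
    """Seconds to wait before retrying a 429 response.

    Uses the Retry-After header when present, otherwise exponential
    back-off (base * 2**attempt), capped at `cap` seconds either way.
    """
    retry_after = response.headers.get("Retry-After")
    if retry_after is not None:
        try:
            return min(float(retry_after), cap)
        except ValueError:
            pass  # Retry-After may be an HTTP-date; ignore it and back off
    return min(base * 2 ** attempt, cap)
```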

Actions #4

Updated by twaugh over 5 years ago

I don't think Quay.io gives any Retry-After response header.

Actions #5

Updated by ipanova@redhat.com over 5 years ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to twaugh
  • Triaged changed from No to Yes

Added by twaugh over 5 years ago

Revision d94087de | View on GitHub

Always retry on 429 response, even without a retry-after header

Requests with 429 responses ('Too many requests') are only retried by default if the retry-after header is present.

Instead, always retry these requests. This fixes interoperability with Quay.io, which sends 429 responses without a retry-after header.

Fixes #5173.

Signed-off-by: Tim Waugh
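The actual fix is in nectar's retry logic, but for illustration, equivalent behaviour can be configured in plain `requests` via urllib3's `Retry` class: `status_forcelist` forces a retry on 429 even when no Retry-After header accompanies it, with `backoff_factor` supplying the delay in that case (Retry-After is still honoured when present):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 5 times on 429, sleeping backoff_factor * 2**(n-1) seconds
# between attempts when the server does not send a Retry-After header.
retry = Retry(total=5, status_forcelist=[429], backoff_factor=1)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
```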

Actions #7

Updated by ipanova@redhat.com over 5 years ago

  • Project changed from Docker Support to Nectar
  • Status changed from ASSIGNED to MODIFIED
Actions #8

Updated by ipanova@redhat.com over 5 years ago

  • Sprint set to Sprint 56
Actions #9

Updated by twaugh over 5 years ago

Actions #10

Updated by dalley over 5 years ago

  • Platform Release set to 2.21.0
Actions #12

Updated by dalley about 5 years ago

  • Status changed from MODIFIED to CLOSED - CURRENTRELEASE
