Issue #478

closed

Pulp sync'ing should gracefully handle inability to resolve hostname

Added by jsherril@redhat.com about 9 years ago. Updated almost 5 years ago.

Status:
CLOSED - CURRENTRELEASE
Priority:
Low
Category:
-
Sprint/Milestone:
-
Start date:
Due date:
Estimated time:
Severity:
1. Low
Version:
2.4 Beta
Platform Release:
2.6.0
OS:
Triaged:
Yes
Groomed:
No
Sprint Candidate:
No
Tags:
Pulp 2
Sprint:
Quarter:

Description

++ This bug was initially created as a clone of Bug #1124616 ++

During a sync from a remote repository such as the CDN, if the hostname of the source cannot be resolved, the sync task can become indefinitely stuck and unrecoverable. The scenario in which I encountered this:

For Sync'ing:
Import a manifest
Enable Jboss repository
Start sync of Jboss repository
Pull network cable from box (effectively removing ability to reach DNS servers)
The sync progress bar sat as if progress were being made, but it never changed

I then reconnected the network cable and ran 'katello-service restart', at which point the last node hung while stopping (the same issue you saw). I terminated the service restart, killed the pulp workers with 'kill -9', and was then able to run 'katello-service restart' cleanly, after which I could re-run the sync successfully. Note, for Pulp's sake, that 'katello-service restart' restarts all Pulp services, including workers, resource managers, and celery beat.
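The stall described above can be sketched in miniature. The following is an illustrative toy, not Pulp's actual importer code: the `download`, `sync`, and `dead_dns` names are hypothetical, and the point is only that a loop which treats a DNS resolution failure like any other transient error keeps retrying every item and appears stuck for the whole run.

```python
import socket

def download(url, resolver):
    """Hypothetical stand-in for one file download; raises socket.gaierror
    when the hostname cannot be resolved (e.g. no DNS server reachable)."""
    host = url.split("/")[2]
    resolver(host)  # DNS lookup; raises socket.gaierror on failure
    return b"content"

def sync(urls, resolver, max_attempts=3):
    """Naive loop: every failure is retried the same way, so a permanent
    DNS outage burns max_attempts lookups (each with its own timeout)
    per item before moving on -- nothing ever aborts the task."""
    attempts = 0
    results = []
    for url in urls:
        for _ in range(max_attempts):
            attempts += 1
            try:
                results.append(download(url, resolver))
                break
            except socket.gaierror:
                continue  # DNS failure treated as transient -- the bug
    return results, attempts

def dead_dns(host):
    """Simulates an unreachable DNS server."""
    raise socket.gaierror(socket.EAI_NONAME, "Name or service not known")

# 10 items * 3 attempts = 30 failed lookups; with a multi-second DNS
# timeout and thousands of items, the sync looks frozen for hours.
results, attempts = sync(
    ["http://cdn.example.com/f%d" % i for i in range(10)], dead_dns)
```

With a real 5-second resolution timeout and 10,000 items, this is exactly the 50,000-second worst case rbarlow estimates in comment 1.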

--- Additional comment from RHEL Product and Program Management on 2014-07-29 19:13:32 EDT ---

Since this issue was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from RHEL Product and Program Management on 2014-07-29 19:14:57 EDT ---

Since this issue was entered in Red Hat Bugzilla, the pm_ack has been set to + automatically for the next planned release.

+ This bug was cloned from Bugzilla Bug #1124625 +

Actions #1

Updated by rbarlow about 9 years ago

I suspect that the DNS timeout is happening for every file that the importer tries to download, and the importer doesn't consider this a permanent sort of failure. This could cause the sync to take a very long time. I'm not familiar with what the DNS timeout would be, but even if it were 5 seconds and there were 10,000 items to sync that would be 50,000 seconds (more than half of a day!).

The importer should have a class of errors it considers to be permanent failures that includes DNS timeouts and probably out of disk space errors. Can anyone think of others?
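The classification rbarlow proposes can be sketched as a small predicate. This is a hypothetical helper, not Pulp's actual importer API: it treats DNS resolution failures and out-of-disk-space errors as permanent (abort the sync) while leaving ordinary connection timeouts retryable.

```python
import errno
import socket

# Errno values this sketch assumes should abort the sync outright.
PERMANENT_ERRNOS = {errno.ENOSPC}  # out of disk space

def is_permanent(exc):
    """Return True if the sync should fail fast rather than retry.

    socket.gaierror is a subclass of OSError, so it must be checked
    first to catch DNS resolution failures specifically."""
    if isinstance(exc, socket.gaierror):
        return True  # hostname could not be resolved
    if isinstance(exc, OSError) and exc.errno in PERMANENT_ERRNOS:
        return True
    return False  # e.g. a plain connection timeout: worth retrying
```

A sync loop would call `is_permanent` in its exception handler and raise immediately on a True result instead of retrying the next 10,000 items.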

+ This comment was cloned from Bugzilla #1124625 comment 1 +

Actions #3

Updated by cduryee about 9 years ago

fixed in pulp 2.6.0-0.2.beta

+ This comment was cloned from Bugzilla #1124625 comment 3 +

Actions #4

Updated by pthomas@redhat.com about 9 years ago

verified

[root@cloud-qe-4 ~]# pulp-admin rpm repo sync run --repo-id zoo
--------------------------------------------------------------------
Synchronizing Repository [zoo]
--------------------------------------------------------------------

This command may be exited via ctrl+c without affecting the request.

Downloading metadata...
[/]
... failed

('Connection aborted.', error(110, 'Connection timed out'))

Task Failed

Importer indicated a failed response

+ This comment was cloned from Bugzilla #1124625 comment 4 +

Actions #8

Updated by rbarlow almost 9 years ago

  • Status changed from 6 to CLOSED - CURRENTRELEASE
Actions #9

Updated by ipanova@redhat.com about 8 years ago

  • Private changed from Yes to No
  • Severity set to 1. Low
Actions #12

Updated by bmbouter almost 5 years ago

  • Tags Pulp 2 added
