Issue #469
closed
content sync via authenticated proxy using digest_pw method fails
Description
Description of problem:
When syncing through a proxy that uses the digest_pw authentication method in squid, the sync fails with an access-denied error (HTTP 407), even though the proxy otherwise appears to work for other traffic.
Steps to Reproduce:
1. Configure a squid proxy using digest_pw auth
COMMENT OUT ("#") the following line in /etc/squid/squid.conf to ensure we're not bypassing auth.
http_access allow localnet
ADD the following lines to /etc/squid/squid.conf in the access section
auth_param digest program /usr/lib64/squid/digest_pw_auth -c /etc/squid/passwords
auth_param digest realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
EXECUTE the following
- htdigest -c /etc/squid/passwords proxy katello
(provide password for user 'katello' twice)
RESTART squid
- service squid restart
(Optionally, verify the proxy works by pointing a browser at it; you should be forced to authenticate with the katello/katello username and password.)
2. katello-configure --proxy-url http://yourproxy.example.com --proxy-port 3128 --proxy-user katello --proxy-pass katello
3. Attempt to sync repo content
Actual results:
1383336473.313 0 10.16.96.134 TCP_DENIED/407 4254 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml - NONE/- text/html
1383336495.477 0 10.16.96.134 TCP_DENIED/407 4254 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml - NONE/- text/html
Expected results:
Successful sync
Additional info:
Here's an example of the same content working with the ncsa (basic) auth method in squid:
1383336589.341 66 10.16.96.134 TCP_MISS/200 1543 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml katello DIRECT/74.125.226.229 application/xml
1383336589.424 36 10.16.96.134 TCP_MISS/200 1767 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/filelists.xml.gz katello DIRECT/74.125.226.229 application/xml
1383336589.448 58 10.16.96.134 TCP_MISS/200 1038 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/other.xml.gz katello DIRECT/74.125.226.229 application/xml
1383336589.451 61 10.16.96.134 TCP_MISS/200 2524 GET http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/primary.xml.gz katello DIRECT/74.125.226.229 application/xml
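For reference, a quick way to confirm that the proxy is actually challenging with Digest is to fetch a plain-http URL through it and look at the 407. A minimal python-requests sketch, assuming the proxy host/port from the steps above (squid answers a plain-http GET itself, so the challenge header is visible; this is exactly what stops working for https, as discussed later in this issue):
import requests

proxies = {"http": "http://yourproxy.example.com:3128"}  # host/port from step 2 above
resp = requests.get(
    "http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml",
    proxies=proxies,
)
print(resp.status_code)                        # expected: 407
print(resp.headers.get("Proxy-Authenticate"))  # expected: Digest realm="proxy", nonce="..."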
Updated by cduryee almost 10 years ago
putting bug down for now
+ This comment was cloned from Bugzilla #1116898 comment 1 +
Updated by jcline@redhat.com almost 10 years ago
Updated this bug to block the corresponding Satellite bug (1025890) rather than to depend on it.
+ This comment was cloned from Bugzilla #1116898 comment 2 +
Updated by rbarlow almost 10 years ago
Moving back to new, since Chris said he wasn't working on it at the moment.
+ This comment was cloned from Bugzilla #1116898 comment 3 +
Updated by cduryee almost 10 years ago
still working on it:)
+ This comment was cloned from Bugzilla #1116898 comment 4 +
Updated by cduryee almost 10 years ago
This requires changes to the sat6 installer, to specify basic vs. digest proxy auth in /etc/pulp/server/plugins.conf.d/yum_importer.json, as well as changes to pulp itself to support the new proxy auth method. We cannot try digest and then fall back to basic, since that would send a plaintext password over the wire if digest failed for any reason.
There are ways to figure out whether digest auth is supported, but those appear to be for authenticating to the end website, not to the proxy. FWIW, I was not able to get curl or wget to guess the proxy auth method.
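To make the plaintext concern concrete, a small sketch (not part of the original comment): a Basic Proxy-Authorization header is just base64 of user:password, so any fallback that sends it is effectively sending the password in the clear.
import base64

# The reproducer's katello/katello credentials, encoded the way a Basic
# Proxy-Authorization header would carry them:
header_value = "Basic " + base64.b64encode(b"katello:katello").decode()
print(header_value)  # Basic a2F0ZWxsbzprYXRlbGxv
# ...and decoded by anyone who can see the request:
print(base64.b64decode("a2F0ZWxsbzprYXRlbGxv"))  # b'katello:katello'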
Setting needinfo on bbuckingham to validate if this is a blocker.
+ This comment was cloned from Bugzilla #1116898 comment 5 +
Updated by cduryee almost 10 years ago
moving to medium/no release, after discussion with Brad and others. Also clearing NEEDINFO.
+ This comment was cloned from Bugzilla #1116898 comment 6 +
Updated by bmbouter almost 10 years ago
- Severity changed from Medium to 2. Medium
Updated by pcreech about 9 years ago
- Status changed from NEW to ASSIGNED
- Assignee set to pcreech
Updated by bmbouter almost 9 years ago
- Project changed from Pulp to Nectar
- Category deleted (14)
- Target Release - Nectar set to 1.4.4
Updated by pcreech almost 9 years ago
- Status changed from ASSIGNED to POST
Updated by pcreech almost 9 years ago
- Status changed from POST to MODIFIED
- % Done changed from 0 to 100
Applied in changeset 6def26fa9818b83249aabc609ae90e88ba274834.
Updated by rbarlow almost 9 years ago
- Status changed from MODIFIED to CLOSED - CURRENTRELEASE
Updated by pcreech almost 9 years ago
- Status changed from CLOSED - CURRENTRELEASE to NEW
- Target Release - Nectar deleted (1.4.4)
Reopening due to issues with a specific use case that doesn't work.
Because the fix relied on response handlers in the requests library, connecting to https content through an http proxy that requires authentication fails. The solution depended on discovering the authentication method from the proxy's 407 response. For https connections, however, the HTTP CONNECT request in httplib throws an exception on any response other than 200, which prevents the handlers from firing, so the proxy auth method cannot be 'discovered' this way. The previous fix has been reverted.
Related python-requests issue
https://github.com/kennethreitz/requests/issues/1582#issuecomment-195288875
Related python-requests-toolbelt issue
https://github.com/sigmavirus24/requests-toolbelt/issues/136#issuecomment-190610347
The latter also seems to indicate that digest proxy authentication will not be handled well even when the method is known in advance, since digest relies on details in the 407 response (the challenge) to work.
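A rough sketch of that failure mode, using Python 3's http.client as a stand-in for httplib and the hostnames from the reproducer: the CONNECT to an authenticating proxy is answered with a 407, and httplib raises before any response hook can inspect the challenge.
import http.client  # stands in for Python 2's httplib mentioned above

conn = http.client.HTTPSConnection("yourproxy.example.com", 3128, timeout=10)
conn.set_tunnel("dl.google.com", 443)  # proxy requires auth, so the CONNECT gets a 407
try:
    conn.request("GET", "/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml")
except OSError as exc:
    # e.g. "Tunnel connection failed: 407 Proxy Authentication Required"
    print(exc)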
Updated by jortel@redhat.com about 8 years ago
- Groomed changed from No to Yes
- Sprint Candidate changed from No to Yes
Updated by pcreech about 8 years ago
The upstream issues that would enable this to work still haven't been resolved.
The problem is in a dependency of the requests library, httplib, which is used to open connections.
Digest authentication requires a back-and-forth exchange so that tokens can be passed. Unfortunately, with the current stack, when connecting to an https endpoint through a proxy that requires this kind of exchange, an exception is thrown in the SSL connection logic, interrupting that flow.
The requests team is working on a fix that will allow this to work properly. Until then, we will be unable to support any proxy authentication that requires a back-and-forth exchange in order to connect to https endpoints.
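For illustration only, a simplified RFC 2617 computation (omitting the qop/cnonce fields squid may also require, and using a placeholder nonce): the digest credential is a hash over the nonce from the proxy's 407 challenge, which is why it cannot be produced without that first round trip.
import hashlib

def md5_hex(value):
    return hashlib.md5(value.encode()).hexdigest()

# Only known after the proxy's 407 Proxy-Authenticate challenge arrives:
realm = "proxy"                      # realm configured in squid.conf above
nonce = "nonce-from-407-challenge"   # placeholder for the server-supplied nonce
user, password = "katello", "katello"
method = "GET"
uri = "http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml"

ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
ha2 = md5_hex("%s:%s" % (method, uri))
response = md5_hex("%s:%s:%s" % (ha1, nonce, ha2))  # sent in Proxy-Authorization on the retry
print(response)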
Updated by ttereshc almost 7 years ago
- Status changed from NEW to CLOSED - WONTFIX
- Sprint Candidate set to No
https through authenticated proxies can't be supported/implemented with httplib. The requests library currently sits on top of httplib, and the new stack for requests will not support this any time soon: https://github.com/requests/requests/issues/2386#issuecomment-71643022
Associated revision 6def26fa - Added by pcreech almost 9 years ago
Enable content sync via digest proxy
Enable guessing of the proxy authentication mechanism (digest or basic). This also applies to plain HTTP proxies.
https://pulp.plan.io/issues/469 closes #469
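As a hypothetical illustration of the "guessing" described above (not the actual code from changeset 6def26fa): inspect the scheme the proxy advertises in its 407 challenge and select basic or digest accordingly. As noted earlier in this issue, this only works where the 407 is visible, i.e. not for https CONNECT.
import requests

resp = requests.get(
    "http://dl.google.com/linux/chrome/rpm/stable/x86_64/repodata/repomd.xml",
    proxies={"http": "http://yourproxy.example.com:3128"},
)
challenge = resp.headers.get("Proxy-Authenticate", "")
# Use whatever mechanism the proxy advertises instead of hard-coding one.
auth_method = "digest" if challenge.lower().startswith("digest") else "basic"
print(auth_method)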