Story #4181
Updated by bmbouter almost 6 years ago
h2. Problem

Content that should be protected could be fetched by unauthorized clients via the streamer, which can either (a) fetch the content fresh and hand it to the client or (b) serve an already-saved Artifact from Pulp's filesystem.

h2. One Solution: Signed URLs

This is how we did it in Pulp 2. The client's authorization for the content is checked in the content app. When Pulp redirects the client, it signs the redirect URL with a time-limited signature. That signed URL is then validated by a webserver, e.g. Apache, nginx, gunicorn, etc. If the URL is valid, the request is reverse-proxied to either squid or the streamer itself.

The downsides are that each webserver would need to handle this differently, so keeping Pulp webserver-agnostic is unlikely. It also makes the architecture considerably more complicated, introducing additional dependencies, crypto calls, and an additional webserver everywhere the streamer runs.

h2. Another Solution: Stop Using Squid

Add content protection to the streamer app the same way it was added to the content app. This would rule out ever using squid, though, because content cached in front of the streamer couldn't be protected and could be fetched by another client.

Not using squid mainly affects repos with policy='cache_only', which serves the content and then forgets it; there would be no caching with that setting. With policy='on_demand' the only downside is that multiple requests arriving at the streamer for the same file before the first one completes would not be de-duplicated the way squid would have done. Once the first request saves the Artifact, subsequent requests are de-duplicated.
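For illustration, the time-limited signing described under "One Solution: Signed URLs" could be sketched roughly as below. This is a minimal sketch, not Pulp code: the shared secret, function names, and query-parameter names (expires, signature) are all hypothetical, and the real validation step would live in the fronting webserver rather than in Python.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical shared secret between the content app (signer) and the
# webserver that validates redirects before reverse-proxying.
SECRET = b"shared-secret"


def sign_url(path, ttl=60):
    """Return the path with an expiry timestamp and HMAC signature appended."""
    expires = int(time.time()) + ttl
    msg = "{}?expires={}".format(path, expires).encode()
    signature = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "{}?{}".format(path, urlencode({"expires": expires, "signature": signature}))


def validate_url(path, expires, signature):
    """Reject expired or tampered URLs; on success the request would be proxied on."""
    if int(time.time()) > expires:
        return False
    msg = "{}?expires={}".format(path, expires).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A tampered path or an elapsed TTL fails validation, so a leaked URL is only useful for the signature's lifetime.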