Distribute Pulp with Pulp
- Old releases need to be archived so that downstream consumers like Katello can pull specific release versions. This is also just a good thing to do. Currently we only keep the latest release of a given x.y stream, and earlier releases can't be easily found online.
- Releases need to happen atomically. COPR supports this, but offers limited control over the exact moment a repository's metadata is regenerated.
Pulp meets both of these needs, and should be the tool we use to distribute Pulp. :)
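In Pulp 3 terms, the self-hosting workflow could look roughly like the sketch below. This is an assumption about how the pieces would fit together, not a decided workflow: the repository and distribution names are invented, and the exact pulp-cli invocations may differ by version.

```shell
# Hypothetical sketch (names are invented; verify against your pulp-cli).
# Each publish creates a new immutable publication; pointing the
# distribution at it is a single atomic swap, so clients never see
# half-regenerated repository metadata.
pulp rpm repository create --name pulp-releases
# ...upload/add the release RPMs to the repository...
pulp rpm publication create --repository pulp-releases
# Old publications and repository versions stick around, so earlier
# x.y.z releases stay retrievable for downstreams like Katello.
pulp rpm distribution update --name stable --publication <new-publication-href>
```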
This Pulp instance will need to be secure, and the following should be ensured:
- Pulp's REST API should be run on a non-default port
- Pulp's content serving API should be run on ports 80 and 443
- MongoDB should be set up with authentication and should listen locally only (via Unix sockets)
- The message broker should also be set up with authentication and configured to listen locally only
- The RHEL 7 hardening guide is followed
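The datastore/broker items above could be sketched as follows. The file paths are the EL7 defaults for the MongoDB and Qpid packages Pulp 2 uses; the option names should be verified against the mongod and qpidd documentation before relying on them:

```shell
# Hypothetical sketch of the "listen locally, require auth" items.

# /etc/mongod.conf -- loopback + Unix socket only, with auth enforced:
#   net:
#     bindIp: 127.0.0.1
#     unixDomainSocket:
#       enabled: true
#   security:
#     authorization: enabled

# /etc/qpid/qpidd.conf -- require authentication (SASL) instead of
# allowing anonymous connections:
#   auth=yes

# Confirm the AMQP port (5672 by default) is not exposed externally:
firewall-cmd --list-ports
```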
#1 Updated by semyers about 3 years ago
- Checklist item Figure out where to host Pulp added
- Checklist item Come up with an upgrade policy (i.e. do we pin it to the latest release?) added
- Checklist item Come up with a pulp release workflow that supports archiving releases and atomic releases added
- Checklist item Investigate High-Availability options for content distribution added
I added some checklist items. One of them, "Come up with an upgrade policy", is a little ambiguous, but I'm not sure how to phrase it more accurately. It refers to the upgrade policy of the Pulp installation being used to distribute Pulp: do we always keep it up to date with the latest release, or only update it as needed? It doesn't refer to how we update the versions of Pulp being distributed; that's a separate checklist item.
#7 Updated by bmbouter about 2 years ago
I think OS1 has been shut down, or nearly shut down, at this point. Open Source and Standards (OSAS) has offered some hosting for upstream Pulp's needs, possibly including this environment. I'm emailing them to get more details about hosting Pulp inside the OSAS infrastructure.
#12 Updated by bizhang about 2 years ago
- Description updated (diff)
@bmbouter, since this is public-facing, people should be using it to consume our bits. I don't think we'll be distributing the pulp3 bits to fedorapeople, but I can be convinced otherwise.
As far as SNI goes, @pcreech mentioned hosting multiple HTTPS sites from the same IP, but I think that can (and should) be taken out of scope, since this story only deals with getting Pulp up and running.
#14 Updated by pcreech about 2 years ago
The intent behind moving the REST API to a different port is that doing so would allow us more access control at the firewall level. This machine has a public IP, so anything we set to listen will listen on the public IP. Having our web service listen at the same ip:port endpoint as our content would let anyone coming in attempt to access our REST API as well.
Moving it to a separate port will allow us to implement stricter access controls on the REST API we interface with, helping reduce our attack surface.
The other option would be to have the REST API listen locally only, which would require that all interaction with Pulp happen solely on that machine.
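The firewall-level separation described above could be done with firewalld zones. A sketch, assuming the REST API is moved to port 8443 and that admin access comes from a hypothetical trusted 10.0.0.0/24 network (both values are placeholders):

```shell
# Content stays world-reachable on the standard ports...
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
# ...while the REST API port only accepts connections from a trusted
# admin source range, via a dedicated zone.
firewall-cmd --permanent --new-zone=pulp-admin
firewall-cmd --permanent --zone=pulp-admin --add-source=10.0.0.0/24
firewall-cmd --permanent --zone=pulp-admin --add-port=8443/tcp
firewall-cmd --reload
```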
#29 Updated by bmbouter over 1 year ago
- Sprint deleted
Removing from sprint through email list discussion: https://www.redhat.com/archives/pulp-dev/2018-March/msg00080.html
I think we're moving away from self-distribution and towards container distribution in registries we don't operate, and also toward PyPI delivery. Does this effort still make sense?
Note that the infra wiki also shows this initiative: https://pulp.plan.io/projects/pulp/wiki/Infrastructure_&_Hosting#Distribute-Pulp-with-Pulp
Also, AIUI the OSCI group has provisioned a machine in the community data center for this purpose. Since we're not using it, if we decide not to go forward with this, we should ask them to deprovision it.
What do you all think?