Story #6205

[Epic] As a user, I have a single container with all Pulp services

Added by bmbouter about 1 month ago. Updated 14 days ago.

Status:
ASSIGNED
Priority:
Normal
Category:
-
Sprint/Milestone:
-
Start date:
Due date:
% Done:

0%

Platform Release:
Blocks Release:
Backwards Incompatible:
No
Groomed:
Yes
Sprint Candidate:
Yes
Tags:
QA Contact:
Complexity:
Smash Test:
Verified:
No
Verification Required:
No
Sprint:
Sprint 69

Description

Motivation

A Pulp user has a requirement to launch a Pulp instance in each datacenter to sync content around the world daily. Specifically, they can only launch one Docker container per site (their requirement), because there are too many individuals at the various sites, with equipment and networking differences, to make deploying multiple containers feasible.

Solution

He proposed that we do what GitLab does and use a userspace process supervisor like s6-overlay to run all the services in that one container.

What's in the image?

  • All GA'd plugins would be in the image.
  • s6 would start PostgreSQL and apply the migrations.
  • Then s6 starts Redis, gunicorn (API), gunicorn (content app), nginx, the resource manager, and two workers.
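The list above could be sketched as an s6-overlay service tree. Directory names, the port, and the gunicorn invocation here are assumptions for illustration, not a settled design:

```shell
# Hypothetical s6-overlay layout; one supervised run script per process.
#
#   /etc/services.d/postgresql/run
#   /etc/services.d/redis/run
#   /etc/services.d/pulpcore-api/run
#   /etc/services.d/pulpcore-content/run
#   /etc/services.d/nginx/run
#   /etc/services.d/resource-manager/run
#   /etc/services.d/worker-1/run
#   /etc/services.d/worker-2/run
#
# Example run script for the API process (/etc/services.d/pulpcore-api/run):

#!/usr/bin/with-contenv sh
# exec keeps gunicorn directly supervised by s6, which restarts it on exit.
exec gunicorn pulpcore.app.wsgi:application --bind 127.0.0.1:24817
```

s6-overlay itself imposes no startup ordering between services, so the "postgres first, migrations, then everything else" sequence from the bullets would have to be expressed with init scripts or readiness checks.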

Building it

We'll build it nightly on Travis, similar to how we build our other containers.

Shipping it

We'll ship it through the pulp account on Quay.

Advertising it

We should put it on the homepage of https://pulpproject.org/

History

#1 Updated by bmbouter about 1 month ago

  • Description updated (diff)

#2 Updated by daviddavis about 1 month ago

  • Groomed changed from No to Yes
  • Sprint Candidate changed from No to Yes

#3 Updated by ggainey about 1 month ago

Don't forget to think about how to handle updates in this single-container case

#4 Updated by bmbouter about 1 month ago

ggainey wrote:

Don't forget to think about how to handle updates in this single-container case

I was thinking updates will be handled by applying migrations, which will happen each time the container is started. The container is connected to the long-lived storage backing the database and the content filesystem, so each time it starts it can upgrade from the files it finds there.
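The start-time upgrade flow described here might look something like the following sketch; the commands and paths are assumptions about how this could be wired up, not the actual implementation:

```shell
#!/bin/sh
# Hypothetical start script: every container start applies migrations
# against whatever data it finds on the long-lived volumes, so pointing
# a newer image at an old volume upgrades it in place.
set -e

# 1. Start postgres against the persistent data directory and wait (-w)
#    until it accepts connections. Under s6 this would be a supervised
#    service rather than a direct pg_ctl call.
pg_ctl -D /var/lib/pgsql/data -w start

# 2. Apply any pending schema migrations; a no-op if already current.
DJANGO_SETTINGS_MODULE=pulpcore.app.settings django-admin migrate --noinput

# 3. Hand off to the remaining services (redis, gunicorn, nginx, workers).
```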

#5 Updated by mdepaulo@redhat.com about 1 month ago

As mentioned on the mailing list (correction: accidentally replied to Tania individually):

If we do pursue this, we should at least consider accomplishing it with something that duplicates less code, like our ansible-pulp roles with ansible-bender, or adapting our existing CI's ansible-pulp molecule container with systemd (normally a temporary artifact).

Also:

We already have the insta-demo for a dead-simple deployment. This use case covers a slightly more involved deployment for people with existing small-scale container infrastructure.

We also have to be concerned with postgres and rq.

We have to account for persistent data in volume mount(s), and for upgrading it over time.
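For the persistent-data concern, one hedged sketch of launching such a container is below; the image name, tag, and mount paths are illustrative assumptions, since no artifact has been published:

```shell
# Hypothetical invocation. Named volumes keep the database and synced
# content across container replacements: a newer image version starts,
# finds the old data on the volumes, and migrates it forward.
docker run --detach \
  --publish 80:80 \
  --volume pulp_pgsql:/var/lib/pgsql \
  --volume pulp_storage:/var/lib/pulp \
  quay.io/pulp/all-in-one:latest
```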

#5394 (separate out building the pulpcore container image from building a plugin image, and an overall strategy for building container images with different sets of plugins) is still relevant.

#6 Updated by dkliban@redhat.com about 1 month ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to dkliban@redhat.com
  • Sprint set to Sprint 67

#7 Updated by rchan 28 days ago

  • Sprint changed from Sprint 67 to Sprint 68

#8 Updated by rchan 14 days ago

  • Sprint changed from Sprint 68 to Sprint 69

