Story #282 (closed)

As a user, I have docs on how to cluster Pulp with NFS for httpd and worker scaling purposes

Added by mhrivnak over 9 years ago. Updated over 5 years ago.

Status: CLOSED - CURRENTRELEASE
Priority: High
Assignee:
Category: -
Sprint/Milestone: -
Start date:
Due date:
% Done: 100%
Estimated time:
Platform Release: 2.6.1
Groomed: Yes
Sprint Candidate: Yes
Tags: Documentation, Pulp 2
Sprint: April 2015
Quarter:

Description

The scaling guide needs improvement: it should describe in more detail how to configure Pulp as a cluster so that both httpd and the Pulp workers can be scaled.

Deliverables:

0. Set up and test a clustered Pulp deployment to learn what a clustered installation requires.
1. Document the requirements for an NFS clustered Pulp installation.
2. Revert/rewrite https://github.com/pulp/pulp/pull/1097/files since we and QE will have tested it
3. Make sure the docs work with SELinux (see below)
4. Add a release note about this change.

SELinux and NFS issues:

This documentation needs to describe a fix for running with SELinux.

To operate correctly, Pulp expects that /var/lib/pulp/* will have the httpd_sys_rw_content_t file context. If /var/lib/pulp/ or some subdirectory of it is hosted via NFS, those files will instead receive the nfs_t label. As a result, Pulp does not work correctly when you scale the /var/lib/pulp filesystem with NFS and keep SELinux in Enforcing mode.
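
You can verify the context a path currently carries with the -Z flag to ls, for example:

# ls -dZ /var/lib/pulp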

The recommended way to fix this is to make NFS aware of the SELinux context to apply when mounting /var/lib/pulp/ or a subdirectory of it, using the context option of mount:

# mount -o context="system_u:object_r:httpd_sys_rw_content_t:s0" REMOTEHOST:/var/lib/pulp /var/lib/pulp
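
If the mount should persist across reboots, the same context option can be given in /etc/fstab, for example (REMOTEHOST is a placeholder, as above; adjust NFS options as needed):

# /etc/fstab entry (placeholder host)
REMOTEHOST:/var/lib/pulp  /var/lib/pulp  nfs  context="system_u:object_r:httpd_sys_rw_content_t:s0",defaults  0 0
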
#4

Updated by bmbouter over 9 years ago

  • Category deleted (1)
  • Tags Documentation added

Documentation is now a Tag not a Category.

#5

Updated by bmbouter over 9 years ago

  • Subject changed from [RFE] Provide a way to move pulp content to NFS share with SELinux turned on to Provide a way to move pulp content to NFS share with SELinux turned on
  • Description updated (diff)
  • Private changed from Yes to No
  • Tags Sprint Candidate added
#8

Updated by bmbouter over 9 years ago

  • Subject changed from Provide a way to move pulp content to NFS share with SELinux turned on to As a user, I can host /var/lib/pulp on an NFS share with SELinux turned on
  • Description updated (diff)
#9

Updated by mhrivnak over 9 years ago

  • Tags Groomed added
#10

Updated by bmbouter over 9 years ago

  • Description updated (diff)
#11

Updated by bmbouter over 9 years ago

  • Description updated (diff)
#12

Updated by bmbouter over 9 years ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to bmbouter
#13

Updated by bmbouter over 9 years ago

  • Subject changed from As a user, I can host /var/lib/pulp on an NFS share with SELinux turned on to As a user, I have docs on how to cluster Pulp with NFS for httpd and worker scaling purposes
  • Description updated (diff)

Added by bmbouter over 9 years ago

Revision e2a60d71

Adds clustering requirements docs and a release note

closes #282

#15

Updated by bmbouter over 9 years ago

  • Status changed from ASSIGNED to MODIFIED
  • % Done changed from 0 to 100
#16

Updated by bmbouter over 9 years ago

  • Status changed from MODIFIED to 5
  • Platform Release set to 2.6.1
#17

Updated by bmbouter over 9 years ago

#18

Updated by bmbouter over 9 years ago

For testing this, set up a three-node pulp-server cluster with NFS sharing as described. You'll also need a separate consumer machine, since clustered nodes can't be registered as consumers themselves. Use HAProxy as the load balancer, also on a separate machine. Qpid and MongoDB can run on any of these machines. Run pulp_workers and httpd on all three of the clustered server machines. That's a total of five machines (three pulp servers, one load balancer, and one consumer).

Then, using the load balancer hostname, do a full regression test. All consumer testing should go through the dedicated consumer.
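
As a sketch of the load balancer piece, an HAProxy fragment along these lines would rotate connections across the three nodes. The pulp1/pulp2/pulp3 hostnames are assumptions, and TCP mode is one way to pass TLS through so each node presents its own certificate:

# /etc/haproxy/haproxy.cfg (fragment); node hostnames are hypothetical
frontend pulp_https
    bind *:443
    mode tcp                      # pass TLS through untouched
    default_backend pulp_nodes

backend pulp_nodes
    mode tcp
    balance roundrobin            # rotate new connections across the nodes
    server pulp1 pulp1.example.com:443 check
    server pulp2 pulp2.example.com:443 check
    server pulp3 pulp3.example.com:443 check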

Added by bmbouter over 9 years ago

Revision 3d4eec1a

Adds logging section to clustering docs

re #282

#20

Updated by bmbouter over 9 years ago

  • Sprint/Milestone set to 15

Added by bmbouter over 9 years ago

Revision e00e8d5a

Adds three updates to the clustering documentation.

  • A section on clustered monitoring which identifies issues and workarounds for the /status/ API in clustered environments.

  • A section on pulp-admin configuration for clustered environments.

  • Generalizes the load balancer docs to include DNS based load balancing in addition to TCP based load balancing.

re #915 re #282

https://pulp.plan.io/issues/915 https://pulp.plan.io/issues/282
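
For the monitoring piece, a quick way to exercise the status API through the load balancer looks like this (LOADBALANCER is a placeholder; the path assumes Pulp 2's stock v2 REST API, and -k only accounts for self-signed certificates):

# curl -k https://LOADBALANCER/pulp/api/v2/status/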

#21

Updated by pthomas@redhat.com over 9 years ago

Verified.

Set up clustered Pulp and performed regression testing.

#22

Updated by pthomas@redhat.com over 9 years ago

  • Status changed from 5 to 6
#23

Updated by bmbouter over 9 years ago

  • Groomed set to Yes
  • Sprint Candidate set to No
  • Tags deleted (Groomed)
#24

Updated by bmbouter over 9 years ago

  • Sprint Candidate changed from No to Yes
  • Tags deleted (Sprint Candidate)
#25

Updated by dkliban@redhat.com over 9 years ago

  • Status changed from 6 to CLOSED - CURRENTRELEASE
#27

Updated by bmbouter over 6 years ago

  • Sprint set to April 2015
#28

Updated by bmbouter over 6 years ago

  • Sprint/Milestone deleted (15)
#29

Updated by bmbouter over 5 years ago

  • Tags Pulp 2 added
