Issue #670

closed

init scripts pulp_workers and pulp_celerybeat check number of blocks instead of file permissions on /etc/default/pulp_workers

Added by moritz.rogalli@klarna.com about 7 years ago. Updated about 3 years ago.

Status:
CLOSED - CURRENTRELEASE
Priority:
High
Assignee:
Category:
-
Sprint/Milestone:
-
Start date:
Due date:
Estimated time:
Severity:
2. Medium
Version:
2.5
Platform Release:
2.6.0
OS:
Triaged:
Yes
Groomed:
No
Sprint Candidate:
No
Tags:
Easy Fix, Pulp 2
Sprint:
Quarter:

Description

diff for fixing pulp_workers and pulp_celerybeat init scripts

Description of problem:

The init scripts for pulp_workers and pulp_celerybeat have a sanity check that verifies file permissions on the respective file in /etc/default/.

The scripts execute

local perm=$(stat -Lt "$path" | awk '{print $3}')

to get the permissions. As far as I can tell, this captures the block count rather than the permissions: in `stat --terse` output, the third field is the number of allocated blocks; the raw mode is the fourth field.

This worked fine until we switched the underlying disk and the block count for /etc/default/pulp_workers went from 8 to 16.

Using "0$(stat -c %a)" for the permissions and "$(stat -c %u)" for the owner looks better and works.
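The difference between the two approaches can be sketched as follows, assuming GNU coreutils stat (a temp file stands in for the real config file):

```shell
path=$(mktemp)
chmod 640 "$path"

# Buggy check: field 3 of stat's terse output is the allocated block
# count (e.g. 8 or 16 depending on the filesystem), not the file mode.
blocks=$(stat -Lt "$path" | awk '{print $3}')

# Suggested fix: ask stat for the octal mode and owner uid directly.
perm="0$(stat -Lc %a "$path")"
owner=$(stat -Lc %u "$path")

echo "blocks=$blocks perm=$perm owner=$owner"
rm -f "$path"
```

Because the block count depends on the filesystem and file size, a check built on it passes or fails essentially by accident, which is exactly the behavior the reporter observed after the disk switch.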

Version-Release number of selected component (if applicable): 2.5.1

How reproducible:

Have a /etc/default/pulp_workers file with 16 blocks in the file system.

Steps to Reproduce:
1. make sure 'stat /etc/default/pulp_workers' reports "Blocks: 16"
2. run service pulp_workers start

Actual results:

service pulp_workers status
celery init v10.0.
Error: Config script '/etc/default/pulp_workers' cannot be writable by group!

Resolution:
Review the file carefully and make sure it has not been
modified with malicious intent. When sure the
script is safe to execute with superuser privileges
you can change the scripts permissions:
$ sudo chmod 640 '/etc/default/pulp_workers'

Expected results:

celery init v10.0.
node celery is stopped...
node reserved_resource_worker-0 is stopped...

or running, or whatever state the workers happen to be in.

Additional info:

centos-6

+ This bug was cloned from Bugzilla Bug #1183706 +


Files

c093ca366025718fdcdca91ce48b2fcf (200 Bytes) c093ca366025718fdcdca91ce48b2fcf moritz.rogalli@klarna.com, 03/01/2015 12:20 AM
Actions #1

Updated by rbarlow about 7 years ago

When fixing this, it would be worth considering whether or not these permission checks are valuable for the init script to perform. I don't personally see value in it, but don't take my word for it!

It's also worth noting that our systemd counterpart does not do any such checking.

+ This comment was cloned from Bugzilla #1183706 comment 1 +

Actions #2

Updated by bmbouter about 7 years ago

This was fixed in the upstream Celery init scripts but due to a multi-platform compatibility issue and not due to a correctness problem. These statements provide very little actual increase in security given that anyone with non-root write privileges can remove the checks with the very access the checks are designed to prevent.

My resolution is to remove the _config_sanity bash function from the Upstart scripts altogether. That is where the issue was occurring and it wasn't providing much value anyway. Also this makes them more consistent with the systemd unit files which don't do any fancy permissions enforcement.
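For context, a check of this kind looks roughly like the following. This is a hypothetical sketch, not the exact upstream _config_sanity code: it refuses to source a config file that is group- or world-writable, using the corrected octal-mode approach.

```shell
# Hypothetical sketch of the kind of sanity check being removed:
# reject a config file whose group- or world-writable bits are set.
config_sanity() {
    local path="$1"
    local mode
    mode=$(stat -Lc %a "$path")    # octal mode, e.g. 640
    # 8#$mode reads the mode as octal; 022 masks the g+w and o+w bits.
    if [ $(( 8#$mode & 8#022 )) -ne 0 ]; then
        echo "Error: Config script '$path' cannot be writable by group!" >&2
        return 1
    fi
    return 0
}
```

As the comments above argue, a check like this adds little security: anyone with write access to the init script itself can simply delete the check, so removing it (and matching the systemd unit files, which never had one) is a reasonable resolution.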

+ This comment was cloned from Bugzilla #1183706 comment 2 +

Actions #3

Updated by bmbouter about 7 years ago

PR available at: https://github.com/pulp/pulp/pull/1567

+ This comment was cloned from Bugzilla #1183706 comment 3 +

Actions #4

Updated by bmbouter about 7 years ago

Merged to 2.6-testing -> 2.6-dev. Could not merge 2.6-dev -> master because other commits on 2.6-dev cause conflicts on master. This PR doesn't cause a conflict so it will be included in the next merge forward from 2.6-dev to master.

+ This comment was cloned from Bugzilla #1183706 comment 4 +

Actions #5

Updated by bmbouter about 7 years ago

2.6-dev has been merged to master

+ This comment was cloned from Bugzilla #1183706 comment 5 +

Actions #6

Updated by bmbouter about 7 years ago

QE to verify:

1. Give 777 permissions to pulp_celerybeat and pulp_workers in /etc/rc.d/init.d/
2. Verify they have the 777 permissions with `ls -la` output of that folder.
3. restart the celery services. They should restart without complaint. If everything starts normally then verify this BZ.

+ This comment was cloned from Bugzilla #1183706 comment 6 +

Actions #7

Updated by cduryee about 7 years ago

2.6.0-0.7.beta

+ This comment was cloned from Bugzilla #1183706 comment 7 +

Actions #8

Updated by igulina@redhat.com about 7 years ago

ls -la /etc/rc.d/init.d/pulp_*

-rwxr-xr-x. 1 root root 7124 Feb 10 16:15 /etc/rc.d/init.d/pulp_celerybeat
-rwxr-xr-x. 1 root root 843 Feb 10 16:15 /etc/rc.d/init.d/pulp_resource_manager
-rwxr-xr-x. 1 root root 9328 Feb 10 16:15 /etc/rc.d/init.d/pulp_workers

chmod 777 /etc/rc.d/init.d/pulp_celerybeat
chmod 777 /etc/rc.d/init.d/pulp_workers
ls -la /etc/rc.d/init.d/pulp_*

-rwxrwxrwx. 1 root root 7124 Feb 10 16:15 /etc/rc.d/init.d/pulp_celerybeat
-rwxr-xr-x. 1 root root 843 Feb 10 16:15 /etc/rc.d/init.d/pulp_resource_manager
-rwxrwxrwx. 1 root root 9328 Feb 10 16:15 /etc/rc.d/init.d/pulp_workers

for s in {qpidd,pulp_celerybeat,pulp_resource_manager,pulp_workers,httpd}; do sudo service $s restart; done;
Stopping Qpid AMQP daemon: [ OK ]
Starting Qpid AMQP daemon: [ OK ]
celery init v10.0.
Using configuration: /etc/default/pulp_workers, /etc/default/pulp_celerybeat
Restarting celery periodic task scheduler
Stopping pulp_celerybeat... OK
Starting pulp_celerybeat...
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
celery multi v3.1.11 (Cipater)

Stopping nodes...

> resource_manager@ip-XXX: QUIT -> 1506

Waiting for 1 node -> 1506.....

> resource_manager@ip-XXX: OK

Restarting node resource_manager@ip-XXX: OK

celery init v10.0.
Using config script: /etc/default/pulp_workers
celery multi v3.1.11 (Cipater)

Stopping nodes...

> reserved_resource_worker-1@ip-XXX: QUIT -> 1657
> reserved_resource_worker-3@ip-XXX: QUIT -> 1719
> reserved_resource_worker-0@ip-XXX: QUIT -> 1632
> reserved_resource_worker-2@ip-XXX: QUIT -> 1686

Waiting for 4 nodes -> 1657, 1719, 1632, 1686............

> reserved_resource_worker-1@ip-XXX: OK

Restarting node reserved_resource_worker-1@ip-XXX: OK
Waiting for 3 nodes -> 1719, 1632, 1686....

> reserved_resource_worker-3@ip-XXX: OK

Restarting node reserved_resource_worker-3@ip-XXX: OK
Waiting for 2 nodes -> 1632, 1686....

> reserved_resource_worker-0@ip-XXX: OK

Restarting node reserved_resource_worker-0@ip-XXX: OK
Waiting for 1 node -> 1686....

> reserved_resource_worker-2@ip-XXX: OK

Restarting node reserved_resource_worker-2@ip-XXX: OK

Stopping httpd: [ OK ]
Starting httpd: [ OK ]

pulp-admin rpm repo list

--------------------------------------------------------------------
RPM Repositories
--------------------------------------------------------------------

Id: epel6_1
Display Name: epel6_1
Description: None
Content Unit Counts:
Erratum: 3635
Package Category: 3
Package Group: 208
Rpm: 11178
Yum Repo Metadata File: 1

Id: gena
Display Name: gena
Description: None
Content Unit Counts:

+ This comment was cloned from Bugzilla #1183706 comment 8 +

Actions #9

Updated by igulina@redhat.com about 7 years ago

and it was on

rpm -qa pulp-server

pulp-server-2.6.0-0.7.beta.el6.noarch

+ This comment was cloned from Bugzilla #1183706 comment 9 +

Actions #10

Updated by bmbouter about 7 years ago

  • Severity changed from Medium to 2. Medium
Actions #11

Updated by rbarlow about 7 years ago

  • Status changed from 6 to CLOSED - CURRENTRELEASE
Actions #13

Updated by bmbouter about 3 years ago

  • Tags Pulp 2 added
