Issue #2684
pulp-manage-db sometimes runs when it shouldn't (closed)
Description
pulp-manage-db should refuse to run when any of pulp_celerybeat, pulp_resource_manager, or pulp_workers are running, or when mongod is not running. It has built-in checks that enforce this behaviour; the work on them is documented in https://pulp.plan.io/issues/2186.
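Conceptually, the guard amounts to polling the worker records in MongoDB and refusing to proceed while any remain. The sketch below is illustrative only, not Pulp's actual implementation; it assumes the pulp_database.workers collection that the mongo queries later in this thread show:

# Illustrative sketch only -- not Pulp's actual code. It assumes the
# worker-status records live in pulp_database.workers, as the mongo
# queries later in this thread show.
import sys
import time

from pymongo import MongoClient


def refuse_if_conflicts(timeout=90, poll_interval=1):
    """Exit non-zero if worker records are still present after `timeout` seconds."""
    workers = MongoClient('localhost', 27017)['pulp_database']['workers']
    deadline = time.time() + timeout
    while time.time() < deadline:
        still_active = [record['_id'] for record in workers.find()]
        if not still_active:
            return  # safe to proceed with migrations
        print('The following processes might still be running: {}'.format(
            ', '.join(still_active)))
        time.sleep(poll_interval)
    sys.exit(1)  # a non-zero exit code signals "refused to run"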
I've written some tests that verify this behaviour. The negative tests are as follows:
1. Do one of the following:
- Stop mongod.
- Let all Pulp services run.
- Stop pulp_resource_manager and pulp_workers, but not pulp_celerybeat.
- Stop pulp_celerybeat and pulp_resource_manager, but not pulp_workers.
- Stop pulp_celerybeat and pulp_workers, but not pulp_resource_manager.
2. Run pulp-manage-db.
3. Assert a non-zero exit code is returned. (A non-zero exit code indicates that pulp-manage-db refused to run.) A standalone sketch of such a test follows this list.
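The service names and the runuser invocation in this sketch are taken from later in this thread; the test class itself is hypothetical, not the actual pulp-smash code:

# Hypothetical standalone version of one negative test -- not the actual
# pulp-smash code. Assumes a systemd host with the services named above.
import subprocess
import unittest


class WorkersRunningTestCase(unittest.TestCase):
    def test_workers_running(self):
        # Stop the conflicting services except pulp_workers, which is
        # deliberately left running.
        subprocess.check_call(
            ['systemctl', 'stop', 'pulp_celerybeat', 'pulp_resource_manager'])
        # pulp-manage-db must run as apache; this mirrors the runuser
        # command quoted later in this issue.
        proc = subprocess.run(
            ['runuser', '--shell', '/bin/sh', '--command', 'pulp-manage-db',
             '-', 'apache'],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # A non-zero exit code indicates that pulp-manage-db refused to run.
        self.assertNotEqual(proc.returncode, 0, proc.stderr.decode('utf-8'))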
In my manual testing, I've found that the tests where pulp_resource_manager and pulp_workers are left running are especially likely to fail. However, the failures aren't completely consistent. Sometimes, all of the tests will pass.
Here's a summary of results from a particularly bad run. For context, test_required_stopped stops mongod, and test_conflicting_running lets all Pulp services run. You can intuit the meaning of the remaining test names.
============================= ======== ======== ======== ======== ==========
Test F24-2.12 F24-2.13 F25-2.12 F25-2.13 RHEL7-2.12
============================= ======== ======== ======== ======== ==========
test_required_stopped ✓ ✓ ✓ ✓ ✓
test_conflicting_running ✓ ✓ ✓ ✓ ✓
test_celerybeat_running ✓ ✓ ✓ ✓ ✓
test_resource_manager_running ✗ ✗ ✗ ✗ ✗
test_workers_running ✗ ✓ ✗ ✓ ✗
============================= ======== ======== ======== ======== ==========
In an attempt to figure out why this is so, I've set the tests to capture the responses from pulp-manage-db and systemctl status. pulp-manage-db sometimes prints to stdout, and always prints to stderr.
A concrete example would be helpful. The following is the result of the test_workers_running test from the RHEL 7 2.12 system, above. Here's stdout:
The following processes might still be running:
reserved_resource_worker-0@rhel-7-3-pulp-2-12
Please wait 1 seconds while Pulp confirms this.
Note that Pulp did correctly count down for 90 seconds, above; the countdown just doesn't show up in the prettified output as interpreted by a terminal or by IDLE.
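That behaviour is consistent with a countdown that rewrites a single line using carriage returns. To be clear, that is an assumption about how pulp-manage-db renders its output, not something confirmed from its source; a minimal sketch of the pattern:

# Sketch of a '\r'-style countdown (assumed, not taken from Pulp's source).
# A terminal shows one line updating in place; a raw capture of stdout
# contains every iteration, and re-displaying it shows only the last line.
import sys
import time

for i in range(90, 0, -1):
    sys.stdout.write('\rPlease wait {} seconds while Pulp confirms this.'.format(i))
    sys.stdout.flush()
    time.sleep(1)
sys.stdout.write('\n')

Here's stderr: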
Attempting to connect to localhost:27017
Attempting to connect to localhost:27017
Write concern for Mongo connection: {}
Loading content types.
Loading type descriptors []
Parsing type descriptors
Validating type descriptor syntactic integrity
Validating type descriptor semantic integrity
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: ostree = pulp_ostree.plugins.db.model:Branch
Loading unit model: python_package = pulp_python.plugins.models:Package
Updating the database with types []
Found the following type definitions that were not present in the update collection [puppet_module, docker_tag, ostree, package_langpacks, erratum, docker_blob, docker_manifest, yum_repo_metadata_file, package_group, package_category, iso, package_environment, drpm, python_package, srpm, rpm, distribution, docker_image]
Updating the database with types [puppet_module, docker_tag, ostree, package_langpacks, erratum, docker_blob, docker_manifest, yum_repo_metadata_file, package_group, package_category, iso, package_environment, drpm, python_package, distribution, rpm, srpm, docker_image]
Content types loaded.
Ensuring the admin role and user are in place.
Admin role and user are in place.
Beginning database migrations.
Migration package pulp.server.db.migrations is up to date at version 27
Migration package pulp_docker.plugins.migrations is up to date at version 2
Migration package pulp_puppet.plugins.migrations is up to date at version 5
Migration package pulp_python.plugins.migrations is up to date at version 1
Migration package pulp_rpm.plugins.migrations is up to date at version 39
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: ostree = pulp_ostree.plugins.db.model:Branch
Loading unit model: python_package = pulp_python.plugins.models:Package
Database migrations complete.
So far, everything looks totally normal. What about systemctl status? Is there any funny business in there? Yes! pulp_worker-0.service is active. That's not good.
● rhel-7-3-pulp-2-12
State: running
Jobs: 0 queued
Failed: 0 units
Since: Thu 2017-03-30 23:44:46 EDT; 7h left
CGroup: /
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
├─user.slice
│ └─user-0.slice
│ ├─session-25.scope
│ │ ├─5563 sshd: root@notty
│ │ └─5566 -bash
│ ├─session-21.scope
│ │ ├─5145 sshd: root@notty
│ │ └─5149 -bash
│ └─session-17.scope
│ ├─4247 sshd: root@notty
│ ├─5160 -bash
│ ├─5614 bash -c cd /root && /usr/bin/systemctl status
│ └─5619 /usr/bin/systemctl status
└─system.slice
├─pulp_worker-0.service
│ ├─5469 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --heartbeat-interval=5
│ └─5547 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --heartbeat-interval=5
├─mongod.service
│ └─5291 /usr/bin/mongod --quiet -f /etc/mongod.conf run
├─httpd.service
│ ├─4443 /usr/sbin/httpd -DFOREGROUND
│ ├─4456 (wsgi:pulp) -DFOREGROUND
│ ├─4457 (wsgi:pulp) -DFOREGROUND
│ ├─4458 (wsgi:pulp) -DFOREGROUND
│ ├─4459 (wsgi:pulp-cont -DFOREGROUND
│ ├─4460 (wsgi:pulp-cont -DFOREGROUND
│ ├─4461 (wsgi:pulp-cont -DFOREGROUND
│ ├─4462 (wsgi:pulp_forg -DFOREGROUND
│ ├─4463 (wsgi:pulp_forg -DFOREGROUND
│ ├─4464 (wsgi:pulp_forg -DFOREGROUND
│ ├─4465 /usr/sbin/httpd -DFOREGROUND
│ ├─4467 /usr/sbin/httpd -DFOREGROUND
│ ├─4468 /usr/sbin/httpd -DFOREGROUND
│ ├─4469 /usr/sbin/httpd -DFOREGROUND
│ └─4470 /usr/sbin/httpd -DFOREGROUND
├─rhnsd.service
│ └─1011 rhnsd
├─pulp_streamer.service
│ └─982 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac
├─tuned.service
│ └─978 /usr/bin/python -Es /usr/sbin/tuned -l -P
├─rhsmcertd.service
│ └─981 /usr/bin/rhsmcertd
├─rsyslog.service
│ └─974 /usr/sbin/rsyslogd -n
├─qpidd.service
│ └─972 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf
├─squid.service
│ ├─1053 /usr/sbin/squid -f /etc/squid/squid.conf
│ ├─1058 (squid-1) -f /etc/squid/squid.conf
│ └─1111 (logfile-daemon) /var/log/squid/access.log
├─postfix.service
│ ├─1579 /usr/libexec/postfix/master -w
│ ├─1584 pickup -l -t unix -u
│ └─1585 qmgr -l -t unix -u
├─sshd.service
│ └─1000 /usr/sbin/sshd
├─NetworkManager.service
│ ├─663 /usr/sbin/NetworkManager --no-daemon
│ └─767 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf /var/lib/NetworkManager/dhclient-3ec3ad7d-e0ef-4a81-911f-d7a77852abbe-eth0.lease -cf /var/lib/NetworkManager/dhclient-eth0.conf eth0
├─firewalld.service
│ └─662 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
├─crond.service
│ └─645 /usr/sbin/crond -n
├─polkit.service
│ └─628 /usr/lib/polkit-1/polkitd --no-debug
├─chronyd.service
│ └─631 /usr/sbin/chronyd
├─systemd-logind.service
│ └─624 /usr/lib/systemd/systemd-logind
├─dbus.service
│ └─615 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
├─auditd.service
│ └─596 /sbin/auditd -n
├─systemd-udevd.service
│ └─492 /usr/lib/systemd/systemd-udevd
├─lvm2-lvmetad.service
│ └─483 /usr/sbin/lvmetad -f
├─system-getty.slice
│ └─getty@tty1.service
│ └─651 /sbin/agetty --noclear tty1 linux
└─systemd-journald.service
└─465 /usr/lib/systemd/systemd-journald
Here are the packages installed on this particular system:
# rpm -qa | grep -i pulp | sort
pulp-admin-client-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
pulp-docker-admin-extensions-2.3.1-0.1.alpha.git.5.052c506.el7.noarch
pulp-docker-plugins-2.3.1-0.1.alpha.git.5.052c506.el7.noarch
pulp-ostree-admin-extensions-1.2.2-0.1.alpha.git.3.809be44.el7.noarch
pulp-ostree-plugins-1.2.2-0.1.alpha.git.3.809be44.el7.noarch
pulp-puppet-admin-extensions-2.12.3-0.1.alpha.git.1.6953897.el7.noarch
pulp-puppet-plugins-2.12.3-0.1.alpha.git.1.6953897.el7.noarch
pulp-python-admin-extensions-1.1.4-0.1.alpha.git.31.36b75e3.el7.noarch
pulp-python-plugins-1.1.4-0.1.alpha.git.31.36b75e3.el7.noarch
pulp-rpm-admin-extensions-2.12.3-0.1.alpha.git.11.d8adbfa.el7.noarch
pulp-rpm-plugins-2.12.3-0.1.alpha.git.11.d8adbfa.el7.noarch
pulp-selinux-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
pulp-server-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-isodate-0.5.0-4.pulp.el7.noarch
python-kombu-3.0.33-6.pulp.el7.noarch
python-pulp-bindings-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-pulp-client-lib-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-pulp-common-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-pulp-docker-common-2.3.1-0.1.alpha.git.5.052c506.el7.noarch
python-pulp-oid_validation-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-pulp-ostree-common-1.2.2-0.1.alpha.git.3.809be44.el7.noarch
python-pulp-puppet-common-2.12.3-0.1.alpha.git.1.6953897.el7.noarch
python-pulp-python-common-1.1.4-0.1.alpha.git.31.36b75e3.el7.noarch
python-pulp-repoauth-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
python-pulp-rpm-common-2.12.3-0.1.alpha.git.11.d8adbfa.el7.noarch
python-pulp-streamer-2.12.3-0.1.alpha.git.1198.db2de38.el7.noarch
# rpm -qa | grep -i mongo | sort
mongodb-2.6.12-4.el7.x86_64
mongodb-server-2.6.12-4.el7.x86_64
python-mongoengine-0.10.5-1.el7.noarch
python-pymongo-3.2-1.el7.x86_64
python-pymongo-gridfs-3.2-1.el7.x86_64
Updated by ttereshc over 7 years ago
- Priority changed from Normal to High
- Triaged changed from No to Yes
Updated by bmbouter over 7 years ago
The feature where pulp-manage-db waits to run until other processes have exited uses the records that back the status API. In 2.12, for pulp_resource_manager and pulp_workers to have their presence recorded as status API records, the pulp_celerybeat process must be running. In 2.13+, each process records its own status API records, so it shouldn't be dependent on other processes running.
All ^ is to ask: when pulp-manage-db continues when it shouldn't, what do the status API records show? Since 2.13 is the current GA, maybe this could be rerun with only 2.13 and include the output of the status API in cases where it fails to wait even though it should?
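For reference, those status API records can also be read over HTTP: in Pulp 2 the endpoint is /pulp/api/v2/status/, and its known_workers field reflects the same records as the db.workers collection discussed below. A minimal check, assuming a default install on localhost with a self-signed certificate:

# Minimal status-API read. Assumes a default Pulp 2 install on localhost;
# verify=False tolerates the self-signed certificate such installs use.
import requests

response = requests.get('https://localhost/pulp/api/v2/status/', verify=False)
response.raise_for_status()
print(response.json().get('known_workers'))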
Updated by Ichimonji10 over 7 years ago
All ^ is to ask: when pulp-manage-db continues when it shouldn't, what do the status API records show?
Is executing pulp-admin status an appropriate way to discover this information? Or is there a better way?
Updated by bmbouter over 7 years ago
Ichimonji10 wrote:
All ^ is to ask: when pulp-manage-db continues when it shouldn't, what do the status API records show?
Is executing pulp-admin status an appropriate way to discover this information? Or is there a better way?
pulp-admin status is good, but it also does some filtering. Really, getting the db records with the mongo query db.workers.find().pretty() would be the best. For example, I get this:
> db.workers.find().pretty()
{
"_id" : "scheduler@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:39.183Z")
}
{
"_id" : "reserved_resource_worker-3@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:39.188Z")
}
{
"_id" : "reserved_resource_worker-0@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:39.357Z")
}
{
"_id" : "reserved_resource_worker-2@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:39.196Z")
}
{
"_id" : "reserved_resource_worker-1@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:39.202Z")
}
{
"_id" : "resource_manager@dev",
"last_heartbeat" : ISODate("2017-05-12T16:12:41.573Z")
}
Updated by Ichimonji10 over 7 years ago
Neato. That makes more sense. So is this an appropriate command to run?
mongo pulp_database --eval db.workers.find().pretty()
(/me just tries it and sees what happens.)
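One note on that command: from an interactive shell, the --eval expression needs quoting, as in the invocation shown further below; passing it as its own argument-vector element, the way the diff in the next update does, sidesteps shell quoting entirely. A standalone equivalent:

# Standalone equivalent of the pulp-smash cli.Client call in the next
# update's diff; each argv element is passed as-is, so no shell quoting.
import subprocess

output = subprocess.check_output(
    ['mongo', 'pulp_database', '--eval', 'db.workers.find().pretty()'])
print(output.decode('utf-8'))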
Updated by Ichimonji10 over 7 years ago
To reproduce this bug and get some additional debugging output, I did the following:
1. Modify the Pulp Smash source code.
2. Run the modified test.
The modifications consisted of several changes to pulp_smash.tests.platform.cli.test_pulp_manage_db.NegativeTestCase, like the following (this is not a full diff):
diff --git a/pulp_smash/tests/platform/cli/test_pulp_manage_db.py b/pulp_smash/tests/platform/cli/test_pulp_manage_db.py
index 10b180b..fb02726 100644
--- a/pulp_smash/tests/platform/cli/test_pulp_manage_db.py
+++ b/pulp_smash/tests/platform/cli/test_pulp_manage_db.py
@@ -106,11 +121,16 @@ class NegativeTestCase(BaseTestCase):
         This test targets `Pulp #2684 <https://pulp.plan.io/issues/2684>`_.
         """
-        if selectors.bug_is_untestable(2684, self.cfg.version):
-            self.skipTest('https://pulp.plan.io/issues/2684')
         cli.GlobalServiceManager(config.get_config()).stop((
             CONFLICTING_SERVICES.difference(('pulp_workers',))
         ))
+        print('test_workers_running')
+        print(cli.Client(self.cfg).run((
+            'mongo', 'pulp_database', '--eval',
+            'db.workers.find().pretty()'
+        )).stdout)
         self._do_test()

     def _do_test(self):
The modified tests generally do the following:
1. Stop some number of services.
2. Print the newly inserted debugging output.
3. Execute runuser --shell /bin/sh --command pulp-manage-db - apache, and assert that the command fails. If the command succeeds, print some additional debugging output.
I ran these modified tests against a matrix of nine systems, where one axis is F24, F25 and RHEL 7, and the other axis is Pulp 2.13 (stable), Pulp 2.13 (nightly), and Pulp 2.14 (nightly). I was able to reproduce test failures with test_resource_manager_running on at least the following platforms. Note that it may be possible to reproduce these test failures on the platforms marked with an ✗, too; I'm just showing what I was able to reproduce with a limited number (~3) of test runs.
=================== === === =====
Pulp Version F24 F25 RHEL7
=================== === === =====
Pulp 2.13 (stable) ✓ ✓ ✗
Pulp 2.13 (nightly) ✓ ✗ ✗
Pulp 2.14 (nightly) ✓ ✓ ✗
=================== === === =====
I was unable to reproduce the test_workers_running test failure on any platform. The issue may still be there; again, this is just what I was able to reproduce with a limited number of test runs.
Here's the output of mongo pulp_database --eval 'db.workers.find().pretty()', right before pulp-manage-db executes. The output is similar for all test failures, with the only difference being timestamps. This output is taken from Fedora 24, with Pulp 2.13 stable.
test_resource_manager_running
MongoDB shell version: 3.2.12
connecting to: pulp_database
{
"_id" : "resource_manager@fedora-24-pulp-2-13-final",
"last_heartbeat" : ISODate("2017-05-12T17:15:55.451Z")
}
Here's the contents of stderr, from when the pulp-manage-db command ran. This output is taken from the same system.
Attempting to connect to localhost:27017
Attempting to connect to localhost:27017
Write concern for Mongo connection: {}
Loading content types.
Loading type descriptors []
Parsing type descriptors
Validating type descriptor syntactic integrity
Validating type descriptor semantic integrity
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: ostree = pulp_ostree.plugins.db.model:Branch
Loading unit model: python_package = pulp_python.plugins.models:Package
Updating the database with types []
Found the following type definitions that were not present in the update collection [puppet_module, ostree, package_langpacks, erratum, docker_blob, docker_tag, distribution, package_group, package_category, iso, package_environment, drpm, python_package, srpm, rpm, yum_repo_metadata_file, docker_image, docker_manifest]
Updating the database with types [puppet_module, ostree, package_langpacks, erratum, docker_blob, docker_tag, distribution, package_group, package_category, iso, package_environment, drpm, python_package, srpm, rpm, yum_repo_metadata_file, docker_image, docker_manifest]
Content types loaded.
Ensuring the admin role and user are in place.
Admin role and user are in place.
Beginning database migrations.
Migration package pulp.server.db.migrations is up to date at version 28
Migration package pulp_docker.plugins.migrations is up to date at version 3
Migration package pulp_puppet.plugins.migrations is up to date at version 5
Migration package pulp_python.plugins.migrations is up to date at version 2
Migration package pulp_rpm.plugins.migrations is up to date at version 39
Loading unit model: erratum = pulp_rpm.plugins.db.models:Errata
Loading unit model: srpm = pulp_rpm.plugins.db.models:SRPM
Loading unit model: yum_repo_metadata_file = pulp_rpm.plugins.db.models:YumMetadataFile
Loading unit model: package_group = pulp_rpm.plugins.db.models:PackageGroup
Loading unit model: package_category = pulp_rpm.plugins.db.models:PackageCategory
Loading unit model: iso = pulp_rpm.plugins.db.models:ISO
Loading unit model: package_environment = pulp_rpm.plugins.db.models:PackageEnvironment
Loading unit model: drpm = pulp_rpm.plugins.db.models:DRPM
Loading unit model: distribution = pulp_rpm.plugins.db.models:Distribution
Loading unit model: rpm = pulp_rpm.plugins.db.models:RPM
Loading unit model: package_langpacks = pulp_rpm.plugins.db.models:PackageLangpacks
Loading unit model: puppet_module = pulp_puppet.plugins.db.models:Module
Loading unit model: docker_blob = pulp_docker.plugins.models:Blob
Loading unit model: docker_tag = pulp_docker.plugins.models:Tag
Loading unit model: docker_image = pulp_docker.plugins.models:Image
Loading unit model: docker_manifest = pulp_docker.plugins.models:Manifest
Loading unit model: ostree = pulp_ostree.plugins.db.model:Branch
Loading unit model: python_package = pulp_python.plugins.models:Package
Database migrations complete.
Here's the output of stdout, from when systemctl status ran:
● fedora-24-pulp-2-13-final
State: running
Jobs: 0 queued
Failed: 0 units
Since: Fri 2017-05-12 20:46:39 EDT; 7h left
CGroup: /
├─user.slice
│ └─user-0.slice
│ ├─session-44.scope
│ │ ├─8687 sshd: root [priv]
│ │ ├─8698 sshd: root@notty
│ │ └─8701 -bash
│ ├─user@0.service
│ │ └─init.scope
│ │ ├─1282 /usr/lib/systemd/systemd --user
│ │ └─1297 (sd-pam)
│ ├─session-43.scope
│ │ ├─8645 sshd: root [priv]
│ │ ├─8664 sshd: root@notty
│ │ └─8669 -bash
│ └─session-30.scope
│ ├─5458 sshd: root [priv]
│ ├─5461 sshd: root@notty
│ ├─8133 -bash
│ ├─8764 -bash
│ ├─8826 -bash
│ ├─8841 -bash
│ ├─8905 -bash
│ ├─8920 -bash
│ ├─8979 -bash
│ ├─9012 -bash
│ ├─9068 bash -c cd /root && /usr/bin/systemctl status
│ └─9077 /usr/bin/systemctl status
├─init.scope
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 23
└─system.slice
├─lvm2-lvmetad.service
│ └─562 /usr/sbin/lvmetad -f -t 3600
├─rngd.service
│ └─686 /sbin/rngd -f
├─firewalld.service
│ └─727 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid
├─pulp_streamer.service
│ └─1030 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac
├─gssproxy.service
│ └─697 /usr/sbin/gssproxy -D
├─mcelog.service
│ └─685 /usr/sbin/mcelog --ignorenodev --daemon --foreground
├─NetworkManager.service
│ ├─752 /usr/sbin/NetworkManager --no-daemon
│ └─955 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-ens3.pid -lf /var/lib/NetworkManager/dhclient-917ba805-2398-3b85-a40d-a22af7988efd-ens3.lease -cf /var/lib/NetworkManager/dhclient-ens3.conf ens3
├─dbus.service
│ └─687 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
├─httpd.service
│ ├─7769 /usr/sbin/httpd -DFOREGROUND
│ ├─7776 (wsgi:pulp) -DFOREGROUND
│ ├─7777 (wsgi:pulp) -DFOREGROUND
│ ├─7778 (wsgi:pulp) -DFOREGROUND
│ ├─7779 (wsgi:pulp-cont -DFOREGROUND
│ ├─7780 (wsgi:pulp-cont -DFOREGROUND
│ ├─7781 (wsgi:pulp-cont -DFOREGROUND
│ ├─7782 (wsgi:pulp_forg -DFOREGROUND
│ ├─7783 (wsgi:pulp_forg -DFOREGROUND
│ ├─7784 (wsgi:pulp_forg -DFOREGROUND
│ ├─7785 /usr/sbin/httpd -DFOREGROUND
│ ├─7786 /usr/sbin/httpd -DFOREGROUND
│ ├─7789 /usr/sbin/httpd -DFOREGROUND
│ ├─7810 /usr/sbin/httpd -DFOREGROUND
│ └─7811 /usr/sbin/httpd -DFOREGROUND
├─sshd.service
│ └─785 /usr/sbin/sshd
├─abrt-xorg.service
│ └─741 /usr/bin/abrt-dump-journal-xorg -fxtD
├─squid.service
│ ├─ 805 /usr/sbin/squid -f /etc/squid/squid.conf
│ ├─ 807 (squid-1) -f /etc/squid/squid.conf
│ └─1000 (logfile-daemon) /var/log/squid/access.log
├─abrt-oops.service
│ └─742 /usr/bin/abrt-dump-journal-oops -fxtD
├─system-getty.slice
│ └─getty@tty1.service
│ └─743 /sbin/agetty --noclear tty1 linux
├─smartd.service
│ └─692 /usr/sbin/smartd -n -q never
├─abrtd.service
│ └─703 /usr/sbin/abrtd -d -s
├─systemd-logind.service
│ └─696 /usr/lib/systemd/systemd-logind
├─crond.service
│ └─732 /usr/sbin/crond -n
├─polkit.service
│ └─694 /usr/lib/polkit-1/polkitd --no-debug
├─mongod.service
│ └─8888 /usr/bin/mongod -f /etc/mongod.conf run
├─pulp_resource_manager.service
│ ├─8392 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
│ └─8502 /usr/bin/python2 /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
├─systemd-udevd.service
│ └─581 /usr/lib/systemd/systemd-udevd
├─rsyslog.service
│ └─684 /usr/sbin/rsyslogd -n
├─atd.service
│ └─731 /usr/sbin/atd -f
├─systemd-journald.service
│ └─543 /usr/lib/systemd/systemd-journald
├─auditd.service
│ └─663 /sbin/auditd
├─qpidd.service
│ └─777 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf
└─chronyd.service
└─707 /usr/sbin/chronyd
Updated by bmbouter over 7 years ago
Thanks a lot Ichimonji10!
It sounds like the check here[0] is seeing still_active_workers == [] when it should be seeing len(still_active_workers) > 0.
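In other words, the suspicion is that the guard samples the worker records at a moment when the relevant heartbeat rows are missing, so the list comes back empty and the guard proceeds. A minimal reconstruction of that failure mode, with hypothetical names (this is not Pulp's actual code):

# Hypothetical reconstruction of the suspected failure mode; names and
# structure are illustrative, not Pulp's implementation.
from pymongo import MongoClient


def conflicting_processes_running():
    workers = MongoClient('localhost', 27017)['pulp_database']['workers']
    still_active_workers = list(workers.find())
    # If pulp_resource_manager is up but its heartbeat record has not been
    # written yet (or has been aged out), this list is [] and the check
    # wrongly concludes that nothing conflicting is running.
    return len(still_active_workers) > 0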
Updated by bmbouter over 5 years ago
- Status changed from NEW to CLOSED - WONTFIX
Updated by bmbouter over 5 years ago
Pulp 2 is approaching maintenance mode, and this Pulp 2 ticket is not being actively worked on. As such, it is being closed as WONTFIX. Pulp 2 is still accepting contributions though, so if you want to contribute a fix for this ticket, please reopen or comment on it. If you don't have permissions to reopen this ticket, or you want to discuss an issue, please reach out via the developer mailing list.