Issue #2468

pulp-manage-db asks before continuing even when all Pulp services are gracefully stopped

Added by elyezer about 6 years ago. Updated over 3 years ago.

Status:
CLOSED - NOTABUG
Priority:
Normal
Assignee:
Category:
-
Sprint/Milestone:
-
Start date:
Due date:
Estimated time:
Severity:
2. Medium
Version:
2.11.0
Platform Release:
OS:
Triaged:
No
Groomed:
No
Sprint Candidate:
No
Tags:
Pulp 2
Sprint:
Quarter:

Description

During the upgrade from the latest Pulp 2.10 stable to the latest Pulp 2.11 beta, pulp-manage-db asks the following:

There are still running workers, continuing could corrupt your Pulp installation. Are you sure you wish to continue? (y/N):

This happens even when all Pulp services have been gracefully stopped.

Steps to reproduce:

  1. Install the latest Pulp 2.10 stable (the pulp_packaging Ansible playbook was used to do this)
  2. Upgrade the system following these steps:
  • Update the pulp.repo file to point to the latest 2.11 beta repository
  • Stop all Pulp services: httpd, pulp_workers, pulp_celerybeat, pulp_resource_manager
  • Clean the package cache: yum clean all
  • Update packages: yum -y update
  • Run pulp-manage-db: sudo -u apache pulp-manage-db

The upgrade steps were also executed from within an Ansible playbook, which got stuck when running pulp-manage-db since the command was waiting for input in order to proceed.
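For reference, the manual upgrade steps above can be sketched as a shell script. This is a sketch under the assumptions in the report (RHEL 7, run as root); the pulp.repo edit is left as a comment since the exact beta repository URL is not given here.

```shell
#!/bin/sh
set -e

# 1. Point pulp.repo at the latest 2.11 beta repository
#    (edit /etc/yum.repos.d/pulp.repo by hand; URL not shown in the report).

# 2. Stop all Pulp services.
systemctl stop httpd pulp_workers pulp_celerybeat pulp_resource_manager

# 3. Clean the package cache and update.
yum clean all
yum -y update

# 4. Run the database migrations as the apache user.
sudo -u apache pulp-manage-db
```

Run non-interactively (e.g. from Ansible), the last step is where the run hangs, because pulp-manage-db waits on the `(y/N)` prompt.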

I sshed into the machine and, after checking the service status, tried the pulp-manage-db command manually:

# systemctl status httpd pulp_celerybeat pulp_resource_manager pulp_workers
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2016-12-05 09:08:45 EST; 2h 33min ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 17163 (code=exited, status=0/SUCCESS)
   Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"

● pulp_celerybeat.service - Pulp's Celerybeat
   Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2016-12-05 09:08:55 EST; 2h 32min ago
 Main PID: 17658 (code=exited, status=0/SUCCESS)

● pulp_resource_manager.service - Pulp Resource Manager
   Loaded: loaded (/usr/lib/systemd/system/pulp_resource_manager.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2016-12-05 09:09:00 EST; 2h 32min ago
 Main PID: 17701 (code=exited, status=0/SUCCESS)

● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2016-12-05 09:08:51 EST; 2h 32min ago
 Main PID: 17436 (code=exited, status=0/SUCCESS)

# sudo -u apache pulp-manage-db
Attempting to connect to localhost:27017
Attempting to connect to localhost:27017
Write concern for Mongo connection: {}
There are still running workers, continuing could corrupt your Pulp installation. Are you sure you wish to continue? (y/N): n

I've removed the logging entries from the systemctl output above.

All the steps were run on a RHEL 7 machine, and after the upgrade the system has the following Pulp packages:

# rpm -qa | grep pulp | sort
pulp-admin-client-2.11.0-0.4.beta.el7.noarch
pulp-docker-admin-extensions-2.2.0-0.2.beta.el7.noarch
pulp-docker-plugins-2.2.0-0.2.beta.el7.noarch
pulp-ostree-admin-extensions-1.2.0-0.3.beta.el7.noarch
pulp-ostree-plugins-1.2.0-0.3.beta.el7.noarch
pulp-puppet-admin-extensions-2.11.0-0.4.beta.el7.noarch
pulp-puppet-plugins-2.11.0-0.4.beta.el7.noarch
pulp-python-admin-extensions-1.1.3-1.el7.noarch
pulp-python-plugins-1.1.3-1.el7.noarch
pulp-rpm-admin-extensions-2.11.0-0.4.beta.el7.noarch
pulp-rpm-plugins-2.11.0-0.4.beta.el7.noarch
pulp-selinux-2.11.0-0.4.beta.el7.noarch
pulp-server-2.11.0-0.4.beta.el7.noarch
python-isodate-0.5.0-4.pulp.el7.noarch
python-kombu-3.0.33-6.pulp.el7.noarch
python-pulp-bindings-2.11.0-0.4.beta.el7.noarch
python-pulp-client-lib-2.11.0-0.4.beta.el7.noarch
python-pulp-common-2.11.0-0.4.beta.el7.noarch
python-pulp-docker-common-2.2.0-0.2.beta.el7.noarch
python-pulp-oid_validation-2.11.0-0.4.beta.el7.noarch
python-pulp-ostree-common-1.2.0-0.3.beta.el7.noarch
python-pulp-puppet-common-2.11.0-0.4.beta.el7.noarch
python-pulp-python-common-1.1.3-1.el7.noarch
python-pulp-repoauth-2.11.0-0.4.beta.el7.noarch
python-pulp-rpm-common-2.11.0-0.4.beta.el7.noarch
python-pulp-streamer-2.11.0-0.4.beta.el7.noarch
Actions #1

Updated by elyezer about 6 years ago

  • Description updated (diff)
Actions #2

Updated by bmbouter about 6 years ago

I know why this is occurring. The database cleanup on graceful shutdown was added in 2.11 itself, so if you gracefully shut down 2.10 or earlier, upgrade, and then run pulp-manage-db, it will still think workers are running when they are not.

The only thing I can think to do is to clarify the release note on this feature. elyezer, is that what you recommend, or maybe something else?

Note also that if the workers have been shut off (gracefully or not) for 300 seconds, they will be considered stopped, so the upgrade must happen quickly for this to be an issue. This is what I see in the code, but confirming that behavior with some testing would be helpful.
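The 300-second cutoff described above amounts to a staleness check on each worker's last heartbeat. A minimal sketch of that logic, assuming the behavior stated in the comment (the function name and the way the heartbeat is passed in are illustrative, not Pulp's actual API):

```shell
# A worker is treated as stopped once its last heartbeat is
# 300 or more seconds old (per the comment above).
WORKER_TIMEOUT=300

is_worker_stopped() {
  # $1: the worker's last heartbeat as a Unix timestamp
  now=$(date +%s)
  age=$(( now - $1 ))
  if [ "$age" -ge "$WORKER_TIMEOUT" ]; then
    echo "stopped"
  else
    echo "running"
  fi
}

# A heartbeat from 10 minutes ago is past the cutoff...
is_worker_stopped $(( $(date +%s) - 600 ))   # stopped
# ...while one from 10 seconds ago is not.
is_worker_stopped $(( $(date +%s) - 10 ))    # running
```

This is why the prompt only appears when pulp-manage-db runs within 300 seconds of stopping the services: the pre-2.11 workers left their records behind, and only the timeout, not the graceful-shutdown cleanup, can age them out.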

Actions #3

Updated by semyers about 6 years ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to semyers

Added by semyers about 6 years ago

Revision 70f8726b

Update release notes about migration caveat

closes #2468 https://pulp.plan.io/issues/2468

Actions #4

Updated by semyers about 6 years ago

  • Status changed from ASSIGNED to POST
Actions #5

Updated by semyers about 6 years ago

  • Status changed from POST to MODIFIED
Actions #6

Updated by semyers about 6 years ago

  • Status changed from MODIFIED to ASSIGNED
  • Assignee changed from semyers to bizhang

Added by bizhang about 6 years ago

Revision bf2d1d4a

Revert "Update release notes about migration caveat"

This reverts commit 70f8726b96decbcfa9ef1b799809beeb864d50a4.

re #2186 #2468 https://pulp.plan.io/issues/2186 https://pulp.plan.io/issues/2468

Actions #7

Updated by bizhang about 6 years ago

  • Status changed from ASSIGNED to CLOSED - NOTABUG

Closed since the offending feature #2186 was removed.

Actions #8

Updated by bmbouter over 3 years ago

  • Tags Pulp 2 added
