Issue #8782

Intermittent psycopg2.errors.AdminShutdown errors in galaxy_ng dev env and pulp-all-in-one

Added by alikins 5 months ago. Updated 4 months ago.

Status:
NEW
Priority:
Normal
Assignee:
-
Category:
-
Sprint/Milestone:
-
Start date:
Due date:
Estimated time:
Severity:
2. Medium
Version:
Platform Release:
OS:
Triaged:
Yes
Groomed:
No
Sprint Candidate:
No
Tags:
Dev Environment, GalaxyNG
Sprint:
Quarter:

Description

A few folks have run into an issue where, when starting up the services (including galaxy_ng), all of the pulp services start throwing errors like:

worker_1            | pulp [None]: rq.worker:ERROR: Worker rq:worker:1@440b6c94ceca: found an unhandled exception, quitting...
worker_1            | Traceback (most recent call last):
worker_1            |   File "/venv/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
worker_1            |     return self.cursor.execute(sql, params)
worker_1            | psycopg2.errors.AdminShutdown: terminating connection due to administrator command
worker_1            | server closed the connection unexpectedly
worker_1            | 	This probably means the server terminated abnormally
worker_1            | 	before or while processing the request.

I've seen it with a roughly-master galaxy_ng using the galaxy_ng dev containers. In that scenario it seems to be triggered by starting the ui component with ./compose up ui.

bmbouter saw it at least once with the pulp-all-in-one container, but it is hard to reproduce.

Filing it here for reference.
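The AdminShutdown error means the Postgres server itself terminated the client's connection, which typically happens when the database is stopped or restarted while the Django/rq services still hold open connections. As a rough diagnostic (a sketch: the grep patterns are standard Postgres log phrases, but where a given setup writes them is environment-specific), the captured service logs can be scanned for a Postgres restart around the time the workers start failing:

```shell
# Sketch: filter captured service logs for signs that Postgres restarted
# while the Pulp services still held open connections. The patterns are
# standard Postgres log messages; pipe your compose/podman logs into it.
pg_restart_markers() {
  grep -Ei 'terminating connection due to administrator command|database system is shutting down|database system is ready to accept connections'
}

# Example (log source is an assumption about the dev environment):
#   docker-compose logs postgres | pg_restart_markers
```

If a "shutting down" / "ready to accept connections" pair shows up just before the worker tracebacks, that would support the restarted-database theory.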

compose_up.txt (38 KB), added by alikins, 05/19/2021 05:31 PM

History

#1 Updated by alikins 5 months ago

A longer sample of the errors and tracebacks is attached.

#2 Updated by bmbouter 5 months ago

I've observed this on the single container when putting together my demo recently. It happens about 25% of the time when I run this script:

#!/usr/bin/env bash

podman stop pulp
podman rm pulp
cd /home/bmbouter/Documents/Presentations/
sudo rm -rf pulp

mkdir pulp
cd pulp
mkdir settings pulp_storage pgsql containers

echo "CONTENT_ORIGIN='http://$(hostname):8080'
ANSIBLE_API_HOSTNAME='http://$(hostname):8080'
ANSIBLE_CONTENT_HOSTNAME='http://$(hostname):8080/pulp/content'
TOKEN_AUTH_DISABLED=True" >> settings/settings.py

tree .
printf "\n\nShowing contents of settings file settings/settings.py\n"
cat settings/settings.py

printf "\nStarting Pulp container...\n\n"

podman run --detach \
             --publish 8080:80 \
             --name pulp \
             --volume ./settings:/etc/pulp:Z \
             --volume ./pulp_storage:/var/lib/pulp:Z \
             --volume ./pgsql:/var/lib/pgsql:Z \
             --volume ./containers:/var/lib/containers:Z \
             --device /dev/fuse \
             pulp/pulp

sleep 10

podman exec -it pulp bash -c 'pulpcore-manager reset-admin-password'

This fails at the pulpcore-manager reset-admin-password step about 25% of the time.
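Since the failure looks timing-dependent, one hedged workaround is to replace the fixed sleep 10 with a readiness poll before running reset-admin-password. This is only a sketch: the /pulp/api/v3/status/ endpoint path and the published port 8080 are assumptions based on the script above.

```shell
#!/usr/bin/env bash
# Sketch of a readiness poll to replace the fixed `sleep 10`.

# wait_for_url URL [MAX_ATTEMPTS] [DELAY]: poll URL until it answers with
# an HTTP success status, or give up after MAX_ATTEMPTS tries.
wait_for_url() {
  local url="$1" max_attempts="${2:-30}" delay="${3:-2}"
  local attempt=0
  until curl -sf "$url" > /dev/null 2>&1; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1
    fi
    sleep "$delay"
  done
}

# Usage in the script above (replacing `sleep 10`):
#   wait_for_url "http://$(hostname):8080/pulp/api/v3/status/" && \
#     podman exec -it pulp bash -c 'pulpcore-manager reset-admin-password'
```

This would not fix the underlying restart, but it should avoid running the management command while the database is still coming up.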

#3 Updated by dkliban@redhat.com 4 months ago

  • Triaged changed from No to Yes
