Issue #470
Scheduler not able to recover from very long mongo and/or qpidd outage
Status: Closed
Description
Celerybeat uses pulp.server.async.scheduler, which provides correct reconnect support if either Mongo or Qpid goes down and then comes back later. For normal outages (minutes or hours), the reconnect support works fine. For outages that last on the order of days, the user will eventually see the following message:
pulp.server.async.scheduler:ERROR: [Errno 24] Too many open files
Once the user sees that message, reconnect support no longer works and the celerybeat service needs to be restarted. Something in the reconnect support is consuming a file descriptor with each reconnect attempt.
I'm not sure if it is Qpid or Mongo that causes this, so I'm identifying them both as possible causes.
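The sketch below is not the actual scheduler or kombu code; it only illustrates the general pattern that produces this symptom: a retry loop that creates a fresh connection object on every attempt but never closes the failed one, so each attempt permanently consumes a file descriptor until the process hits its limit and raises "[Errno 24] Too many open files".

```python
# Illustrative sketch of the leak pattern (hypothetical, not Pulp's code).
import socket
import time

def reconnect_forever(host, port, interval=1.0):
    while True:
        sock = socket.socket()          # allocates a new file descriptor
        try:
            sock.connect((host, port))  # fails while the broker/DB is down
            return sock
        except OSError:
            # BUG: the failed socket is never closed, so its descriptor is
            # held until the process exits. A correct loop would call
            # sock.close() (or use try/finally) before sleeping and retrying.
            time.sleep(interval)
```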
To reproduce:
1. Stop all Pulp services
2. Start Mongo
3. Start Qpid
4. Start celerybeat
5. Stop Mongo
6. Stop Qpid
7. Observe the reconnects trying over and over
8. Wait a long time (like overnight)
9. Observe the error message above in the logs
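While waiting (steps 7 and 8), the leak can be confirmed by watching the celerybeat process's open file descriptor count climb toward the ulimit (commonly 1024). A minimal sketch, assuming a Linux /proc filesystem and that you supply the celerybeat PID yourself:

```python
# Poll the number of open file descriptors for a given PID (Linux only).
import os
import time

def count_fds(pid):
    return len(os.listdir('/proc/%d/fd' % pid))

def watch(pid, interval=60):
    while True:
        print('%s open fds: %d' % (time.strftime('%H:%M:%S'), count_fds(pid)))
        time.sleep(interval)

# Example: watch(12345)  # replace 12345 with the actual celerybeat PID
```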
This bug was cloned from Bugzilla Bug #1118404.
Fixes a file descriptor leak in python-kombu
This is a fix for upstream kombu issue #476: https://github.com/celery/kombu/issues/476
fixes #470 https://pulp.plan.io/issues/470
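For reference, this is not the actual kombu patch (see the upstream issue for that); it is only a sketch of the general shape of the fix: every failed connection attempt closes the connection it created, so retrying does not accumulate open sockets. The broker URL and retry interval are placeholders.

```python
# Sketch of a leak-free retry loop using kombu (illustrative only).
import time
from kombu import Connection

def connect_with_retry(url='amqp://guest@localhost//', interval=5.0):
    while True:
        conn = Connection(url)
        try:
            conn.connect()
            return conn
        except conn.connection_errors:
            # Release the socket/fd held by the failed attempt before retrying.
            conn.close()
            time.sleep(interval)
```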