Issue #8566

closed

Content Migration to Pulp 3 with Katello fails (similar to #8377)

Added by cmaso almost 3 years ago. Updated over 2 years ago.

Status:
CLOSED - CURRENTRELEASE
Priority:
Normal
Assignee:
Sprint/Milestone:
Start date:
Due date:
Estimated time:
Severity:
2. Medium
Platform Release:
OS:
Triaged:
No
Groomed:
No
Sprint Candidate:
No
Tags:
Katello
Sprint:
Sprint 97
Quarter:

Description

On my Katello system (Katello: 3.18.2, Foreman: 2.3.3) the content migration to Pulp3 fails with this message:

ForemanTasks::TaskError: Task 62598cf8-3107-4255-b321-1362c889b3a3: Katello::Errors::Pulp3Error: No declared artifact with relative path ".treeinfo" for content "<DistributionTree: pk=64f44866-0207-4005-9c06-0f45e52cbdd1>"

Installed Pulp modules are:

pulp-2to3-migration (0.11.0)
pulp-certguard (1.0.3)
pulp-container (2.1.0)
pulp-deb (2.7.0)
pulp-file (1.3.0)
pulp-rpm (3.10.0)
pulpcore (3.7.5)

All my repositories are configured to sync immediately.

The database contains the following for the mentioned content_id:

pulpcore=# select pulp_id, path, platforms from rpm_image where distribution_tree_id = '64f44866-0207-4005-9c06-0f45e52cbdd1';
               pulp_id                |           path            |  platforms
--------------------------------------+---------------------------+-------------
 9f630766-ffc7-4147-a8b5-133d1075b102 | images/boot.iso           | x86_64
 c84efc63-5963-49b9-889c-01e4d8518e25 | images/efiboot.img        | x86_64
 5d7e5127-ad71-4d2e-84d0-eeb0df61fbad | images/pxeboot/initrd.img | x86_64, xen
 630b295b-86f4-4fe4-a965-58f59ed19768 | images/pxeboot/vmlinuz    | x86_64, xen
(4 rows)
pulpcore=# select * from core_contentartifact where content_id = '64f44866-0207-4005-9c06-0f45e52cbdd1';
               pulp_id                |         pulp_created          |       pulp_last_updated       | relative_path |             artifact_id              |
        content_id
--------------------------------------+-------------------------------+-------------------------------+---------------+--------------------------------------+------
--------------------------------
 3023c31b-78be-4728-a770-6140d1550811 | 2021-04-15 10:10:56.816636+02 | 2021-04-15 10:10:56.816651+02 | .treeinfo     | e76343f3-3bab-4b7b-912f-954ff3a24714 | 64f44
866-0207-4005-9c06-0f45e52cbdd1
(1 row)
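
For comparison, a diagnostic query along these lines (just a sketch, reusing only the tables and columns from the queries above) shows, per distribution tree, how many image rows exist versus how many ContentArtifacts were declared; a complete tree should have one ContentArtifact per image plus one for .treeinfo:

-- illustrative only: join rpm_image and core_contentartifact on the shared tree id
SELECT img.distribution_tree_id,
       count(DISTINCT img.pulp_id) AS images,
       count(DISTINCT ca.pulp_id)  AS content_artifacts
FROM rpm_image img
LEFT JOIN core_contentartifact ca ON ca.content_id = img.distribution_tree_id
GROUP BY img.distribution_tree_id;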

Thanks in advance!


Related issues

Related to RPM Support - Issue #9583: Distribution tree uniqueness constraint is not enough for a suboptimal .treeinfo (CLOSED - DUPLICATE, assignee: ttereshc)
Actions #1

Updated by daviddavis almost 3 years ago

  • Project changed from Pulp to Migration Plugin
Actions #2

Updated by ttereshc almost 3 years ago

Hi, could you provide the full traceback from your logs? It should be available in /var/log/messages and/or via journalctl.

I see the reference to #8377 but having your traceback might still be helpful. Thanks.

Actions #3

Updated by cmaso almost 3 years ago

Sure, here it is:

Apr 21 07:36:18 stg-katello pulpcore-worker-3: /usr/lib/python3.6/site-packages/django/db/models/fields/__init__.py:1427: RuntimeWarning: DateTimeField Pulp2RepoContent.pulp2_created received a naive datetime (2021-04-19 23:05:22) while time zone support is active.
Apr 21 07:36:18 stg-katello pulpcore-worker-3: RuntimeWarning)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: pulp: rq.worker:ERROR: Traceback (most recent call last):
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/worker.py", line 936, in perform_job
Apr 21 07:36:18 stg-katello pulpcore-worker-3: rv = job.perform()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/job.py", line 684, in perform
Apr 21 07:36:18 stg-katello pulpcore-worker-3: self._result = self._execute()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/job.py", line 690, in _execute
Apr 21 07:36:18 stg-katello pulpcore-worker-3: return self.func(*self.args, **self.kwargs)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/tasks/migrate.py", line 81, in migrate_from_pulp2
Apr 21 07:36:18 stg-katello pulpcore-worker-3: migrate_content(plan, skip_corrupted=skip_corrupted)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/migration.py", line 47, in migrate_content
Apr 21 07:36:18 stg-katello pulpcore-worker-3: plugin.migrator.migrate_content_to_pulp3(skip_corrupted=skip_corrupted)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/rpm/migrator.py", line 145, in migrate_content_to_pulp3
Apr 21 07:36:18 stg-katello pulpcore-worker-3: loop.run_until_complete(dm.create())
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
Apr 21 07:36:18 stg-katello pulpcore-worker-3: return future.result()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/content.py", line 89, in create
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await pipeline
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await asyncio.gather(*futures)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await self.run()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 244, in run
Apr 21 07:36:18 stg-katello pulpcore-worker-3: RemoteArtifact.objects.bulk_get_or_create(self._needed_remote_artifacts(batch))
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 301, in _needed_remote_artifacts
Apr 21 07:36:18 stg-katello pulpcore-worker-3: msg.format(rp=content_artifact.relative_path, c=d_content.content)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: ValueError: No declared artifact with relative path ".treeinfo" for content "<DistributionTree: pk=d0a0b30f-8cfc-4d76-96da-c2317f720819>"
Apr 21 07:36:18 stg-katello pulpcore-worker-3: Traceback (most recent call last):
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/worker.py", line 936, in perform_job
Apr 21 07:36:18 stg-katello pulpcore-worker-3: rv = job.perform()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/job.py", line 684, in perform
Apr 21 07:36:18 stg-katello pulpcore-worker-3: self._result = self._execute()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/rq/job.py", line 690, in _execute
Apr 21 07:36:18 stg-katello pulpcore-worker-3: return self.func(*self.args, **self.kwargs)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/tasks/migrate.py", line 81, in migrate_from_pulp2
Apr 21 07:36:18 stg-katello pulpcore-worker-3: migrate_content(plan, skip_corrupted=skip_corrupted)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/migration.py", line 47, in migrate_content
Apr 21 07:36:18 stg-katello pulpcore-worker-3: plugin.migrator.migrate_content_to_pulp3(skip_corrupted=skip_corrupted)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/rpm/migrator.py", line 145, in migrate_content_to_pulp3
Apr 21 07:36:18 stg-katello pulpcore-worker-3: loop.run_until_complete(dm.create())
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
Apr 21 07:36:18 stg-katello pulpcore-worker-3: return future.result()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/content.py", line 89, in create
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await pipeline
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await asyncio.gather(*futures)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
Apr 21 07:36:18 stg-katello pulpcore-worker-3: await self.run()
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 244, in run
Apr 21 07:36:18 stg-katello pulpcore-worker-3: RemoteArtifact.objects.bulk_get_or_create(self._needed_remote_artifacts(batch))
Apr 21 07:36:18 stg-katello pulpcore-worker-3: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 301, in _needed_remote_artifacts
Apr 21 07:36:18 stg-katello pulpcore-worker-3: msg.format(rp=content_artifact.relative_path, c=d_content.content)
Apr 21 07:36:18 stg-katello pulpcore-worker-3: ValueError: No declared artifact with relative path ".treeinfo" for content "<DistributionTree: pk=d0a0b30f-8cfc-4d76-96da-c2317f720819>"
Apr 21 07:36:18 stg-katello pulpcore-worker-3: pulp: rq.worker:INFO: Cleaning registries for queue: 44763@stg-katello.gtstg.lan
Apr 21 07:36:18 stg-katello pulpcore-worker-3: pulp: rq.worker:INFO: 44763@stg-katello.gtstg.lan: e26d231a-d1bc-49d6-aaea-de9b7fb59000
Apr 21 07:36:18 stg-katello pulpcore-worker-3: pulp: rq.worker:INFO: 44763@stg-katello.gtstg.lan: Job OK (e26d231a-d1bc-49d6-aaea-de9b7fb59000)

I don't know why the content id is different now, but when I run it multiple times today, the error is always at this id.

Actions #4

Updated by cmaso almost 3 years ago

I did some further testing today. With the above information I was able to find out which repository this content_id relates to: it was the High Availability repo. I've removed it from Katello and tried the migration again. Now it's failing again, and the related repo is the PowerTools repo.

So it looks like I now need to remove all repos except the Base repo, and then I should be able to run the migration?
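
(If the failing content has already been associated with a Pulp 3 repository, a lookup roughly like the one below may show which repository it belongs to. This is only a sketch; the pulpcore table and column names are assumptions and may differ on your version.)

SELECT r.name
FROM core_repositorycontent rc
JOIN core_repository r ON r.pulp_id = rc.repository_id
WHERE rc.content_id = '<pk from the error message>';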

Actions #5

Updated by jsherril@redhat.com almost 3 years ago

  • Tags Katello added
Actions #6

Updated by ttereshc almost 3 years ago

Hi cmaso,

I'm trying to reproduce your issue; could you share the repos you are talking about, preferably with the URLs you synced them from? Are you referring to CentOS 8 repos, e.g. http://ftp.cvut.cz/centos/8/ ?

Thanks!

Actions #7

Updated by cmaso almost 3 years ago

Hi ttereshc,

yes, it's about the CentOS 8 repos. In my Katello I have synchronized the following:

For a test I removed all but BaseOS and then the preparation was successful (but that's currently only on my test system; the production system still has all of them and needs all of them).

Actions #8

Updated by ttereshc almost 3 years ago

I ran into an issue when all the repos are on_demand: all additional repos apart from BaseOS are not migrated.
This is very CentOS 8 specific, because of how they construct those distribution trees.

If I make the repos immediate, then migration works fine for me (I'm on the same versions as mentioned in the description).

@cmaso, on your pulp2 system, do you have other distribution/kickstart repositories? or only centos8 ones? Do you have any content views in katello which are based on your centos8 repos? I'm trying to guess what can potentially be different, since I can't reproduce it.

Actions #9

Updated by jsherril@redhat.com almost 3 years ago

Would you be able to upload your pulp2 database? It is a large file, so you could use the Red Hat upload SFTP server here: https://access.redhat.com/solutions/2112#secureftp

Creating a tarball of /var/lib/mongodb/ and then uploading that would be helpful.

Actions #10

Updated by gvde almost 3 years ago

I think I am hitting the same issue. I'm just discussing this in the Foreman community:

https://community.theforeman.org/t/contentmigration-katello-pulp3error/23331

Actions #11

Updated by caseybea almost 3 years ago

I, along with gvde, am hitting this exact same issue.

What is quite likely bringing this to the forefront is that Katello just released the brand-new version 4, and upgrading to that version requires that you migrate your pulp2 content to pulp3 first. I had been putting that off for a while and just attempted it this week... and hit this issue. Verified it is caused by CentOS 8 repos in my environment.

Actions #12

Updated by cmaso almost 3 years ago

I ended up removing all my additional CentOS repos (kept only BaseOS) and then I was able to complete the migration.

After that I updated Katello to 4.0 and added the other repos again, and everything is now working (on Katello 4 and pulp3).

Actions #13

Updated by caseybea almost 3 years ago

Sadly, in my situation, wiping out my CentOS 8-like repos is not an option. I am deep in the middle of testing all of the popular CentOS 8 variants. Hoping for an update in the next week or so? I cannot migrate my katello to pulp3 currently. I do suspect more and more people will start hitting this.

Actions #14

Updated by gvde almost 3 years ago

Is there anything I can do to help get this fixed? I don't really want to remove all CentOS 8 repos or reinstall my katello servers from scratch again, so it would be nice if this could be fixed.

Actions #15

Updated by ttereshc almost 3 years ago

It would be helpful if someone could share their pulp2 mongodb dump, as asked in comment 9 https://pulp.plan.io/issues/8566#note-9. So far we can't reproduce this behavior locally.

If you are using the immediate download policy (like the reporter) for your repositories, please provide a pulp2 mongodb dump if you can. That will really help.

Actions #16

Updated by iballou almost 3 years ago

I've managed to reproduce what might be the same issue by syncing all of the repos that cmaso reported syncing. I also synced the CentOS Stream 8 version of HighAvailability (http://mirror.centos.org/centos/8-stream/HighAvailability/x86_64/os/). I was only able to reproduce it after changing the two HighAvailability repos from On Demand to Immediate and then re-syncing. Maybe that is related?

Traceback:

Prepare content for Pulp 3: 
Starting task.
2021-05-14 20:41:15 +0000: Migrating rpm content to Pulp 3 rpm 9001/9688
Migration failed, You will want to investigate: https://centos7-katello-3-18.cannolo.example.com/foreman_tasks/tasks/30171456-47b9-490d-a951-bfc0686024b3
rake aborted!
ForemanTasks::TaskError: Task 30171456-47b9-490d-a951-bfc0686024b3: Katello::Errors::Pulp3Error: No declared artifact with relative path "images/boot.iso" for content ""
/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.2.1/lib/katello/tasks/pulp3_migration.rake:33:in `block (2 levels) in '
/opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.0/exe/rake:27:in `'
Tasks: TOP => katello:pulp3_migration
(See full trace by running task with --trace)

---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/40c6a76b-6ea2-40ad-9bfc-35cb0800cad9/"
  pulp_created: '2021-05-14T20:39:45.926+00:00'
  state: failed
  name: pulp_2to3_migration.app.tasks.migrate.migrate_from_pulp2
  started_at: '2021-05-14T20:39:46.024+00:00'
  finished_at: '2021-05-14T20:41:13.078+00:00'
  error:
    traceback: |2
        File "/usr/lib/python3.6/site-packages/rq/worker.py", line 936, in perform_job
          rv = job.perform()
        File "/usr/lib/python3.6/site-packages/rq/job.py", line 684, in perform
          self._result = self._execute()
        File "/usr/lib/python3.6/site-packages/rq/job.py", line 690, in _execute
          return self.func(*self.args, **self.kwargs)
        File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/tasks/migrate.py", line 81, in migrate_from_pulp2
          migrate_content(plan, skip_corrupted=skip_corrupted)
        File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/migration.py", line 47, in migrate_content
          plugin.migrator.migrate_content_to_pulp3(skip_corrupted=skip_corrupted)
        File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/rpm/migrator.py", line 145, in migrate_content_to_pulp3
          loop.run_until_complete(dm.create())
        File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
          return future.result()
        File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/content.py", line 89, in create
          await pipeline
        File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
          await asyncio.gather(*futures)
        File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
          await self.run()
        File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 244, in run
          RemoteArtifact.objects.bulk_get_or_create(self._needed_remote_artifacts(batch))
        File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/artifact_stages.py", line 293, in _needed_remote_artifacts
          continue
    description: 'No declared artifact with relative path "images/boot.iso" for content
      ""'
  worker: "/pulp/api/v3/workers/6fedcc81-7aac-4b98-b107-3e257525a40d/"
  child_tasks: []
  task_group: "/pulp/api/v3/task-groups/7bfad8b5-d57f-40df-89f6-32993f7377bf/"
  progress_reports:
  - message: Pre-migrating Pulp 2 RPM content (general info)
    code: premigrating.content.general
    state: completed
    total: 9688
    done: 9688
  - message: Pre-migrating Pulp 2 RPM content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 9688
    done: 9688
  - message: Processing Pulp 2 repositories, importers, distributors
    code: processing.repositories
    state: completed
    total: 6
    done: 6
  - message: Pre-migrating Pulp 2 SRPM content (general info)
    code: premigrating.content.general
    state: completed
    total: 0
    done: 0
  - message: Pre-migrating Pulp 2 SRPM content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 0
    done: 0
  - message: Pre-migrating Pulp 2 ERRATUM content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 0
    done: 0
  - message: Pre-migrating Pulp 2 DISTRIBUTION content (general info)
    code: premigrating.content.general
    state: completed
    total: 6
    done: 6
  - message: Pre-migrating Pulp 2 DISTRIBUTION content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 6
    done: 6
  - message: Pre-migrating Pulp 2 ERRATUM content (general info)
    code: premigrating.content.general
    state: completed
    total: 0
    done: 0
  - message: Pre-migrating Pulp 2 MODULEMD content (general info)
    code: premigrating.content.general
    state: completed
    total: 83
    done: 83
  - message: Pre-migrating Pulp 2 MODULEMD content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 83
    done: 83
  - message: Pre-migrating Pulp 2 MODULEMD_DEFAULTS content (general info)
    code: premigrating.content.general
    state: completed
    total: 44
    done: 44
  - message: Pre-migrating Pulp 2 MODULEMD_DEFAULTS content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 44
    done: 44
  - message: Pre-migrating Pulp 2 PACKAGE_ENVIRONMENT content (general info)
    code: premigrating.content.general
    state: completed
    total: 7
    done: 7
  - message: Pre-migrating Pulp 2 PACKAGE_ENVIRONMENT content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 7
    done: 7
  - message: Creating repositories in Pulp 3
    code: creating.repositories
    state: completed
    total: 6
    done: 6
  - message: Pre-migrating Pulp 2 YUM_REPO_METADATA_FILE content (general info)
    code: premigrating.content.general
    state: completed
    total: 1
    done: 1
  - message: Pre-migrating Pulp 2 YUM_REPO_METADATA_FILE content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 1
    done: 1
  - message: Migrating importers to Pulp 3
    code: migrating.importers
    state: completed
    total: 6
    done: 6
  - message: Pre-migrating Pulp 2 PACKAGE_LANGPACKS content (general info)
    code: premigrating.content.general
    state: completed
    total: 0
    done: 0
  - message: Pre-migrating Pulp 2 PACKAGE_LANGPACKS content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 0
    done: 0
  - message: Migrating rpm content to Pulp 3 yum_repo_metadata_file
    code: migrating.rpm.content
    state: completed
    total: 1
    done: 1
  - message: Pre-migrating Pulp 2 PACKAGE_GROUP content (general info)
    code: premigrating.content.general
    state: completed
    total: 98
    done: 98
  - message: Pre-migrating Pulp 2 PACKAGE_GROUP content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 98
    done: 98
  - message: Migrating rpm content to Pulp 3 package_langpacks
    code: migrating.rpm.content
    state: completed
    total: 0
    done: 0
  - message: Migrating rpm content to Pulp 3 package_group
    code: migrating.rpm.content
    state: completed
    total: 98
    done: 98
  - message: Pre-migrating Pulp 2 PACKAGE_CATEGORY content (general info)
    code: premigrating.content.general
    state: completed
    total: 9
    done: 9
  - message: Pre-migrating Pulp 2 PACKAGE_CATEGORY content (detail info)
    code: premigrating.content.detail
    state: completed
    total: 9
    done: 9
  - message: Migrating rpm content to Pulp 3 package_category
    code: migrating.rpm.content
    state: completed
    total: 9
    done: 9
  - message: Migrating rpm content to Pulp 3 package_environment
    code: migrating.rpm.content
    state: completed
    total: 7
    done: 7
  - message: Migrating content to Pulp 3
    code: migrating.content
    state: failed
    total: 0
    done: 0
  - message: Migrating rpm content to Pulp 3 rpm
    code: migrating.rpm.content
    state: completed
    total: 9688
    done: 9688
  - message: Migrating rpm content to Pulp 3 srpm
    code: migrating.rpm.content
    state: completed
    total: 0
    done: 0
  - message: Migrating rpm content to Pulp 3 distribution
    code: migrating.rpm.content
    state: completed
    total: 6
    done: 3
  - message: Migrating rpm content to Pulp 3 erratum
    code: migrating.rpm.content
    state: completed
    total: 0
    done: 0
  - message: Migrating rpm content to Pulp 3 modulemd
    code: migrating.rpm.content
    state: completed
    total: 83
    done: 83
  - message: Migrating rpm content to Pulp 3 modulemd_defaults
    code: migrating.rpm.content
    state: completed
    total: 44
    done: 44
  created_resources:
  - "/pulp/api/v3/task-groups/7bfad8b5-d57f-40df-89f6-32993f7377bf/"
  reserved_resources_record:
  - pulp_2to3_migration
task_groups:
- pulp_href: "/pulp/api/v3/task-groups/7bfad8b5-d57f-40df-89f6-32993f7377bf/"
  description: Migration Sub-tasks
  all_tasks_dispatched: false
  waiting: 0
  skipped: 0
  running: 0
  completed: 0
  canceled: 0
  failed: 1
  group_progress_reports:
  - message: Repo version creation
    code: create.repo_version
    total: 0
    done: 0
  - message: Distribution creation
    code: create.distribution
    total: 0
    done: 0
poll_attempts:
  total: 26
  failed: 1
Actions #17

Updated by iballou almost 3 years ago

I uploaded my tar of /var/lib/mongodb/ to the RH sftp server. It's called iballou_mongodb.tar.gz.

Actions #18

Updated by gvde almost 3 years ago

I have just uploaded my mongodb as file gvde-mongodb.tar.gz. I have the .treeinfo error.

I think all my repositories are set to immediate download. However, IIRC, I have set up the CentOS 8 Stream repos as "on demand" initially, as I don't use them, yet. I think the CentOS 8 repos were always on immediate download.

The initial installation of that server was a year ago on foreman 2.0/katello 3.15, upgrading through all versions to the latest katello 3.18.2...

Actions #19

Updated by caseybea almost 3 years ago

I just uploaded mine as well. Hopefully, with the multiple uploads you now have, you'll be able to find the common denominator!

filename: brodie-mongodb.tar.gz

Actions #20

Updated by ttereshc almost 3 years ago

  • Status changed from NEW to ASSIGNED
  • Assignee set to ttereshc
Actions #21

Updated by ttereshc almost 3 years ago

  • Sprint set to Sprint 97
Actions #22

Updated by gvde almost 3 years ago

Hoping for a workaround (as I am worried that there will be more migration problems after this has been solved), I have removed the complete CentOS 8 Stream product with all its repositories, as I didn't use them yet. I have also removed the kickstart repositories which I had for CentOS 8, leaving only the main 6 CentOS 8 repositories which I need for my CentOS 8 servers.

Yet the migration fails with the same error, showing the UUID for the HighAvailability variant.

Actions #23

Updated by ttereshc almost 3 years ago

Thanks to ipanova for reproducing the issue! The ability to reproduce depends on the order in which distribution trees are saved to the DB during migration, which explains why it was so hard to get to it, even with all the Pulp 2 databases provided.

BaseOS must be migrated last to reproduce the issue.

Current steps to reproduce:

  • sync into pulp2 and then migrate CentOS 8 AppStream or PowerTools or HighAvailability (not Extras and not BaseOS).
  • then sync CentOS 8 BaseOS into pulp2 and migrate; observe the error.

The reason for the issue is that CentOS repos are created in an unusual way.
Usual way: one distribution tree repo with variants and addons.
Unusual way (CentOS 8 repos only so far): N repos with the same name and release, distinguishable only by the build_timestamp.

Please try this patch to see if it solves your problem:

diff --git a/pulp_2to3_migration/app/plugin/rpm/pulp_2to3_models.py b/pulp_2to3_migration/app/plugin/rpm/pulp_2to3_models.py
index 1fc11f3..b99876e 100644
--- a/pulp_2to3_migration/app/plugin/rpm/pulp_2to3_models.py
+++ b/pulp_2to3_migration/app/plugin/rpm/pulp_2to3_models.py
@@ -764,8 +764,14 @@ class Pulp2Distribution(Pulp2to3Content):
                 if variant['repository'] == '.':
                     variants[name] = variant
             treeinfo_serialized['variants'] = variants
-            # Reset build_timestamp so Pulp will fetch all the addons during the next sync
-            treeinfo_serialized['distribution_tree']['build_timestamp'] = 0
+            # Set older timestamp so Pulp will fetch all the addons during the next sync
+            # We can't reset it to 0, some creative repository providers use the same
+            # name/release in many distribution trees and the only way to distinguish them is by
+            # the build timestamp. e.g. CentOS 8 BaseOs, Appstream, PowerTools, HighAvailability.
+            orig_build_timestamp = int(
+                float(treeinfo_serialized['distribution_tree']['build_timestamp'])
+            )
+            treeinfo_serialized['distribution_tree']['build_timestamp'] = orig_build_timestamp - 1
             return treeinfo_serialized
 
         return None
Actions #24

Updated by caseybea almost 3 years ago

And of course I randomly pop my head in here to see if there's an update. And there is!

I patched my file /usr/lib/python3.6/site-packages/pulp_2to3_migration/app/plugin/rpm/pulp_2to3_models.py

Restarted foreman, and am running the content prepare. It is chugging along and has long since passed where it blew up before. I will follow up here after I've tried the entire migration process. But- looking good thus far.

Actions #25

Updated by caseybea almost 3 years ago

Well, 6 hours later the content migration blew up, but because of a totally different error (that I believe has a separate thread; I will be hunting for that momentarily).

As far as I can tell, THIS issue, the pulp2-to-pulp3 problem with CentOS 8-like repos, appears to be OK now with the patch.

Now off to hunt this baby down:

ForemanTasks::TaskError: Task c696d0f4-7e7b-4c17-ad73-2a894204552f: ActiveRecord::StatementInvalid: PG::UndefinedColumn: ERROR: column katello_debs.migrated_pulp3_href does not exist LINE 1: SELECT COUNT(*) FROM "katello_debs" WHERE "katello_debs"."mi... ^ /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.2.1/lib/katello/tasks/pulp3_migration.rake:33:in block (2 levels) in <top (required)>' /opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.0/exe/rake:27:in <top (required)>' Tasks: TOP => katello:pulp3_migration
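
(A quick sanity check for that separate error would be to confirm against the foreman/katello database, not pulpcore, whether the column really is missing; this is just an illustrative query, not a fix:)

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'katello_debs'
  AND column_name = 'migrated_pulp3_href';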

Actions #26

Updated by gvde almost 3 years ago

That patch worked for me.

I have run the following:

# foreman-rake katello:pulp3_migration_reset
# foreman-rake katello:delete_orphaned_content RAILS_ENV=production
# foreman-maintain service stop
# foreman-maintain service start
# foreman-maintain content prepare

and after 3 hrs it finished successfully. I'll have to test the switchover next...

Actions #27

Updated by ttereshc almost 3 years ago

  • Status changed from ASSIGNED to POST
Actions #28

Updated by ttereshc almost 3 years ago

Thanks everyone for your patience and all the provided info!

Added by ttereshc almost 3 years ago

Revision c1c0e026 | View on GitHub

Set build_timestamp for distribution trees

For some distribution trees, their build timestamp is the only way to identify them uniquely. Repositories like CentOS 8 BaseOS, Appstream, PowerTools, HighAvailability, all use the same distribution tree name and release.

For migration/stages pipeline to work correctly, each Artifact of multi-artifact content needs its ContentArtifact which is created at ContentSaver stage only for non-saved content. If the build timestamp is not set and distribution trees are not uniquely identified, depending on the order of migration, some images might be without a ContentArtifact which will lead to the migration error: ValueError: No declared artifact with relative path ".treeinfo" for content "<DistributionTree: pk=...>".

closes #8566 https://pulp.plan.io/issues/8566

Required PR: https://github.com/pulp/pulp_rpm/pull/2002 (Alternatively, just downgrade your productmd to 1.32.)

Actions #30

Updated by ttereshc almost 3 years ago

  • Status changed from POST to MODIFIED
Actions #31

Updated by ttereshc almost 3 years ago

  • Sprint/Milestone set to 0.11.2

Added by ttereshc almost 3 years ago

Revision ef60495d | View on GitHub

Set build_timestamp for distribution trees

For some distribution trees, their build timestamp is the only way to identify them uniquely. Repositories like CentOS 8 BaseOS, Appstream, PowerTools, HighAvailability, all use the same distribution tree name and release.

For migration/stages pipeline to work correctly, each Artifact of multi-artifact content needs its ContentArtifact which is created at ContentSaver stage only for non-saved content. If the build timestamp is not set and distribution trees are not uniquely identified, depending on the order of migration, some images might be without a ContentArtifact which will lead to the migration error: ValueError: No declared artifact with relative path ".treeinfo" for content "<DistributionTree: pk=...>".

closes #8566 https://pulp.plan.io/issues/8566

Required PR: https://github.com/pulp/pulp_rpm/pull/2002 (Alternatively, just downgrade your productmd to 1.32.)

Actions #32

Updated by pulpbot almost 3 years ago

  • Status changed from MODIFIED to CLOSED - CURRENTRELEASE
Actions #33

Updated by gvde over 2 years ago

With AlmaLinux the patch doesn't work, because the build_timestamp isn't unique across all variants:

AlmaLinux/8.4/AppStream/x86_64/kickstart/.treeinfo:build_timestamp = 1622014553
AlmaLinux/8.4/AppStream/x86_64/os/.treeinfo:build_timestamp = 1622014553
AlmaLinux/8.4/BaseOS/x86_64/kickstart/.treeinfo:build_timestamp = 1622014553
AlmaLinux/8.4/BaseOS/x86_64/os/.treeinfo:build_timestamp = 1622014553
AlmaLinux/8.4/HighAvailability/x86_64/kickstart/.treeinfo:build_timestamp = 1622014558
AlmaLinux/8.4/HighAvailability/x86_64/os/.treeinfo:build_timestamp = 1622014558
AlmaLinux/8.4/PowerTools/x86_64/kickstart/.treeinfo:build_timestamp = 1622014558
AlmaLinux/8.4/PowerTools/x86_64/os/.treeinfo:build_timestamp = 1622014558
AlmaLinux/8.4/extras/x86_64/kickstart/.treeinfo:build_timestamp = 1622014558
AlmaLinux/8.4/extras/x86_64/os/.treeinfo:build_timestamp = 1622014558
Actions #34

Updated by gvde over 2 years ago

CentOS 8 Stream has the same issue:

CentOS/8-stream/AppStream/x86_64/os/.treeinfo:build_timestamp = 1625615144
CentOS/8-stream/BaseOS/x86_64/os/.treeinfo:build_timestamp = 1625615155
CentOS/8-stream/HighAvailability/x86_64/os/.treeinfo:build_timestamp = 1625026406
CentOS/8-stream/PowerTools/x86_64/os/.treeinfo:build_timestamp = 1625026406
CentOS/8-stream/RT/x86_64/os/.treeinfo:build_timestamp = 1625026406

Looking [here](https://github.com/pulp/pulp_rpm/blob/b73042c6daff3d306b193bcaebdf23fc7c5c0e6f/pulp_rpm/app/migrations/0001_initial.py#L39) and in the postgresql database it seems it requires build_timestamp to be unique. I guess that assumption is wrong...
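
The constraint itself can be inspected directly in the pulpcore database. Assuming the distribution tree table is named rpm_distributiontree (an assumption, following the rpm_image naming above), something like this lists its unique constraints:

SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'rpm_distributiontree'::regclass
  AND contype = 'u';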

Actions #35

Updated by ttereshc over 2 years ago

  • Related to Issue #9583: Distribution tree uniqueness constraint is not enough for a suboptimal .treeinfo added
Actions #36

Updated by ttereshc over 2 years ago

@gvde, thanks for pointing out the examples and the root cause.
Apologies for the late response; it's hard to track issues after they are closed. To attract our attention, please file a new issue or ping us on matrix.org in the pulp channel. Thanks!
I filed an issue here https://pulp.plan.io/issues/9583

...in the postgresql database it seems it requires build_timestamp to be unique. I guess that assumption is wrong...

To be precise, the build timestamp is not required to be unique, but the combination of fields which includes build_timestamp must be unique.
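
As a rough illustration (again assuming the table is named rpm_distributiontree), candidate collisions can be spotted by grouping on build_timestamp; any timestamp shared by several trees, as in the AlmaLinux and Stream listings above, is a potential conflict on the combined key:

SELECT build_timestamp, count(*)
FROM rpm_distributiontree
GROUP BY build_timestamp
HAVING count(*) > 1;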

We'll see what we can do. I hope to figure out why CentOS does it at all; the distribution trees in repositories which are supposed to be add-ons look useless to me. Maybe they merge them all later into one :/, not sure. But if you look into any .treeinfo, all of them apart from BaseOS have no images, and they can only work in combination with the BaseOS one, never on their own.
