Refactor #4132
Metadata is not downloaded in parallel
Status: Closed
Description
Metadata for the python plugin is all retrieved in the PythonFirstStage[0]. Project-level metadata is downloaded in a for loop[3]. Since we are using asyncio, the `await` does cede control while a metadata file is downloading, so the later stages from pulpcore can keep running concurrently; however, after the first metadata file is downloaded and processed, the PythonFirstStage still downloads only 1 metadata file at a time.
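For illustration, a minimal sketch of that serial pattern (not the plugin's literal code; `fetch` and `process` are hypothetical stand-ins for the downloader call and the metadata parsing):

```python
async def first_stage(projects, fetch, process):
    # Each `await` cedes control, so other stages can run while a file
    # downloads, but the next download does not start until the current
    # one finishes: metadata is still fetched one file at a time.
    for project in projects:
        metadata = await fetch(project)
        process(metadata)
```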
My assumption is that downloading the project-level metadata in parallel would be a relatively small performance improvement when each project (corresponding to 1 metadata file) has many Python distributions associated with it, since the distribution downloads dominate the sync time in that case. However, for lazy sync, or for cases where each project has a small number of Python distributions, downloading the metadata in parallel could be a very large performance increase.
Background
Unless a Stage implements parallel calls with asyncio, each Stage only operates on 1 item at a time (or 1 batch at a time). The only pulpcore Stage that runs multiple calls in parallel is the ArtifactDownloader[1], which uses the ArtifactDownloaderRunner to `ensure_future` its downloads[2]; this is what enables many calls to happen simultaneously.
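The underlying asyncio pattern looks roughly like this generic sketch (a simplification, not pulpcore's actual code; `fetch` is a hypothetical download coroutine):

```python
import asyncio

async def download_all(urls, fetch):
    # Scheduling every coroutine up front starts them all running
    # concurrently on the event loop; gather then waits for every result.
    futures = [asyncio.ensure_future(fetch(url)) for url in urls]
    return await asyncio.gather(*futures)
```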
Design Options
We can download the metadata in parallel in at least 2 ways.
- Split up the PythonFirstStage. Even though the Python plugin doesn't have a "Project" Content unit, we can still create DeclarativeContent objects that can flow to later stages (the two new stages are sketched after this list).
  - ProjectListStage - Rather than downloading metadata in the first stage[3], we could create a DeclarativeContent object for each project, and out_q.put(dc).
  - ArtifactDownloader (from core, unchanged) - This would download (in parallel) each of the project metadata files.
  - ProcessProjectMetadata - This Stage would open and read the project dc.d_artifact file, and create dcs for PythonPackageContent, and out_q.put(python_package_content_dc). It would not continue to pass project dcs down the pipeline. This would roughly correspond to part of the first stage[4], but would also need to include package filtering[5].
  - Suggested Pipeline: ProjectListStage -> ArtifactDownloader -> ProcessProjectMetadata -> ArtifactDownloader -> save artifacts, save content, etc.
- It's probably also possible to implement parallel downloads directly in a monolithic first stage by using asyncio.ensure_future, but IMO this would not be an idiomatic use of the Stages API. It would be complex and would require knowledge of asyncio that is not really expected of plugin writers. Besides, the tricky part of this is already implemented by the ArtifactDownloader.
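To make the first option concrete, here is a minimal sketch of the two proposed stages, assuming the (in_q, out_q) coroutine-style Stage signature that `out_q.put(dc)` above implies; `make_project_dc`, `parse_packages`, and `make_package_dc` are invented stand-ins, not existing pulp_python or pulpcore code:

```python
async def project_list_stage(project_names, make_project_dc, out_q):
    # Emit one DeclarativeContent per project without downloading anything;
    # the core ArtifactDownloader stage that follows fetches the metadata
    # files for all of these dcs in parallel.
    for name in project_names:
        await out_q.put(make_project_dc(name))
    await out_q.put(None)  # assumed end-of-stream sentinel

async def process_project_metadata(in_q, out_q, parse_packages, make_package_dc):
    # Read each downloaded project metadata file and emit PythonPackageContent
    # dcs; the project dcs themselves are not passed further down the pipeline.
    while True:
        project_dc = await in_q.get()
        if project_dc is None:
            break
        for entry in parse_packages(project_dc):  # package filtering would go here
            await out_q.put(make_package_dc(entry))
    await out_q.put(None)
```

With these in place, the suggested pipeline ordering above holds: project dcs flow into the first ArtifactDownloader, and the package dcs emitted by ProcessProjectMetadata flow into the second.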
[0]: https://github.com/pulp/pulp_python/blob/master/pulp_python/app/tasks/sync.py#L57
[1]: https://github.com/pulp/pulp/blob/master/plugin/pulpcore/plugin/stages/artifact_stages.py#L215
[2]: https://github.com/pulp/pulp/blob/master/plugin/pulpcore/plugin/stages/artifact_stages.py#L121-L132
[3]: https://github.com/pulp/pulp_python/blob/master/pulp_python/app/tasks/sync.py#L96
[4]: https://github.com/pulp/pulp_python/blob/master/pulp_python/app/tasks/sync.py#L121-L129
[5]: https://github.com/pulp/pulp_python/blob/master/pulp_python/app/tasks/sync.py#L151