
Core 3.1+ Ideas (post MVP)

Authentication

  • External
  • Expiring passwords
  • REMOTE_USER support?

Authorization

<put ideas here>

Versioned Repository

As an authenticated user, I can get a list of a repo's versions.

As an authenticated user, I can specify how many versions to keep at a time.

As an authenticated user, I can get a reference to a new repo version from any task that adds or removes content.

As an authenticated user, I can publish a repo and have it default to the latest version.

As an authenticated user, I can run a publisher with a repository version.
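
A rough sketch of how these stories might translate into API calls; the endpoint layout, the retain_repo_versions setting, and the _href field name are illustrative assumptions, not a settled design.

    import requests

    BASE = "http://localhost:24817/pulp/api/v3"   # assumed API root
    AUTH = ("admin", "password")

    # List a repository's versions (hypothetical endpoint layout).
    versions = requests.get(f"{BASE}/repositories/1/versions/", auth=AUTH).json()

    # Specify how many versions to keep at a time (hypothetical field name).
    requests.patch(f"{BASE}/repositories/1/", auth=AUTH,
                   json={"retain_repo_versions": 5})

    # Running a publisher with an explicit repository version; omitting the
    # version would default the publish to the latest one.
    requests.post(f"{BASE}/publications/", auth=AUTH,
                  json={"repository_version": versions["results"][0]["_href"]})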

Content Manipulation

  • Sync can have "sync" options.
  • Sync options are different from "importer" attributes.

As an authenticated user, I can remove one or more units from one or more repositories

  • filtering support on the unit(s)
  • filtering support on the repositories

As an authenticated user, I can specify a filter to identify content to be added to a repo.
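
A sketch of what the remove-with-filters and add-with-filters stories could look like as API calls; the /content/remove/ and /repositories/1/add/ endpoints and all filter names are made up for illustration.

    import requests

    BASE = "http://localhost:24817/pulp/api/v3"   # assumed API root
    AUTH = ("admin", "password")

    # Hypothetical call shape: remove units matched by a content filter
    # from every repository matched by a repository filter.
    requests.post(f"{BASE}/content/remove/", auth=AUTH, json={
        "content_filter": {"type": "rpm", "name": "bash"},
        "repository_filter": {"name__contains": "rhel7"},
    })

    # Hypothetical call shape: add content selected by a filter to a repo.
    requests.post(f"{BASE}/repositories/1/add/", auth=AUTH, json={
        "content_filter": {"type": "rpm", "version__gte": "4.4"},
    })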

Content Deletion

As an authenticated user, when I delete a content unit, its artifacts are also deleted if they were used exclusively by that content unit.

As an authenticated user, I can delete multiple content units with filtering

  • If a content unit is still in at least one repository, the delete fails with a listing of all repositories the unit is part of.
  • Artifacts and associated files from deleted units are cleaned up

Content Filtering

As a user, I can search all content for a specific content unit regardless of type

As a user, I can find out all the repos in which a piece of content appears

  • example: bad content the user wants to remove from all repos in Pulp

Publications

As an authenticated user, I have filters on the Publication list:

  • id: (id_in_list) # id equality is not necessary, objects are referenced by id
  • filter by created range
  • filter by not being associated with a distribution.
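
A sketch of how those filters might appear as query parameters; the parameter names (id__in, created__gte/lte, distribution__isnull) are assumptions for illustration only.

    import requests

    BASE = "http://localhost:24817/pulp/api/v3"   # assumed API root
    AUTH = ("admin", "password")

    # Hypothetical filter names for the Publication list.
    requests.get(f"{BASE}/publications/", auth=AUTH, params={
        "id__in": "uuid-1,uuid-2",        # id in a list
        "created__gte": "2017-01-01",     # created range, lower bound
        "created__lte": "2017-12-31",     # created range, upper bound
        "distribution__isnull": "true",   # not associated with a distribution
    })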

Upload

  • Allow for a large single file to have its chunks uploaded in parallel.
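
A minimal client-side sketch of parallel chunk upload, assuming a hypothetical /uploads/ endpoint that accepts Content-Range'd PUTs; the chunk size, field names, and the returned "url" key are illustrative.

    import os
    import requests
    from concurrent.futures import ThreadPoolExecutor

    BASE = "http://localhost:24817/pulp/api/v3"   # assumed API root
    AUTH = ("admin", "password")
    CHUNK = 10 * 1024 * 1024                      # 10 MiB per chunk (arbitrary)

    def upload_chunk(upload_url, path, offset, size):
        # Each chunk is an independent PUT, so chunks can be sent in parallel.
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(CHUNK)
        end = offset + len(data) - 1
        requests.put(upload_url, auth=AUTH,
                     headers={"Content-Range": f"bytes {offset}-{end}/{size}"},
                     files={"file": data})

    def parallel_upload(path, workers=4):
        size = os.path.getsize(path)
        # Hypothetical endpoint that creates an upload and returns its URL.
        upload_url = requests.post(f"{BASE}/uploads/", auth=AUTH,
                                   json={"size": size}).json()["url"]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for offset in range(0, size, CHUNK):
                pool.submit(upload_chunk, upload_url, path, offset, size)
        return upload_url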

Repositories 3.1+

  • filter by content type (e.g. repository_contains_type: rpm)
  • last_content_added (content_added_since)
  • last_content_removed (content_removed_since)
  • "partial" repo name search (name: substring)
  • "tagged" repo names (name: substring)

Importer 3.1+

Importers

  • Sync from multiple importers
  • We need to support multiple importers to properly support distributions and ostree (with distributions).
  • add auto-publish feature
  • As an authenticated user I have a notes field I can use to store arbitrary <key, value> tuples with both key and value being strings.
  • Add the force-full option.
  • add filter for last_synced, either last_synced_lt or last_synced_in_range
  • add filter by repository if we no longer nest
  • add filter for feed_url: (equality, substring)

Publishers

  • Add an auto_publish feature. Possibly a field called auto_publish (defaulting to true) that indicates the publisher will publish automatically when the associated repository's content has changed. See the sketch after this list.
  • Add the force-full option.
  • Add a no-op publishing feature
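
A minimal sketch of how the auto_publish idea could be modeled, assuming Django-style models; the model layout, field name, and the enqueue_publish_task helper are hypothetical.

    from django.db import models

    class Repository(models.Model):
        name = models.TextField()

    class Publisher(models.Model):
        repository = models.ForeignKey(Repository, on_delete=models.CASCADE)
        name = models.TextField()
        # Assumed field from the idea above: publish automatically on change.
        auto_publish = models.BooleanField(default=True)

    def on_repository_content_changed(repository):
        # Hypothetical hook, called whenever a new repository version is created
        # (e.g. after a sync); dispatches a publish for each opted-in publisher.
        for publisher in repository.publisher_set.filter(auto_publish=True):
            enqueue_publish_task(publisher)   # assumed task-dispatch helper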

Task Management

Allow filtering of tasks on 'completed' or 'started'. These 'meta' states are not states directly, but they represent a group of states. For instance, 'completed' would represent 'skipped', 'completed', 'failed', and 'canceled'.
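
A small sketch of how a meta state could be expanded into the concrete states it stands for when building the filter; the 'completed' group comes from the sentence above, while the 'started' group is an assumption.

    # Hypothetical expansion of 'meta' filter values into concrete task states.
    META_STATES = {
        "completed": {"skipped", "completed", "failed", "canceled"},
        "started": {"running"},   # assumed membership; not spelled out above
    }

    def expand_state_filter(value):
        """Return the set of concrete states a filter value should match."""
        return META_STATES.get(value, {value})

    # e.g. ?state=completed behaves like state__in=skipped,completed,failed,canceled
    assert expand_state_filter("completed") == {"skipped", "completed", "failed", "canceled"}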

Additional filtering support:

  • worker
  • started_at, filtered by less than, greater than
  • finished_at, filtered by less than, greater than
  • resource field on an associated ReservedResource

As an authenticated user, I can DELETE a task.

Task Scheduling

  • As an authenticated user, I can schedule a task.

Data Exports

As an authenticated user, I can export a group of published repositories to a single medium.

As an authenticated user, I can export an incremental publish of a group of repositories to a single medium.

For both of the above use cases, the layout needs some more discussion.

  • maybe it is specified by the user?
  • maybe it maintains the published filesystem tree structure?

There are also two main options in this area.

1. One option is a publish bundler that bundles up all the units published to disk. The medium (e.g. an ISO) is then mounted and brought into another Pulp installation using a sync. This will only work for content types that don't require 'live APIs'.

2. Another option is to export database model data and on-disk content units from one Pulp to a medium, then import them by directly adding those units to another Pulp. This could possibly be done through the API. This would allow things like Docker content to be exported and imported, but it may not work for OSTree.

There was also discussion about OSTree possibly never supporting an incremental export/import due to how OSTree stores content.

Server Plugins (which content types, importers, and publishers are available)

Orphans

As an authenticated user, I can force delete content units even when they are associated with repositories.
As an authenticated user, I can clean up orphaned content units for a specific "type" without specifying the units themselves.
As an authenticated user, I can filter orphan cleanup to only remove orphaned content units and artifacts created before a specified datetime.
As an authenticated user, I can list all orphaned content units and orphaned artifacts that are not in any repositories.

Plugin API

Incremental publishing support

Status API

The Status API returns the status of Squid (aka the proxy), the web server, and the streamer.

An API to view an overall health attribute that returns True, or a message when something is not operating properly.

I can view information about unapplied migrations

I can view a verbose Status API which returns a Pulp version for each component along with a list of all plugins and their versions.
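
A hypothetical shape for the verbose status response described above; every key and value here is illustrative, not a confirmed schema.

    # Illustrative payload only; field names and values are assumptions.
    verbose_status = {
        "health": True,                    # or a message describing the problem
        "unapplied_migrations": [],        # pending migrations, if any
        "components": {
            "webserver": {"online": True, "version": "3.1.0"},
            "squid_proxy": {"online": True},
            "streamer": {"online": True, "version": "3.1.0"},
        },
        "plugins": {"pulp_rpm": "3.1.0", "pulp_python": "3.1.0"},
    }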

Alternate content source support

Deployment

As a user, I can deploy the Pulp content serving view without all of Pulp's requirements.

Plugin Feature Set 3.1+ Ideas (post MVP)

Python

  • Add a mirror sync policy

Event Listener Notifier

I can receive serialized task info via AMQP on each task save

Can this be restated in more pedantic terms? Does this mean that an arbitrary host can attach itself to Pulp's AMQP message bus and get updates on the progress of tasks?
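
A rough sketch of what such an external consumer could look like, assuming a plain AMQP 0-9-1 broker on localhost and a hypothetical fanout exchange named "pulp.tasks"; none of this is a committed interface.

    import json
    import pika   # plain AMQP 0-9-1 client

    def on_task_update(channel, method, properties, body):
        # Each message is assumed to carry the serialized task on every save.
        task = json.loads(body)
        print(task.get("state"), task.get("_href"))

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # Hypothetical exchange name; bind a private queue to receive task updates.
    channel.exchange_declare(exchange="pulp.tasks", exchange_type="fanout")
    queue = channel.queue_declare(queue="", exclusive=True).method.queue
    channel.queue_bind(exchange="pulp.tasks", queue=queue)
    channel.basic_consume(queue=queue, on_message_callback=on_task_update, auto_ack=True)
    channel.start_consuming()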
