Round 1:
    100k: 15 minutes 20 seconds
    1k: 0 minutes 7 seconds
    %timeit {c['_type']: c['count'] for c in repo_version.content.values('_type').annotate(count=Count('_type'))}
        >20 minutes (cut off)
    %timeit list(Content.objects.filter(pk__in=repo_version.content))
        4.52 s ± 156 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Round 2 (101,000 pre-existing units):
    100k: 10 minutes 8 seconds
    1k: 0 minutes 17 seconds
    %timeit {c['_type']: c['count'] for c in repo_version.content.values('_type').annotate(count=Count('_type'))}
        >20 minutes (cut off)
    %timeit list(Content.objects.filter(pk__in=repo_version.content))
        6.18 s ± 30.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Round 3 (202,000 pre-existing units):
    100k: 11 minutes 27 seconds
    1k: 0 minutes 22 seconds
    %timeit {c['_type']: c['count'] for c in repo_version.content.values('_type').annotate(count=Count('_type'))}
        >20 minutes (cut off)
    %timeit list(Content.objects.filter(pk__in=repo_version.content))
        10.4 s ± 66.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Round 4 (303,000 pre-existing units):
    100k: 22 minutes 5 seconds
    1k: 0 minutes 26 seconds
    %timeit {c['_type']: c['count'] for c in repo_version.content.values('_type').annotate(count=Count('_type'))}
        >20 minutes (cut off)
    %timeit list(Content.objects.filter(pk__in=repo_version.content))
        16.4 s ± 360 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Round 5 (fresh repo, 404,000 pre-existing units in the background):
    100k:
    1k:
    %timeit {c['_type']: c['count'] for c in repo_version.content.values('_type').annotate(count=Count('_type'))}
    %timeit list(Content.objects.filter(pk__in=repo_version.content))

Final database size:
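
For context, here is a minimal sketch of the two ORM queries timed in each round, runnable from a Django/Pulp shell. The import path and the use of RepositoryVersion.objects.first() are assumptions added for illustration; the queries themselves are copied from the %timeit snippets above.

    # Sketch of the two queries benchmarked above (timings were gathered with
    # IPython's %timeit). Import paths are assumptions.
    from django.db.models import Count
    from pulpcore.app.models import Content, RepositoryVersion  # assumed import path

    # Any existing repository version, as a stand-in for the `repo_version`
    # instance used in the timings above.
    repo_version = RepositoryVersion.objects.first()

    # Query 1: per-type unit counts -- group the version's content by `_type`
    # and count each group (the query that exceeded 20 minutes in every round).
    type_counts = {
        c['_type']: c['count']
        for c in repo_version.content.values('_type').annotate(count=Count('_type'))
    }

    # Query 2: fetch every Content row in the version and materialize it as a
    # Python list (the query in the 4-16 second range above).
    all_content = list(Content.objects.filter(pk__in=repo_version.content))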