Reduce prune memory usage #4354
Conversation
Compared to restic 0.16.4, this PR + #4812 drastically reduce prune's memory usage (using numbers from gcvis, which are slightly lower than those shown by htop, but still appear to be reasonable). After (this PR + #4812): (The repository has 400 GB with 8 million blobs. The repack phase differs as the previous prune run had already removed a bit of data.) This PR optimizes the "search used blobs" phase, whereas #4812 optimizes the "rebuild index" phase.
Use the same index size for compressed and uncompressed indexes. Otherwise, decoding the index of a compressed repository requires significantly more memory.
ae5e739
to
436afbf
Compare
LGTM.
What does this PR change? What problem does it solve?
The main memory usage of prune (besides the repository index) is a huge CountedBlobSet. The blob ids stored in this set are already stored in the repository index. This PR contains a proof of concept for a data structure that no longer duplicates these ids, but rather reuses them.
This PR fundamentally depends on #4352, which stores all indexEntries in a large array. Each indexEntry, and the blob ID it contains, thus becomes identifiable by its array index. As an indexMap can only add but never remove entries, this array index is guaranteed to be stable. The AssociatedData structure in this PR can therefore allocate an array holding only the values that were previously stored in a CountedBlobSet, and use the array index to look up the matching blob ID in the indexMap.
Using this PR, prune requires up to 50% less memory while determining which blobs to keep.
Was the change previously discussed in an issue or on the forum?
Alternative to the minimal perfect hashes discussed in #3328.
Checklist
[ ] I have added documentation for relevant changes (in the manual).
[ ] There's a new file in changelog/unreleased/ that describes the changes for our users (see template).
[ ] I have run gofmt on the code in all commits.