borg prune usage #8584
Comments
So, do you only have that one archive?
hi Thomas, There are many archive files in there. Here is the full directory contents of the backups for this linode VPS (I do think it is probably user error, as I don't fully understand how the borg prune command works): [root@vps borg]# ls -la /mnt/storage/backups/borg /mnt/storage/backups/borg/vps.linode.zzzzzzzzzzz.com/data/0 Here is my configuration as well (I grep'ed out all the stuff not configured): $ grep -v "^#|^\s*#|^$" config.yaml warm regards
You don't need to list … Please post the complete output of …
hi Thomas, Thanks for the help. Please see below. (Note that 12-13 is missing because /mnt/storage was at 99%. I freed up some space with a different backup, and as you can see there is a 12-14 backup that was successful.)
I now see what you mean: archives vs. segment files. I ran the same command with 1d and, as expected, it will prune the 12-12 archive backup.
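To make the behavior above concrete: --keep-within keeps every archive whose timestamp falls inside a window ending at "now", so a 10d window keeps the 12-12 archive while a 1d window (run late on Dec 13) prunes it. A minimal shell sketch of that comparison, not borg code; "now" is pinned to 2024-12-13 21:00 UTC so the example is reproducible (requires GNU date):

```shell
# Sketch of the --keep-within cutoff logic (illustration, not borg itself).
# "now" is pinned so the output is deterministic; archive time is the one
# from the prune --list output in this thread.
now=$(date -u -d "2024-12-13 21:00:00" +%s)
archive_ts=$(date -u -d "2024-12-12 03:16:01" +%s)
for window_days in 10 1; do
  window=$((window_days * 24 * 3600))
  if [ "$archive_ts" -ge $((now - window)) ]; then
    echo "keep-within=${window_days}d: keep"
  else
    echo "keep-within=${window_days}d: prune"
  fi
done
# → keep-within=10d: keep
# → keep-within=1d: prune
```

This matches what the user observed: the lone 12-12 archive sits inside a 10-day window but outside a 1-day one.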
I see this quote from you in another thread: "Please be aware that (for borg 1.2) borg compact (without --cleanup-commits) is part of regular repo maintenance (not just a one time upgrade thing). One does not necessarily need to run it after each delete/prune command, but it needs to be run on a regular basis to free up disk space." So if I do the borg compact, it should remove a lot of the segments, correct? I am running borg 1.4. warm regards
Hmm, I thought you had run … Yes, to free space, you need to do that. Everything that is left after that is used by archive(s).
hi Thomas, Yes, I had run it prior, but I think it was not pruning anything because the keep-within was 10d and the single archive (at that point) was at 12-12, so nothing was labeled as pruned. I just ran borg compact now after pruning the 12-12 archive; as expected, it got rid of the 12-12 archive: [root@vps borg]# borg compact vps.linode.zzzzzzzz.com/ But there was very little reduction in current /mnt/storage usage (it dropped from 72% to 71%). Does this mean that all of those data segments in my earlier post are required to reconstruct the full backup, and that there is no way to reduce the total storage footprint of the backup any further? (see below at 20G): [root@vps borg]# du -sch warm regards
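For scale, the 1-percentage-point drop reported here is consistent with only a small amount actually being freed: the system description mentions a 60 GB storage volume, and 1% of that is roughly 0.6 GB. A quick arithmetic check:

```shell
# One percentage point of the 60 GB /mnt/storage volume, in MiB.
# Volume size is taken from the system description in this issue.
vol_gb=60
before_pct=72
after_pct=71
freed_mib=$(( (before_pct - after_pct) * vol_gb * 1024 / 100 ))
echo "${freed_mib} MiB freed"   # → 614 MiB freed
```

So compact did reclaim something, just not much, because (as the next reply explains) nearly all segments were still referenced by the remaining archive.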
@dmastrop If there is only 1 archive left in there and you ran compact, all the data is referenced by that archive. The only other possibility is that the repo somehow has a lot of orphan chunks; you can use … to check for that.
hi Thomas @ThomasWaldmann Thanks for clearing up my confusion on the matter. I ran the borg check and it looks like the backup is OK:

[root@vps borg]# borg check -v vps.linode.zzzzzzzzz.com/

I manually ran borgmatic just now, just to see the effect of adding another archive on the disk space, and as expected the 1% is added back (71% to 72%), so the delta is not that large. This is with 2 archives. I will have to live with the storage riding at around this level for now and edit the backups for other stuff. My retention levels are below; they are set to 1 for day, week and month.

To enforce the retention policy, does borg prune have to be run manually? Or is borg prune implicitly run periodically in accordance with my retention policy above (in the /etc/borgmatic folder)?

Thanks for helping me to understand this. warm regards

=====reference
A "Borg retention policy" refers to the set of rules within the Borg backup software that determines how long backups are kept before being automatically deleted. It lets you specify how many daily, weekly, monthly, or yearly backups to retain, preventing your backup storage from filling up indefinitely; essentially, it's a way to manage the lifespan of your backups based on a defined time frame.
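On the retention question: borg itself never prunes on its own, but borgmatic (which the user is running) executes prune, and on borg 1.2+ also compact, as part of a plain `borgmatic` run when retention options are configured. A sketch of the relevant fragment of a borgmatic config, using borgmatic's documented `keep_*` option names with values matching the policy described above (the exact file layout depends on the borgmatic version; this assumes the flat 1.8+ style):

```yaml
# /etc/borgmatic/config.yaml (fragment, flat borgmatic 1.8+ layout).
# A plain `borgmatic` run applies these via `borg prune` automatically;
# compact also runs by default on borg 1.2+, which is what frees space.
keep_daily: 1
keep_weekly: 1
keep_monthly: 1
```

So as long as borgmatic is triggered on a schedule (cron/systemd timer), no manual `borg prune` is needed.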
Time-based options: …
Maybe some borgmatic user can answer that. ^
Have you checked borgbackup docs, FAQ, and open GitHub issues?
Yes
Is this a BUG / ISSUE report or a QUESTION?
I don't know. Perhaps I do not understand how to use the command. I don't think it is a bug, but I am probably not using the command correctly.
System information. For client/server mode post info for both machines.
This is not using client/server mode. Borg is installed on same VPS as it is backing up.
Your borg version (borg -V).
[root@vps borg]# borg --version
borg 1.4.0
Operating system (distribution) and version.
It is Arch Linux, so there is no version number; it is a rolling release and has been updated fairly recently with pacman.
Hardware / network configuration, and filesystems used.
It is a linode VPS 4 vCPU, 16GB RAM 320 GB disk and 60 GB storage volume for borg and other stuff.
How much data is handled by borg?
It has filled up 30 GB of my /mnt/storage even with retention policies that are fairly restrictive. I want to prune everything down to the last 7-10 days of backups. I am probably using the command incorrectly, but I have read a lot of the docs and don't know what I am doing wrong.
Full borg commandline that lead to the problem (leave away excludes and passwords)
I run the following command, but nothing gets freed up. Perhaps I am not using the command correctly, or am misunderstanding what it does.
[root@vps borg]# borg prune -v --list --keep-within=10d vps.linode.**********.com/
Keeping archive (rule: within #1): vps-2024-12-12T03:16:01.290874 Thu, 2024-12-12 03:16:01 [a12b5cbbf0284e267e3bab731da548dd855b6aa9c606b6c5c91e4defcc2d6bdf]
[root@vps borg]# du -sch
20G .
20G total
[root@vps borg]# borg compact vps.linode.**************.com/
[root@vps borg]# du -sch
20G .
20G total
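The `--list` output above tags each kept archive with the retention rule that matched it ("rule: within #1"), which is useful when debugging why prune kept or discarded something. That field can be pulled out of the log line with sed, for example:

```shell
# Extract the matching retention rule from a `borg prune --list` line.
# Sample line copied from the output in this issue.
line='Keeping archive (rule: within #1): vps-2024-12-12T03:16:01.290874 Thu, 2024-12-12 03:16:01'
rule=$(printf '%s\n' "$line" | sed -n 's/.*(rule: \([^)]*\)).*/\1/p')
echo "rule: $rule"   # → rule: within #1
```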
Describe the problem you're observing.
I am expecting only the last 10 days of archives to be kept, but I don't see any change, and my backups go back to mid-November. It is Dec 13 at the time of this writing. I can append the ls -la of the actual archive if required.
I assume it is the one in this directory below:
[root@vps borg]# pwd
/mnt/storage/backups/borg
[root@vps borg]# ls -la
total 12
drwxr-xr-x 3 root root 4096 Dec 13 20:58 .
drwxr-xr-x 9 root root 4096 Oct 8 22:01 ..
drwx------ 3 root root 4096 Dec 13 21:41 vps.linode..com
[root@vps borg]# cd vps.linode..com/
[root@vps vps.linode..com]# ls
config data hints.481 index.481 integrity.481 README
[root@vps vps.linode..com]# cd data/
[root@vps data]# ls -la
total 12
drwx------ 3 root root 4096 Dec 13 21:10 .
drwx------ 3 root root 4096 Dec 13 21:41 ..
drwx------ 2 root root 4096 Dec 13 21:41 0
[root@vps data]# cd 0
[root@vps 0]# pwd
/mnt/storage/backups/borg/vps.linode.***********.com/data/0
Please note that it is not in append_only mode. See below:
[root@vps vps.linode.**************com]# cat config
[repository]
version = 1
segments_per_dir = 1000
max_segment_size = 524288000
append_only = 0 <<<<<<<<<<<<<<<<<<<<<<<<<<
storage_quota = 0
additional_free_space = 0
id = ******************
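Checking `append_only` is a sensible diagnostic here, since an append-only repo records deletions in the transaction log without freeing segments. The borg 1.x repo config is plain INI, so the flag can be read with a one-liner; the sketch below parses a sample config written to a temp file rather than a real repo path:

```shell
# Read the append_only flag from a borg 1.x repository config (INI format).
# A sample config is used here so the example is self-contained; in practice
# point sed at <repo>/config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[repository]
version = 1
append_only = 0
EOF
append_only=$(sed -n 's/^append_only *= *//p' "$cfg")
echo "append_only=${append_only}"   # → append_only=0
rm -f "$cfg"
```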
Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
Include any warning/errors/backtraces from the system logs