borg prune usage #8584

Open
dmastrop opened this issue Dec 13, 2024 · 9 comments

@dmastrop

Have you checked borgbackup docs, FAQ, and open GitHub issues?

Yes

Is this a BUG / ISSUE report or a QUESTION?

I don't know. Perhaps I do not understand how to use the command. I don't think it is a bug, but I am probably not using the command correctly.

System information. For client/server mode post info for both machines.

This is not using client/server mode; borg is installed on the same VPS that it backs up.

Your borg version (borg -V).

[root@vps borg]# borg --version
borg 1.4.0

Operating system (distribution) and version.

It is Arch Linux, which is a rolling release and therefore has no version number; it has been updated fairly recently with pacman.

Hardware / network configuration, and filesystems used.

It is a Linode VPS: 4 vCPUs, 16 GB RAM, a 320 GB disk, and a 60 GB storage volume for borg and other data.

How much data is handled by borg?

It has filled up 30 GB of my /mnt/storage even with fairly restrictive retention policies. I want to prune everything down to the last 7-10 days of backups. I am probably using the command incorrectly, but I have read a lot of the docs and don't know what I am doing wrong.

Full borg command line that led to the problem (leave out excludes and passwords)

I run the following command but nothing gets freed up. Perhaps I am not using the command correctly, or I am misunderstanding what it does.

[root@vps borg]# borg prune -v --list --keep-within=10d vps.linode.**********.com/
Keeping archive (rule: within #1): vps-2024-12-12T03:16:01.290874 Thu, 2024-12-12 03:16:01 [a12b5cbbf0284e267e3bab731da548dd855b6aa9c606b6c5c91e4defcc2d6bdf]
[root@vps borg]# du -sch
20G .
20G total
[root@vps borg]# borg compact vps.linode.**************.com/
[root@vps borg]# du -sch
20G .
20G total

Describe the problem you're observing.

I am expecting just the last 10 days of archives kept but I don't see any change and my backups are going back to mid November. It is Dec 13 at the time of this writing. I can append the ls -la of the actual archive if required.
I assume it is the one in this directory below:

[root@vps borg]# pwd
/mnt/storage/backups/borg
[root@vps borg]# ls -la
total 12
drwxr-xr-x 3 root root 4096 Dec 13 20:58 .
drwxr-xr-x 9 root root 4096 Oct 8 22:01 ..
drwx------ 3 root root 4096 Dec 13 21:41 vps.linode.***********.com
[root@vps borg]# cd vps.linode.***********.com/
[root@vps vps.linode.***********.com]# ls
config data hints.481 index.481 integrity.481 README
[root@vps vps.linode.***********.com]# cd data/
[root@vps data]# ls -la
total 12
drwx------ 3 root root 4096 Dec 13 21:10 .
drwx------ 3 root root 4096 Dec 13 21:41 ..
drwx------ 2 root root 4096 Dec 13 21:41 0
[root@vps data]# cd 0
[root@vps 0]# pwd
/mnt/storage/backups/borg/vps.linode.***********.com/data/0

Please note that it is not in append_only mode. See below:

[root@vps vps.linode.**************.com]# cat config
[repository]
version = 1
segments_per_dir = 1000
max_segment_size = 524288000
append_only = 0 <<<<<<<<<<<<<<<<<<<<<<<<<<
storage_quota = 0
additional_free_space = 0
id = ******************

Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.

Include any warning/errors/backtraces from the system logs

@ThomasWaldmann
Member

borg prune --list usually lists all archives and, for each one, tells whether it keeps or deletes it.

So, do you only have that one archive?

@dmastrop
Author

Hi Thomas, there are many archive files in there. Here is the full directory contents of the backups for this Linode VPS:

(I do think it is probably user error as I don't fully understand how the borg prune command works)

[root@vps borg]# ls -la
total 12
drwxr-xr-x 3 root root 4096 Dec 13 20:58 .
drwxr-xr-x 9 root root 4096 Oct 8 22:01 ..
drwx------ 3 root root 4096 Dec 13 21:55 vps.linode.czzzzzzzzzz.com

/mnt/storage/backups/borg

/mnt/storage/backups/borg/vps.linode.zzzzzzzzzzz.com/data/0
[root@vps 0]# ls -la
total 20350680
drwx------ 2 root root 4096 Dec 13 21:55 .
drwx------ 3 root root 4096 Dec 13 21:10 ..
-rw------- 1 root root 525005202 Nov 17 02:12 10
-rw------- 1 root root 524289408 Nov 17 02:12 11
-rw------- 1 root root 528682274 Nov 17 02:12 12
-rw------- 1 root root 526211319 Nov 17 02:12 13
-rw------- 1 root root 527101402 Nov 17 02:12 14
-rw------- 1 root root 527930141 Nov 17 02:12 15
-rw------- 1 root root 527856625 Nov 17 02:12 16
-rw------- 1 root root 524946037 Nov 17 02:12 17
-rw------- 1 root root 70252924 Nov 23 03:17 173
-rw------- 1 root root 528206123 Nov 17 02:12 18
-rw------- 1 root root 524878009 Nov 17 02:12 19
-rw------- 1 root root 524475057 Nov 17 02:10 2
-rw------- 1 root root 527028211 Nov 17 02:12 20
-rw------- 1 root root 527810693 Nov 17 02:12 21
-rw------- 1 root root 524922415 Nov 17 02:12 22
-rw------- 1 root root 525565076 Nov 17 02:13 24
-rw------- 1 root root 524847007 Nov 17 02:13 25
-rw------- 1 root root 525533792 Nov 17 02:13 26
-rw------- 1 root root 524642089 Nov 17 02:13 27
-rw------- 1 root root 525435164 Dec 1 03:17 277
-rw------- 1 root root 524825440 Dec 1 03:17 279
-rw------- 1 root root 529084007 Nov 17 02:13 28
-rw------- 1 root root 50926935 Dec 1 03:17 280
-rw------- 1 root root 524358255 Dec 2 03:17 293
-rw------- 1 root root 524995695 Dec 2 03:17 295
-rw------- 1 root root 109220129 Nov 17 02:13 30
-rw------- 1 root root 364495075 Dec 5 03:17 336
-rw------- 1 root root 528811898 Dec 6 03:17 349
-rw------- 1 root root 140795205 Dec 6 03:17 350
-rw------- 1 root root 258102782 Dec 10 03:17 403
-rw------- 1 root root 296637536 Dec 11 03:17 416
-rw------- 1 root root 464882330 Dec 12 03:17 418
-rw------- 1 root root 299478504 Dec 12 03:17 429
-rw------- 1 root root 524311014 Dec 13 21:11 442
-rw------- 1 root root 524415589 Dec 13 21:11 443
-rw------- 1 root root 524680876 Dec 13 21:12 444
-rw------- 1 root root 524746391 Dec 13 21:12 445
-rw------- 1 root root 524393124 Dec 13 21:12 446
-rw------- 1 root root 528008813 Dec 13 21:12 447
-rw------- 1 root root 378706729 Dec 13 21:12 448
-rw------- 1 root root 554 Dec 13 21:55 483
-rw------- 1 root root 17 Dec 13 21:55 485
-rw------- 1 root root 524298434 Nov 17 02:10 5
-rw------- 1 root root 528259509 Nov 17 02:11 6
-rw------- 1 root root 525777757 Nov 17 02:11 7
-rw------- 1 root root 524361203 Nov 17 02:11 8
-rw------- 1 root root 524663090 Nov 17 02:12 9

Here is my configuration as well (I grep'ed out all the stuff not configured)
I have a restrictive retention policy, so I am not sure why so many files are still accumulating. Please see below:

$ grep -Ev "^#|^\s*#|^$" config.yaml
location:
    source_directories:
        - /boot
        - /etc
        - /opt
        - /root
        - /usr
        - /var
    repositories:
        - /mnt/storage/backups/borg/vps.linode.zzzzzzzzzzzz.com
    exclude_patterns:
        - /var/lib/docker
retention: <<<<<<<<<<<<<<<<<<
    keep_daily: 1
    keep_weekly: 1
    keep_monthly: 1
hooks:
    before_backup:
        - /etc/borgmatic/before.sh
    after_backup:
        - /etc/borgmatic/after.sh
    on_error:
        - /etc/borgmatic/error.sh

warm regards
Dave

@ThomasWaldmann
Copy link
Member

You don't need to list repo_dir/data/ contents here, it is not relevant. What you see there are segment files and NOT archives.

Please post the complete output of borg prune -v --list --keep-within=10d vps.linode.**********.com/ and borg list vps.linode.**********.com/.

@dmastrop
Author

dmastrop commented Dec 14, 2024

hi Thomas, Thanks for the help. Please see below:

(Note that 12-13 is missing because /mnt/storage was at 99% full. I freed up some space with a different backup, and as you can see there is a 12-14 backup that was successful.)
The cronjob runs a borg backup once a day.

# borg prune -v --list --keep-within=10d vps.linode.zzzzzzzzzzz.com/
Keeping archive (rule: within #1):           vps-2024-12-14T03:16:01.348459       Sat, 2024-12-14 03:16:01 [77..09]
Keeping archive (rule: within #2):           vps-2024-12-12T03:16:01.290874       Thu, 2024-12-12 03:16:01 [a1..df]

I now see what you mean about archives vs. segment files. I ran the same command with 1d and, as expected, it will prune the 12-12 archive backup:

# borg prune -v --list --keep-within=1d vps.linode.zzzzzzzzzzzzzz.com/
Keeping archive (rule: within #1):           vps-2024-12-14T03:16:01.348459       Sat, 2024-12-14 03:16:01 [77..09]
Pruning archive (1/1):                       vps-2024-12-12T03:16:01.290874       Thu, 2024-12-12 03:16:01 [a1..df]

[root@vps borg]# borg list vps.linode.zzzzzzzz.com
vps-2024-12-14T03:16:01.348459       Sat, 2024-12-14 03:16:01 [77..09]

I see this quote from you in another thread: "Please be aware that (for borg 1.2) borg compact (without --cleanup-commits) is part of regular repo maintenance (not just a one time upgrade thing).

One does not necessarily need to run it after each delete/prune command, but it needs to be run on a regular basis to free up disk space."
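Putting the two steps together, a minimal maintenance sketch along those lines (the repository path here is a placeholder, and the script only echoes the commands when borg is not installed) might be:

```shell
#!/bin/sh
# Sketch of regular repo maintenance: prune marks archives as deleted,
# compact actually frees the disk space afterwards.
# REPO is a placeholder; point it at your own repository.
REPO="/mnt/storage/backups/borg/vps.example.com"

prune_cmd="borg prune -v --list --keep-within=10d $REPO"
compact_cmd="borg compact $REPO"

if command -v borg >/dev/null 2>&1; then
    $prune_cmd && $compact_cmd
else
    # borg is not installed in this environment; show what would run
    echo "would run: $prune_cmd"
    echo "would run: $compact_cmd"
fi
```

Run from cron after the nightly backup so pruned segment space is reclaimed on the same schedule.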

So if I run borg compact, it should remove a lot of the segments, correct?
I have not run borg compact yet.

I am running borg 1.4.

warm regards
Dave

@ThomasWaldmann
Member

Hmm, I thought you had run borg compact (see top post)?

Yes, to free space, you need to do that.

Everything that is left after that is used by archive(s).

@dmastrop
Author

hi Thomas

Yes, I had run it prior, but I think it was not pruning anything because keep-within was 10d and the single archive (at that point) was dated 12-12, so nothing was marked as pruned.

I just ran borg compact now, after pruning the 12-12 archive, and as expected it got rid of the 12-12 archive:

[root@vps borg]# borg compact vps.linode.zzzzzzzz.com/
[root@vps borg]# borg list vps.linode.zzzzzzz.com
vps-2024-12-14T03:16:01.348459 Sat, 2024-12-14 03:16:01 [77dfaeac0ffd3fde8e5ac50d229ac881f54f6fe82baaba2a88d61f3cdab0f109]

But there was very little reduction in current /mnt/storage usage (it dropped from 72% to 71%)

Does this mean that all of those data segments in my earlier post are required to reconstruct the full backup, and that there is no way to reduce the total storage footprint of the backup any further? (See below, at 20G.)

[root@vps borg]# du -sch
20G .
20G total

warm regards
Dave

@ThomasWaldmann
Member

@dmastrop If there is only 1 archive in there left and you ran compact, all the data is referenced by that archive.

The only other possibility is that the repo somehow has a lot of orphan chunks, you can use borg check -v to check that and borg check --repair -v to fix the repo if that is the case.

@dmastrop
Author

hi Thomas @ThomasWaldmann

Thanks for clearing up my confusion on the matter

I ran borg check and it looks like the backup is OK:

[root@vps borg]# borg check -v vps.linode.zzzzzzzzz.com/
Starting repository check
finished segment check at segment 504
Starting repository index check
Index object count match.
Finished full repository check, no problems found.
Starting archive consistency check...
Analyzing archive vps-2024-12-14T03:16:01.348459 (1/1)
Archive consistency check complete, no problems found.

I manually ran borgmatic just now to see the effect of adding another archive on disk space, and as expected the 1% was added back (71% to 72%), so the delta is not that large. This is with 2 archives. I will have to live with storage riding at around this level for now and trim the backups of other data.

My retention levels are below; they are set to 1 each for daily, weekly, and monthly:

retention:
    keep_daily: 1
    keep_weekly: 1
    keep_monthly: 1

To enforce the retention policy, does borg prune have to be run manually? Or is borg prune run implicitly on a schedule in accordance with my retention policy above (in the /etc/borgmatic folder)?
The reference is below at the bottom of this post.
If it runs automatically, that might explain why there were only 2 archives to begin with (one daily and perhaps one weekly).
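For what it's worth, a hedged sketch of running it explicitly (this assumes borgmatic's action-style prune and compact subcommands from recent versions; verify against `borgmatic --help` on your install):

```shell
#!/bin/sh
# Hedged sketch: invoke borgmatic's prune and compact actions from cron so
# the retention: section of config.yaml is applied on a schedule.
# The subcommand names are an assumption; check `borgmatic --help`.
prune_cmd="borgmatic prune --verbosity 1"
compact_cmd="borgmatic compact --verbosity 1"

if command -v borgmatic >/dev/null 2>&1; then
    $prune_cmd && $compact_cmd
else
    # borgmatic is not installed here; show what would run
    echo "would run: $prune_cmd && $compact_cmd"
fi
```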

Thanks for helping me to understand this

warm regards
Dave

===== reference

A "Borg retention policy" refers to the set of rules in the Borg backup software that determines how long backups are kept before being automatically deleted. It lets you specify how many daily, weekly, monthly, or yearly backups to retain, preventing your backup storage from filling up indefinitely; essentially, it is a way to manage the lifespan of your backups based on a defined time frame.

Key points about Borg retention policy:

"Prune" command: the primary way to enforce a retention policy in Borg is the borg prune command, which analyzes existing backups and deletes those that fall outside the specified retention rules.

Time-based options: when using borg prune, you can specify how many backups to keep per time unit (days, weeks, months, or years) using options like --keep-daily, --keep-weekly, and --keep-monthly.

Flexibility: you can combine different time-based options to retain specific backup sets based on their age.
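The time-based options above can be combined in a single invocation; a sketch follows (the keep counts and repository path are illustrative placeholders, not a recommendation):

```shell
#!/bin/sh
# Sketch: keep the newest archive per day for 7 days, per week for 4 weeks,
# and per month for 6 months; everything older is marked deleted.
# REPO and the keep counts are placeholders.
REPO="/mnt/storage/backups/borg/vps.example.com"
prune_cmd="borg prune -v --list --keep-daily=7 --keep-weekly=4 --keep-monthly=6 $REPO"

if command -v borg >/dev/null 2>&1; then
    $prune_cmd && borg compact "$REPO"  # compact afterwards to free space
else
    echo "would run: $prune_cmd"
fi
```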

@ThomasWaldmann
Member

Maybe some borgmatic user can answer that. ^
