[6.11, 6.12] Constant heavy reads when there is unfinishable "Pending rebalance work" #795
Comments
I can confirm this also happens on actual hardware. There are heavy reads when the background target is full. Writes are unaffected.
It looks like the issue is related to […]. For example, we can create a filesystem with 2 disks […] and inspect the superblock with `bcachefs show-super`. Now, write some data to a folder having […] set. We have enough free space in the […], yet this causes heavy reads on the filesystem.
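A minimal sketch of the `show-super` step mentioned above (the loop-device path is only an illustrative assumption):

```sh
# Dump the superblock of one member device to inspect labels and the
# foreground/background target options (device path assumed)
sudo bcachefs show-super /dev/loop0
```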
On a multi-device filesystem, I have noticed that whenever the background_target becomes full, there are constant heavy reads by the rebalance thread.
Steps to reproduce:
Create two disk images. One will be used as the foreground_target (disk0), the other as the background_target (disk1). Here, both are 40 GB disks.
Attach them as loop devices (for mounting); a sketch of these commands follows.
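A minimal sketch of this setup (file names and loop-device paths are assumptions for illustration):

```sh
# Create two sparse 40 GB backing files
truncate -s 40G disk0.img   # will become the foreground_target (ssd)
truncate -s 40G disk1.img   # will become the background_target (hdd)

# Attach them as loop devices; losetup prints the device it allocated
sudo losetup -f --show disk0.img   # e.g. /dev/loop0
sudo losetup -f --show disk1.img   # e.g. /dev/loop1
```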
Format the loop devices as bcachefs: disk0's label is ssd (the foreground_target) and disk1's label is hdd (the background_target).
Mount the filesystem and write a 60 GB file (bigger than the background_target); a sketch of these steps follows.
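Roughly, the format, mount, and write steps look like this (labels, paths, and the exact write command are assumptions; option placement follows the usual `bcachefs format` usage):

```sh
# Label disk0 as ssd and disk1 as hdd; send foreground writes to ssd
# and use hdd as the background (rebalance) target
sudo bcachefs format \
    --label=ssd /dev/loop0 \
    --label=hdd /dev/loop1 \
    --foreground_target=ssd \
    --background_target=hdd

# Mount the multi-device filesystem
sudo mount -t bcachefs /dev/loop0:/dev/loop1 /mnt

# Write roughly 60 GB, i.e. more than the 40 GB background_target can hold
sudo dd if=/dev/urandom of=/mnt/bigfile bs=1M count=61440 status=progress
```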
Check `bcachefs fs usage`.
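For example (the `-h` flag and mount point are assumptions; output omitted):

```sh
# Human-readable usage breakdown per device, including the
# "Pending rebalance work" line
sudo bcachefs fs usage -h /mnt
```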
There is some pending rebalance work, but the background_target is full, so it cannot move the data. I can see the rebalance thread doing constant reads even after the data has been written.
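A sketch of how the constant reads can be observed (tool choice and the thread name are assumptions, not part of the original steps):

```sh
# Per-device throughput in MB/s every 2 seconds (requires sysstat);
# the reads show up on the loop devices backing the filesystem
iostat -xm 2

# The rebalance worker runs as a kernel thread; in recent kernels it is
# named bch-rebalance/<fs name> (naming is an assumption here)
ps ax | grep '[b]ch-rebalance'
```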
I expect some constant I/O by the filesystem to check whether the background_target has free space, but 300+ MB/s seems excessive. I tried waiting for more than an hour, but it did not stop. It triggers again if I remount the drive. It only stops if I delete the file I created and free up the background_target.
The underlying filesystem (where the loop device images are created) is btrfs (with compression=zstd:3). Host: […]
I will do some more testing on actual hardware.