
Different retention periods by tenant are not working as expected #10210

Open
KrishnaJyothika opened this issue Dec 11, 2024 · 1 comment
Labels: bug (Something isn't working)


What is the bug?

Hi,

Different retention periods by tenant are not working as expected. Despite updating the runtime configuration, the retention policy still defaults to 30 days for all tenants.

```yaml
overrides:
  tenant:
    compactor_blocks_retention_period: 60d
```
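For comparison, a runtime configuration that sets distinct retention periods for two tenants would look roughly like this (a sketch; `tenant-a` and `tenant-b` are placeholder tenant IDs):

```yaml
overrides:
  tenant-a:
    compactor_blocks_retention_period: 60d
  tenant-b:
    compactor_blocks_retention_period: 90d
```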


Runtime Configuration: The retention periods appear updated in the /overrides-exporter/runtime_config?mode=diff API.

Cortex Limits Metric: The cortex_limits_overrides metric does not reflect the updated retention period under limit_name. (Is this expected? The metric shows other overridden limits for the defined tenants, but not the retention period.)

Storage Account: Only the last 30 days of blocks are present for the tenants, indicating that the default retention period (30 days) is still being used.
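One way to verify the second point is to query the metric filtered by limit name; the query below is a sketch assuming the usual limit_name label value for this setting:

```
cortex_limits_overrides{limit_name="compactor_blocks_retention_period"}
```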

How to reproduce it?

1. Configure different retention periods for two tenants in the runtime configuration.
2. Check the /overrides-exporter/runtime_config?mode=diff API to confirm the updates.
3. Observe the cortex_limits_overrides metric to see whether the retention periods are reflected.
4. Check the storage account to see which blocks are present for each tenant.

What did you think would happen?

I followed the steps in the document below and expected the per-tenant retention periods to take effect:
https://grafana.com/docs/mimir/latest/configure/configure-metrics-storage-retention/

What was your environment?

Infrastructure - Kubernetes
Deployment - helm

Any additional context to share?

Mimir version - 2.14.2

KrishnaJyothika added the bug label on Dec 11, 2024
@KrishnaJyothika (Author) commented:

After further debugging, it seems there is no issue with the runtime configuration. I can see the retention period is being picked up correctly for the different tenants, but the data blocks are still being deleted from the backend storage.

Compactor logs
ts=2024-12-11T10:37:56.080414097Z caller=blocks_cleaner.go:629 level=info component=cleaner run_id=123456 task=clean_up_users user=tenant msg="marked blocks for deletion" num_blocks=0 retention=1440h0m0s
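As a sanity check, the retention value in the log line (retention=1440h0m0s, a Go-style duration) does correspond to the configured 60d, which supports the conclusion that the override is being applied:

```python
from datetime import timedelta

# The compactor logs retention as a Go duration, e.g. "1440h0m0s".
retention = timedelta(hours=1440)
print(retention.days)  # 60 -> matches the configured 60d override
```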

Any help with fixing this issue would be appreciated.
