metrics-generator causing a memory leak #4249
-
hi everyone
Replies: 2 comments 1 reply
-
You have quite a few dimensions defined in your config. My guess is you are generating a very large number of series. I would check the value of the metric tempo_metrics_generator_registry_active_series to see how many series you're generating and remove some of the less valuable dimensions in your config.
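If it helps, here's a rough sketch of a Prometheus alerting rule that watches that metric. It assumes Prometheus is already scraping the metrics-generator's /metrics endpoint; the group name, alert name, and threshold are just illustrative, not Tempo defaults:

```yaml
groups:
  - name: tempo-metrics-generator
    rules:
      - alert: MetricsGeneratorHighActiveSeries
        # Total series currently tracked by the metrics-generator registry.
        expr: sum(tempo_metrics_generator_registry_active_series) > 1000000
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "metrics-generator is tracking a very large number of active series"
```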
-
The number of dimensions matters less than their cardinality. You want to avoid high-cardinality dimensions like IDs and stick to low-cardinality ones like HTTP status code and HTTP route.
The reason to add them is so that span metrics can be broken down by these dimensions in Prometheus.
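For example, a dimensions list along those lines might look like the sketch below. The attribute names assume OTel-style span attributes and may differ from what your instrumentation actually emits; check the span_metrics section of your Tempo version's config docs for the exact field names:

```yaml
metrics_generator:
  processor:
    span_metrics:
      # Each entry becomes a label on the generated span metrics, so every
      # distinct value multiplies the series count. Keep these low-cardinality.
      dimensions:
        - http.status_code
        - http.route
        # Avoid attributes like user IDs, request IDs, or full URLs here.
```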
Are you saying you still see the memory impact from metrics 7 hours after you stopped sending traces? The Go runtime will hold on to memory it's not using, from the OS's perspective, as long as there's no memory pressure. This is common for garbage-collected languages. Perhaps you're seeing the impact of this behavior?
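One way to check is to compare heap actually in use with heap held idle from the OS, using the standard Go runtime metrics Tempo exposes. A rough sketch as Prometheus recording rules; the record names and the job label are placeholders for however you scrape the metrics-generator:

```yaml
groups:
  - name: tempo-metrics-generator-memory
    rules:
      # Heap bytes the Go runtime is actively using.
      - record: job:go_heap_inuse_bytes:sum
        expr: sum by (job) (go_memstats_heap_inuse_bytes{job="tempo-metrics-generator"})
      # Heap bytes still held from the OS but currently idle; a large value here
      # alongside a flat in-use heap points at normal GC behavior rather than a leak.
      - record: job:go_heap_idle_bytes:sum
        expr: sum by (job) (go_memstats_heap_idle_bytes{job="tempo-metrics-generator"})
```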