Is timm/utils/metrics.py reliable for completely imbalanced datasets? #1272
Unanswered
bryanpiguave
asked this question in Q&A
Replies: 1 comment
-
Does this repo have an example of how to build and display the confusion matrix after train.py is run?
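I have not found a built-in example in the repo. A minimal sketch of the usual approach, assuming you have already collected predicted and true labels from a validation pass (the function name and toy labels below are my own, not from timm):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Count (true label, predicted label) pairs into a
    num_classes x num_classes grid: rows = true, cols = predicted."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels gathered after running the trained model
# over the validation set (e.g. argmax of the logits per sample).
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 1]
print(confusion_matrix(y_true, y_pred, 3))
```

In practice you would fill `y_true`/`y_pred` by iterating the validation loader under `torch.no_grad()` and taking the argmax of the model output, then display the matrix with e.g. matplotlib's `imshow`.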
-
I am currently working on long-tailed recognition. Training seemed fine: the model reached 91% top-1 accuracy on the validation set. However, when I built the confusion matrix, the weighted average of the per-class accuracies was only 72%. As far as I can tell, the implemented accuracy is not class-weighted: it is computed per batch and gradually averaged over samples. Should I consider the implemented function reliable for my study case?
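The gap described above is expected on an imbalanced set: a sample-averaged metric lets the head classes dominate, while a class-balanced (macro) average weights every class equally. A toy sketch with invented data (both helpers below are my own, not timm functions) showing how the two can diverge:

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Sample-averaged accuracy: every sample counts equally,
    so frequent classes dominate (this is what top-1 measures)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def balanced_accuracy(y_true, y_pred, num_classes):
    """Mean of per-class recalls: every class counts equally,
    regardless of how many samples it has."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = []
    for c in range(num_classes):
        mask = y_true == c
        if mask.any():
            recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Imbalanced toy data: 90 samples of class 0, 10 of class 1;
# the minority class is mostly misclassified.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.array([0] * 90 + [1] * 2 + [0] * 8)

print(overall_accuracy(y_true, y_pred))       # 0.92 — looks strong
print(balanced_accuracy(y_true, y_pred, 2))   # 0.6  — exposes the tail
```

So the function is "reliable" for what it measures (sample-averaged top-k accuracy), but for a long-tailed study a class-balanced metric computed from the confusion matrix is the more informative number.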