Hello there! Firstly, thank you for your paper for the AI City Challenge!
I was going through the paper but had difficulty understanding some things, which I hope you can spare the time to clarify. It is mentioned in Section 4.2 (Validation Data):
> Since each team has only 20 submissions, it is necessary to use the validation set to evaluate methods offline. We split the training set of CityFlow-V2 into the training set and the validation set. For convenience, the validation set is denoted as Split-Test. Split-Test includes 18701 images of 88 vehicles.
So according to this, the Original Training Set of CityFlowV2-ReID (52,717 images, 440 vehicles) is split into:

- New Training Set: 34,016 images, 352 vehicles
- New Validation Set (Split-Test): 18,701 images, 88 vehicles
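(As a quick sanity check, the numbers are consistent: 34,016 + 18,701 = 52,717 images and 352 + 88 = 440 vehicles, so the two subsets together account exactly for the Original Training Set.)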
The Original Test Set from CityFlowV2-ReID is untouched, as are its queries.
For the results shown in Tables 1 and 2 of the paper, it seems we should train on the (augmented) New Training Set and evaluate on the New Validation Set (Split-Test), with no validation happening during training.
However, in the repository, I can't seem to find code for splitting the Original Training Set into the New Training Set and the New Validation Set (Split-Test). Instead, it seems like we are just training on the Original Training Set and then evaluating on the Original Test Set? I say this because in https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/datasets/aic.py it looks like `self.query` and `self.gallery` are just the Original Test Set, and then in https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/test.py we evaluate on the Original Test Set.

Is this understanding correct? If it is, could you advise how we should generate the New Validation Set (Split-Test)? I am hoping to compare other methods to yours using it as an evaluation dataset.
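In case it helps make the question concrete, below is roughly the kind of identity-level split I have in mind. This is only a minimal sketch, not taken from your repository: it assumes the standard CityFlowV2-ReID `train_label.xml` with `imageName` and `vehicleID` attributes on each `Item`, and it picks the 88 held-out vehicle IDs at random with a fixed seed, which may well differ from how Split-Test was actually generated.

```python
import random
import xml.etree.ElementTree as ET
from collections import defaultdict

def make_split_test(label_xml="train_label.xml", num_val_ids=88, seed=0):
    """Split the original training identities into a New Training Set and Split-Test."""
    root = ET.parse(label_xml).getroot()

    # Group training image names by vehicle ID.
    id_to_images = defaultdict(list)
    for item in root.iter("Item"):
        id_to_images[item.attrib["vehicleID"]].append(item.attrib["imageName"])

    # Hold out `num_val_ids` identities (and all of their images) as Split-Test.
    ids = sorted(id_to_images)
    random.Random(seed).shuffle(ids)
    val_ids, train_ids = ids[:num_val_ids], ids[num_val_ids:]

    train_imgs = [img for vid in train_ids for img in id_to_images[vid]]
    val_imgs = [img for vid in val_ids for img in id_to_images[vid]]
    return train_imgs, val_imgs
```

If this is roughly right, I could then form the Split-Test query/gallery from the held-out images, but I am not sure whether that matches your protocol.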