
Query regarding Split-Test #15

Open
evantkchong opened this issue Feb 11, 2022 · 0 comments
@evantkchong

Hello there! Firstly, thank you for your paper for the AI City Challenge!

I was going through your paper but had difficulty understanding a few things, which I hope you can spare the time to clarify. Section 4.2 (Validation Data) mentions:

Since each team has only 20 submissions, it is necessary to use the validation set to evaluate methods offline. We split the training set of CityFlow-V2 into the training set and the validation set. For convenience, the validation set is denoted as Split-Test. Split-Test includes 18701 images of 88 vehicles.

So according to this, the Original Training Set of CityFlowV2-ReID (52,717 images, 440 vehicles) is split into:

  • New Training Set consisting of 34,016 images, 352 vehicles
  • New Validation Set (Split-Test) consisting of 18,701 images, 88 vehicles

The Original Test Set from CityFlowV2-ReID, as well as its queries, is left untouched.
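
For context, my working assumption is that the split is made over vehicle IDs rather than over individual images, roughly like the sketch below. Everything in it is my own guess and is not taken from the paper or the repo: the (img_path, pid, camid) tuple format, the random split, and the seed.

```python
import random

def split_by_vehicle_id(train_items, num_val_ids=88, seed=0):
    """Naive guess at reproducing the 352/88 vehicle-ID split.

    train_items: list of (img_path, pid, camid) tuples for the Original
    Training Set (52,717 images, 440 vehicle IDs). Whether the paper used
    a random split, and with which seed, is unknown to me.
    """
    pids = sorted({pid for _, pid, _ in train_items})
    random.Random(seed).shuffle(pids)
    val_pids = set(pids[:num_val_ids])                             # 88 IDs -> Split-Test
    new_train = [t for t in train_items if t[1] not in val_pids]   # remaining 352 IDs
    split_test = [t for t in train_items if t[1] in val_pids]      # images of the 88 held-out IDs
    return new_train, split_test
```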


For the results shown in Tables 1 and 2 of the paper, it seems we should train on the augmented New Training Set and evaluate on the New Validation Set (Split-Test), with no validation happening during training.

However, in the repository I can't seem to find the code for splitting the Original Training Set into the New Training Set and the New Validation Set (Split-Test), and evaluation does not appear to use the New Validation Set (Split-Test) either.

Instead, it seems like we are just training on the Original Training Set and then evaluating on the Original Test Set? I say this because in https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/datasets/aic.py, self.query and self.gallery appear to be just the Original Test Set, and https://github.com/michuanhaohao/AICITY2021_Track2_DMT/blob/50f27363532ae712868ff1ceaf128a3bbec426ac/test.py then evaluates on that Original Test Set.

Is this understanding correct? If it is, could you advise how we should generate the New Validation Set (Split-Test)? I am hoping to compare other methods to yours using it as an evaluation dataset.
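
In case it helps make the question concrete, below is how I currently imagine Split-Test would be turned into a query/gallery pair for offline evaluation. The protocol (one query image per vehicle per camera, remaining images as gallery) is purely my own assumption and is not stated in the paper, so please correct me if the actual protocol is different.

```python
import random
from collections import defaultdict

def make_query_gallery(split_test_items, seed=0):
    """split_test_items: list of (img_path, pid, camid) tuples from Split-Test."""
    rng = random.Random(seed)
    by_pid_cam = defaultdict(list)
    for img_path, pid, camid in split_test_items:
        by_pid_cam[(pid, camid)].append((img_path, pid, camid))

    query, gallery = [], []
    for items in by_pid_cam.values():
        rng.shuffle(items)
        query.append(items[0])       # one query image per (vehicle, camera)
        gallery.extend(items[1:])    # remaining images form the gallery
    return query, gallery
```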
