
Are image segmentations necessary for training? #14

Open
precondition opened this issue Jan 5, 2024 · 2 comments
@precondition

I'm interested in leveraging features learned on Japanese manga to do transfer learning on the problem of detecting individual characters in a dataset of handwritten Japanese. While looking through the train_*.py scripts, I saw that you were providing a train_mask_dir argument. I realize that one of the outputs of comic-text-detector is an image mask, so it makes sense that the model was trained on image segmentation annotations, but I'm only interested in the text block detection module. How can I check out a model pretrained on comics and then fine-tune it with my dataset?

Auxiliary to this, you mention in the README that “All models were trained on around 13 thousand anime & comic style images, 1/3 from Manga109-s”. Does this mean that the entirety of Manga109-s was used during training and made up a third of the overall training dataset, or that you took only one third of Manga109-s and used that smaller subsample in your overall training data? I'm wondering because Manga109-s does not provide image segmentation annotations. Did you use just the bounding box annotations, or did you make use of the Manga109 image segmentation annotations made in the paper Unconstrained Text Detection in Manga?

@dmMaze
Owner

dmMaze commented Jan 6, 2024

No, the detector and segmentation head only share the same feature extractor; as you can see, each has its own training script. The only reason for this back then was to save VRAM (on a 6 GB card), and it did ease the training.

Subsamples from Manga109-s: I mainly used the Unconstrained Text Detection in Manga model to obtain masks from manga, and manually picked those that seemed less erroneous.
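
To picture that arrangement, here is a minimal PyTorch sketch of a shared feature extractor with two independently trained heads; the class and attribute names are illustrative assumptions, not the repo's actual modules:

```python
import torch
import torch.nn as nn

class SharedBackboneModel(nn.Module):
    # Hypothetical sketch: one shared feature extractor, two heads that are
    # trained by separate scripts (names are assumptions, not the repo's).
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.det_head = nn.Conv2d(32, 1, 1)       # text-block detection head
        self.seg_head = nn.Conv2d(32, 1, 1)       # pixel-level text mask head

    def forward(self, x, task="det"):
        feats = self.backbone(x)
        # Only one head runs per training step, which is what keeps the
        # VRAM footprint small enough for a 6 GB card.
        return self.det_head(feats) if task == "det" else self.seg_head(feats)

model = SharedBackboneModel()
# A detector-only training step optimizes the backbone plus the detection
# head and leaves the segmentation head untouched (the "own training
# script" idea):
opt = torch.optim.Adam(
    list(model.backbone.parameters()) + list(model.det_head.parameters()))
out = model(torch.randn(1, 3, 64, 64), task="det")
```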

@dmMaze
Owner

dmMaze commented Jan 6, 2024

The detection head is initialized from the segmentation head; you may follow the train_db script (or load the full model and delete the segmentation head) to train a new detection head. But I don't think it's necessarily a prerequisite: it could ease the training, or you can just train a DBNet from scratch if you have enough data and VRAM.
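
For the "load the full model and delete the segmentation head" route, a sketch along these lines might work; the checkpoint filename, the `seg_head.` key prefix, and `build_detection_model()` are assumptions to verify against the actual checkpoint's state_dict:

```python
import torch

# Assumed checkpoint name and key prefix; inspect state.keys() on the real
# comic-text-detector checkpoint to find the segmentation head's prefix.
ckpt = torch.load("comictextdetector.pt", map_location="cpu")
state = ckpt.get("model", ckpt)  # some checkpoints nest weights under "model"

# Drop everything belonging to the segmentation head so that only the shared
# feature extractor (and any detection weights) get loaded.
pruned = {k: v for k, v in state.items() if not k.startswith("seg_head.")}

model = build_detection_model()  # hypothetical constructor for the detector
missing, unexpected = model.load_state_dict(pruned, strict=False)
print("newly initialized:", missing)   # detection layers to be fine-tuned
print("left over:", unexpected)        # anything that should have been pruned

# From here, fine-tune on the handwritten-Japanese dataset following the
# train_db script's training loop.
```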
