I'm interested in leveraging features learned on Japanese manga to do transfer learning on the problem of detecting individual characters in a dataset of handwritten Japanese. While looking through the train_*.py scripts, I saw that you were providing a train_mask_dir argument. I realize that one of the outputs of comic-text-detector is an image mask, so it makes sense that the model was trained on image segmentation annotations, but I'm only interested in the text block detection module. How can I check out a model pretrained on comics and then fine-tune it on my dataset?
As an auxiliary question: you mention in the README that "All models were trained on around 13 thousand anime & comic style images, 1/3 from Manga109-s". Does this mean that the entirety of Manga109-s was used during training and made up a third of the overall training dataset, or that you took only a third of Manga109-s and used that smaller subsample in your training data? I ask because Manga109-s does not provide image segmentation annotations. Did you use only the bounding box annotations, or did you also make use of the Manga109 image segmentation annotations produced for the paper Unconstrained Text Detection in Manga?
precondition changed the title from "Are image segmentation necessary for training?" to "Are image segmentations necessary for training?" on Jan 5, 2024.
No, the detector and the segmentation head only share the same feature extractor; as you can see, they each have their own training script. The only reason for this back then was to save VRAM (on a 6 GB card), and it did ease the training.
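A minimal sketch of that layout, assuming PyTorch-style modules; the class and attribute names here are illustrative, not the repo's actual ones:

```python
# Illustrative sketch only: a shared feature extractor with two heads
# that are trained by separate scripts. Names (backbone, det_head,
# seg_head) are assumptions, not comic-text-detector's real modules.
import torch.nn as nn

class SharedBackboneModel(nn.Module):
    def __init__(self, backbone: nn.Module, det_head: nn.Module, seg_head: nn.Module):
        super().__init__()
        self.backbone = backbone  # shared feature extractor
        self.det_head = det_head  # DBNet-style text block detection
        self.seg_head = seg_head  # pixel-level text mask segmentation

    def forward(self, x, head: str = "det"):
        feats = self.backbone(x)
        # Each train_*.py script optimizes only its own head, so both
        # heads never have to be trained in VRAM at the same time.
        return self.det_head(feats) if head == "det" else self.seg_head(feats)
```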
Subsamples from Manga109-s. I mainly used the model from Unconstrained Text Detection in Manga to obtain masks from the manga pages, then manually picked the ones that seemed less erroneous.
The detection head is initialized from the segmentation head; you may follow the train_db script (or load the full model and delete the segmentation head) to train a new detection head. But I don't think it's necessarily a prerequisite: it can ease training, or you can just train a DBNet from scratch if you have enough data and VRAM.
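A hedged sketch of the "load the full model and delete the segmentation head" route; the checkpoint filename, the "seg_net." key prefix, and build_model() are assumptions to be checked against your actual checkpoint's state_dict keys:

```python
import torch

# Assumptions: "comictextdetector.pt", the "seg_net." prefix for
# segmentation-head weights, and build_model() are placeholders --
# inspect the checkpoint's state_dict keys to find the real prefix.
ckpt = torch.load("comictextdetector.pt", map_location="cpu")
state_dict = ckpt["weights"] if "weights" in ckpt else ckpt

# Drop segmentation-head parameters; keep backbone + detection head.
det_state = {k: v for k, v in state_dict.items()
             if not k.startswith("seg_net.")}

model = build_model()  # hypothetical: your detector definition, e.g. following train_db.py
missing, unexpected = model.load_state_dict(det_state, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

# Fine-tune on your handwriting dataset; optionally freeze the shared
# backbone first ("backbone" attribute is assumed here).
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

Loading with strict=False lets the detection head keep its fresh (or segmentation-initialized) weights even when the filtered state_dict doesn't cover every parameter.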