At the moment, we always need to load:
However, `wikidata_to_mentions_normalized.json` is only really used when generating the DeezyMatch training set, so we could load it only in this case. It is also used to filter mentions in `T-Res/geoparser/ranking.py` (lines 100 to 141 in `1d887ea`).
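A minimal sketch of what that conditional loading could look like (the `load_resources` function and `deezy_training` flag are hypothetical names for illustration, not the current T-Res API):

```python
import json
from pathlib import Path


def load_resources(resources_dir: str, deezy_training: bool = False) -> dict:
    """Load the candidate-ranking resources.

    Hypothetical sketch: `wikidata_to_mentions_normalized.json` is only
    loaded when we are generating a DeezyMatch training set, since that
    is the only code path that really needs it.
    """
    path = Path(resources_dir)
    resources = {
        "mentions_to_wikidata": json.loads(
            (path / "mentions_to_wikidata_normalized.json").read_text()
        ),
    }
    if deezy_training:
        # Only needed to build the DeezyMatch training set.
        resources["wikidata_to_mentions"] = json.loads(
            (path / "wikidata_to_mentions_normalized.json").read_text()
        )
    return resources
```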
However, this filtering could be done directly in `wiki2gaz`; it'd make more sense there.
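For illustration, a sketch of what that build-time filtering could look like, assuming the file maps each Wikidata ID to a `{mention: score}` dict; the function name and threshold are made up here, not the actual logic in `ranking.py`:

```python
def filter_mentions(
    wikidata_to_mentions: dict[str, dict[str, float]],
    min_score: float = 0.0,
) -> dict[str, dict[str, float]]:
    """Drop mention variants whose score falls below a threshold.

    Hypothetical stand-in for the filtering currently done at load time
    in geoparser/ranking.py; running it once when wiki2gaz builds the
    resource would mean the shipped JSON is already filtered.
    """
    return {
        qid: {m: s for m, s in mentions.items() if s > min_score}
        for qid, mentions in wikidata_to_mentions.items()
    }
```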
Finally, `mentions_to_wikidata_normalized.json` and `mentions_to_wikidata.json` could be merged into just one resource, with a tuple with both scores for each mention–Wikidata pair.
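A rough sketch of that merge, assuming both files map each mention to a `{QID: score}` dict (raw scores in one, normalized scores in the other); note that JSON has no tuples, so the pair would end up serialized as a two-element list:

```python
import json


def merge_mention_scores(raw_path: str, normalized_path: str, out_path: str) -> None:
    """Combine raw and normalized scores into a single resource:
    mention -> {QID: [raw_score, normalized_score]}.
    """
    with open(raw_path) as f:
        raw = json.load(f)
    with open(normalized_path) as f:
        normalized = json.load(f)

    merged = {
        mention: {
            qid: [score, normalized[mention][qid]]
            for qid, score in candidates.items()
        }
        for mention, candidates in raw.items()
    }

    with open(out_path, "w") as f:
        json.dump(merged, f)


merge_mention_scores(
    "mentions_to_wikidata.json",
    "mentions_to_wikidata_normalized.json",
    "mentions_to_wikidata_merged.json",  # hypothetical output name
)
```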
On the other hand, we also require two further resources, which could be merged because they share the same ID. That'd require modifying the `wiki2gaz` scripts and also the T-Res linker.