Climbing the tower of treebanks: Improving low-resource dependency parsing via hierarchical source selection
Findings of the Association for Computational Linguistics: ACL-IJCNLP …, 2021 (aclanthology.org)
Abstract
Recent work on multilingual dependency parsing has focused on developing highly multilingual parsers that can be applied to a wide range of low-resource languages. In this work, we substantially outperform such a "one model to rule them all" approach with a heuristic selection of languages and treebanks on which to train the parser for a specific target language. Our approach, dubbed TOWER, first hierarchically clusters all Universal Dependencies languages based on their mutual syntactic similarity computed from human-coded URIEL vectors. For each low-resource target language, we then climb this language hierarchy starting from the leaf node of that language and heuristically choose the hierarchy level at which to collect training treebanks. This treebank selection heuristic is based on: (i) the aggregate size of all treebanks subsumed by the hierarchy level and (ii) the similarity of the languages in the training sample with the target language. For languages without development treebanks, we additionally use (ii) for model selection (i.e., early stopping) in order to prevent overfitting to development treebanks of the closest languages. Our TOWER approach shows substantial gains for low-resource languages over two state-of-the-art multilingual parsers, with more than 20 LAS point gains for some of those languages. Parsing models and code available at: https://github.com/codogogo/towerparse.
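The hierarchy-climbing selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors, treebank sizes, hierarchy, and both thresholds (`min_size`, `min_sim`) are made-up assumptions, and similarity is computed here as cosine over toy binary syntactic features standing in for URIEL vectors.

```python
# Hypothetical sketch of a TOWER-style treebank selection heuristic.
# All data below (vectors, sizes, hierarchy, thresholds) is illustrative.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy URIEL-like syntactic feature vectors (binary features; made up).
VECTORS = {
    "fo": [1, 1, 0, 1, 0],   # Faroese: low-resource target
    "is": [1, 1, 0, 1, 1],   # Icelandic
    "no": [1, 1, 1, 1, 1],   # Norwegian
    "de": [1, 0, 1, 1, 1],   # German
}

# Treebank sizes in sentences (made-up numbers; the target has none).
SIZES = {"fo": 0, "is": 40_000, "no": 30_000, "de": 200_000}

# A hand-built hierarchy for the sketch: each successive level, climbed
# up from the target's leaf node, subsumes a larger set of languages.
HIERARCHY = [["fo"], ["fo", "is"], ["fo", "is", "no"], ["fo", "is", "no", "de"]]

def select_level(target, hierarchy, min_size=50_000, min_sim=0.7):
    """Climb the hierarchy from the target's leaf and stop at the first
    level whose aggregate treebank size reaches min_size, provided every
    language the level adds stays sufficiently similar to the target.
    Both thresholds are assumptions for the sketch."""
    chosen = hierarchy[0]
    for level in hierarchy:
        sims = [cosine(VECTORS[target], VECTORS[l]) for l in level if l != target]
        if sims and min(sims) < min_sim:
            break  # climbing further would pull in too-distant languages
        chosen = level
        if sum(SIZES[l] for l in level) >= min_size:
            break  # enough aggregate training data collected
    return chosen

print(select_level("fo", HIERARCHY))  # → ['fo', 'is', 'no']
```

With these toy values the heuristic climbs past Icelandic (too little data alone) to include Norwegian, but stops before German, whose similarity to the target falls below the threshold; this mirrors the trade-off between training-data size and source-language similarity that the abstract describes.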