Augmenting Biomedical Named Entity Recognition with General-domain Resources

Y Yin, H Kim, X Xiao, CH Wei, J Kang, Z Lu, H Xu, M Fang, Q Chen
arXiv preprint arXiv:2406.10671, 2024
Training a neural network-based biomedical named entity recognition (BioNER) model usually requires extensive and costly human annotations. While several studies have employed multi-task learning with multiple BioNER datasets to reduce human effort, this approach does not consistently yield performance improvements and may introduce label ambiguity across different biomedical corpora. We aim to tackle these challenges through transfer learning from easily accessible resources with fewer concept overlaps with biomedical datasets. In this paper, we propose GERBERA, a simple yet effective method that uses a general-domain NER dataset for training. Specifically, we perform multi-task learning to train a pre-trained biomedical language model on both the target BioNER dataset and the general-domain dataset, and subsequently fine-tune the model on the BioNER dataset alone. We systematically evaluated GERBERA on five datasets covering eight entity types, collectively comprising 81,410 instances. Despite using fewer biomedical resources, our models outperformed baseline models trained with multiple additional BioNER datasets. Specifically, our models consistently outperformed the baselines on six of the eight entity types, achieving an average improvement of 0.9% over the best baseline performance across eight biomedical entity types sourced from five different corpora. Our method was especially effective on BioNER datasets with limited data, yielding a 4.7% improvement in F1 score on the JNLPBA-RNA dataset.
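The two-stage procedure described above (multi-task training on the target BioNER dataset plus a general-domain NER dataset, followed by fine-tuning on the BioNER dataset only) can be sketched as a batch schedule. This is a minimal illustrative sketch, not GERBERA's actual implementation: the function names, the uniform corpus-sampling strategy, and the step counts are all assumptions made for clarity.

```python
import random


def multitask_schedule(bio_data, general_data, steps, seed=0):
    """Stage 1 (hypothetical sketch): interleave examples from the target
    BioNER corpus and a general-domain NER corpus. GERBERA's real sampling
    strategy may differ; uniform source choice is an assumption here."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(steps):
        source = rng.choice(["bioner", "general"])
        pool = bio_data if source == "bioner" else general_data
        schedule.append((source, rng.choice(pool)))
    return schedule


def two_stage_schedule(bio_data, general_data, multitask_steps, finetune_steps):
    """Full schedule: multi-task stage, then BioNER-only fine-tuning,
    mirroring the training order described in the abstract."""
    stage1 = multitask_schedule(bio_data, general_data, multitask_steps)
    # Stage 2 draws exclusively from the target BioNER dataset.
    stage2 = [("bioner", ex) for ex in bio_data[:finetune_steps]]
    return stage1 + stage2
```

In an actual setup, each scheduled example would be a batch fed to a shared pre-trained biomedical language model with a task-specific tagging head; the sketch only captures the data-sourcing order that distinguishes the two stages.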