27th CoNLL 2023: Singapore
- Jing Jiang, David Reitter, Shumin Deng: Proceedings of the 27th Conference on Computational Natural Language Learning, CoNLL 2023, Singapore, December 6-7, 2023. Association for Computational Linguistics 2023, ISBN 979-8-89176-039-4
- Frontmatter.
- Yuhan Zhang, Edward Gibson, Forrest Davis: Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics. 1-14
- Xiaomeng Ma, Lingyu Gao, Qihui Xu: ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind. 15-26
- Christian Bentz: The Zipfian Challenge: Learning the statistical fingerprint of natural languages. 27-37
- Xiang Zhang, Shizhu He, Kang Liu, Jun Zhao: On the Effects of Structural Modeling for Neural Semantic Parsing. 38-57
- Aditya R. Vaidya, Javier Turek, Alexander Huth: Humans and language models diverge when predicting repeating text. 58-69
- Urban Knuples, Diego Frassinelli, Sabine Schulte im Walde: Investigating the Nature of Disagreements on Mid-Scale Ratings: A Case Study on the Abstractness-Concreteness Continuum. 70-86
- Mohammad Akbari, Saeed Ranjbar Alvar, Behnam Kamranian, Amin Banitalebi-Dehkordi, Yong Zhang: ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages. 87-107
- Karin de Langis, Dongyeop Kang: A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models. 108-121
- Daiki Asami, Saku Sugawara: PROPRES: Investigating the Projectivity of Presupposition with Various Triggers and Environments. 122-137
- Dongwon Ryu, Meng Fang, Gholamreza Haffari, Shirui Pan, Ehsan Shareghi: A Minimal Approach for Natural Language Action Space in Text-based Games. 138-154
- Gijs Wijnholds, Michael Moortgat: Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization. 155-164
- Yassir El Mesbahi, Atif Mahmud, Abbas Ghaddar, Mehdi Rezagholizadeh, Philippe Langlais, Prasanna Parthasarathi: On the utility of enhancing BERT syntactic bias with Token Reordering Pretraining. 165-182
- Risako Owan, Maria L. Gini, Dongyeop Kang: Quirk or Palmer: A Comparative Study of Modal Verb Frameworks with Annotated Datasets. 183-199
- Donghyun Lee, Minkyung Park, Byung-Jun Lee: Quantifying Information of Tokens for Simple and Flexible Simultaneous Machine Translation. 200-210
- Dama Sravani, Radhika Mamidi: Enhancing Code-mixed Text Generation Using Synthetic Data Filtering in Neural Machine Translation. 211-220
- Ondrej Skopek, Rahul Aralikatte, Sian Gooding, Victor Carbune: Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization. 221-237
- Luke Gessler, Nathan Schneider: Syntactic Inductive Bias in Transformer Language Models: Especially Helpful for Low-Resource Languages? 238-253
- Aron Molnar, Jaap Jumelet, Mario Giulianelli, Arabella Sinclair: Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue. 254-273
- Kaiser Sun, Adina Williams, Dieuwke Hupkes: The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks. 274-293
- Lucas Weber, Elia Bruni, Dieuwke Hupkes: Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning. 294-313
- Ankit Pal, Logesh Kumar Umapathi, Malaikannan Sankarasubbu: Med-HALT: Medical Domain Hallucination Test for Large Language Models. 314-334
- Brielen Madureira, Pelin Çelikkol, David Schlangen: Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing. 335-351
- Bram van Dijk, Max J. van Duijn, Suzan Verberne, Marco Spruit: ChiSCor: A Corpus of Freely-Told Fantasy Stories by Dutch Children for Computational Linguistics and Cognitive Science. 352-363
- Esra Dönmez, Pascal Tilli, Hsiu-Yu Yang, Ngoc Thang Vu, Carina Silberer: HNC: Leveraging Hard Negative Captions towards Models with Fine-Grained Visual-Linguistic Comprehension Capabilities. 364-388
- Max J. van Duijn, Bram van Dijk, Tom Kouwenhoven, Werner de Valk, Marco Spruit, Peter van der Putten: Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests. 389-402
- Jarad Forristal, Fatemehsadat Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick: A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation. 403-413
- Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen: How Fragile is Relation Extraction under Entity Replacements? 414-423
- Yuiga Wada, Kanta Kaneda, Komei Sugiura: JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models. 424-435
- Taelin Karidi, Leshem Choshen, Gal Patel, Omri Abend: MuLER: Detailed and Scalable Reference-based Evaluation. 436-455
- Yunke He, Xixian Liao, Jialing Liang, Gemma Boleda: The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese. 456-475
- Nathan Roll, Calbert Graham, Simon Todd: PSST! Prosodic Speech Segmentation with Transformers. 476-487
- Shinjini Ghosh, Yoon Kim, Ramón Fernandez Astudillo, Tahira Naseem, Jacob Andreas: Alignment via Mutual Information. 488-497
- Mathieu Dehouck: Challenging the "One Single Vector per Token" Assumption. 498-507
- Gulinigeer Abudouwaili, Wayit Abliz, Kahaerjiang Abiderexiti, Aishan Wumaier, Nian Yi: Strategies to Improve Low-Resource Agglutinative Languages Morphological Inflection. 508-520
- Clayton Fields, Casey Kennington: Exploring Transformers as Compact, Data-efficient Language Models. 521-531
- Taiga Ishii, Yusuke Miyao: Tree-shape Uncertainty for Analyzing the Inherent Branching Bias of Unsupervised Parsing Models. 532-547
- Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, David Bau: Future Lens: Anticipating Subsequent Tokens from a Single Hidden State. 548-560
- Jin Zhao, Nianwen Xue, Bonan Min: Cross-Document Event Coreference Resolution: Instruct Humans or Instruct GPT? 561-574
- Sagnik Ray Choudhury, Jushaan Kalra: Implications of Annotation Artifacts in Edge Probing Test Datasets. 575-586
- Mohammad Reza Ghasemi Madani, Pasquale Minervini: REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization. 587-602