Alex Warstadt
Postdoc at ETH Zürich (Previous: PhD at NYU)
Neural network acceptability judgments
A Warstadt, A Singh, SR Bowman
Transactions of the Association for Computational Linguistics 7, 625-641, 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
BLiMP: The benchmark of linguistic minimal pairs for English
A Warstadt, A Parrish, H Liu, A Mohananey, W Peng, SF Wang, ...
Transactions of the Association for Computational Linguistics 8, 377-392, 2020
Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually)
A Warstadt, Y Zhang, HS Li, H Liu, SR Bowman
EMNLP, 2020
When do you need billions of words of pretraining data?
Y Zhang, A Warstadt, HS Li, SR Bowman
ACL, 2020
Investigating BERT's knowledge of language: five analysis methods with NPIs
A Warstadt, Y Cao, I Grosu, W Peng, H Blix, Y Nie, A Alsop, S Bordia, ...
EMNLP, 2019
Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition
P Jeretic, A Warstadt, S Bhooshan, A Williams
ACL, 2020
What artificial neural networks can tell us about human language acquisition
A Warstadt, SR Bowman
Algebraic structures in natural language, 17-60, 2022
Findings of the BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora
A Warstadt, A Mueller, L Choshen, E Wilcox, C Zhuang, J Ciro, ...
Proceedings of the BabyLM Challenge at the 27th Conference on Computational …, 2023
Can neural networks acquire a structural bias from raw linguistic data?
A Warstadt, SR Bowman
CogSci, 2020
Verb argument structure alternations in word and sentence embeddings
K Kann, A Warstadt, A Williams, SR Bowman
SCiL, 2018
Linguistic analysis of pretrained sentence encoders with acceptability judgments
A Warstadt, SR Bowman
arXiv preprint arXiv:1901.03438, 2019
Does putting a linguist in the loop improve NLU data collection?
A Parrish, W Huang, O Agha, SH Lee, N Nangia, A Warstadt, K Aggarwal, ...
ACL findings, 2021
CLiMP: A benchmark for Chinese language model evaluation
B Xiang, C Yang, Y Li, A Warstadt, K Kann
EACL, 2021
What ingredients make for an effective crowdsourcing protocol for difficult NLU data collection tasks?
N Nangia, S Sugawara, H Trivedi, A Warstadt, C Vania, SR Bowman
ACL, 2021
NOPE: A corpus of naturally-occurring presuppositions in English
A Parrish, S Schuster, A Warstadt, O Agha, SH Lee, Z Zhao, SR Bowman, ...
CoNLL, 2021
Entailment semantics can be extracted from an ideal language model
W Merrill, A Warstadt, T Linzen
Proceedings of the 26th Conference on Computational Natural Language …, 2022
What Makes Reading Comprehension Questions Difficult?
S Sugawara, N Nangia, A Warstadt, SR Bowman
arXiv preprint arXiv:2203.06342, 2022
"Just" don’t ask: Exclusives and potential questions
A Warstadt
Proceedings of Sinn und Bedeutung 24 (2), 373-390, 2020
A geometric notion of causal probing
C Guerner, A Svete, T Liu, A Warstadt, R Cotterell
arXiv preprint arXiv:2307.15054, 2023