William Merrill
CORD-19: The COVID-19 open research dataset
LL Wang, K Lo, Y Chandrasekhar, R Reas, J Yang, D Eide, K Funk, ...
Workshop on NLP for COVID-19, 2020
Cited by 918*
How language model hallucinations can snowball
M Zhang, O Press, W Merrill, A Liu, NA Smith
arXiv preprint arXiv:2305.13534, 2023
Cited by 113
Competency problems: On finding and removing artifacts in language data
M Gardner, W Merrill, J Dodge, ME Peters, A Ross, S Singh, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 77
ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension
S Subramanian, W Merrill, T Darrell, M Gardner, S Singh, A Rohrbach
Empirical Methods in Natural Language Processing, 2022
Cited by 63
A formal hierarchy of RNN architectures
W Merrill, G Weiss, Y Goldberg, R Schwartz, NA Smith, E Yahav
Association for Computational Linguistics, 2020
Cited by 63
Provable limitations of acquiring meaning from ungrounded form: What will future language models understand?
W Merrill, Y Goldberg, R Schwartz, NA Smith
Transactions of the Association for Computational Linguistics 9, 1047-1060, 2021
Cited by 59
Sequential neural networks as automata
W Merrill
Deep Learning and Formal Languages (ACL workshop), 2019
Cited by 56
Saturated transformers are constant-depth threshold circuits
W Merrill, A Sabharwal, NA Smith
Transactions of the Association for Computational Linguistics 10, 843-856, 2022
Cited by 55
Context-free transductions with neural stacks
Y Hao, W Merrill, D Angluin, R Frank, N Amsel, A Benz, S Mendelsohn
BlackboxNLP, 2018
Cited by 34
The Parallelism Tradeoff: Limitations of Log-Precision Transformers
W Merrill, A Sabharwal
arXiv preprint arXiv:2207.00729, 2022
Cited by 22*
Effects of parameter norm growth during transformer training: Inductive bias from gradient descent
W Merrill, V Ramanujan, Y Goldberg, R Schwartz, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 22
A tale of two circuits: Grokking as competition of sparse and dense subnetworks
W Merrill, N Tsilivis, A Shukla
arXiv preprint arXiv:2303.11873, 2023
Cited by 15
End-to-end graph-based TAG parsing with neural networks
J Kasai, R Frank, P Xu, W Merrill, O Rambow
NAACL, 2018
Cited by 14
Entailment Semantics Can Be Extracted from an Ideal Language Model
W Merrill, A Warstadt, T Linzen
CoNLL 2022, 2022
Cited by 11
Formal language theory meets modern NLP
W Merrill
arXiv preprint arXiv:2102.10094, 2021
Cited by 11
On the linguistic capacity of real-time counter automata
W Merrill
arXiv preprint arXiv:2004.06866, 2020
Cited by 11
The Expressive Power of Transformers with Chain of Thought
W Merrill, A Sabharwal
arXiv preprint arXiv:2310.07923, 2023
Cited by 8
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
Cited by 7
Finding hierarchical structure in neural stacks using unsupervised parsing
W Merrill, L Khazan, N Amsel, Y Hao, S Mendelsohn, R Frank
BlackboxNLP (ACL workshop), 2019
Cited by 7*
Transformers as recognizers of formal languages: A survey on expressivity
L Strobl, W Merrill, G Weiss, D Chiang, D Angluin
arXiv preprint arXiv:2311.00208, 2023
Cited by 6