William Merrill
Title · Cited by · Year
CORD-19: The COVID-19 open research dataset
LL Wang, K Lo, Y Chandrasekhar, R Reas, J Yang, D Eide, K Funk, ...
Workshop on NLP for COVID-19, 2020
Cited by 1026* · 2020
How language model hallucinations can snowball
M Zhang, O Press, W Merrill, A Liu, NA Smith
arXiv preprint arXiv:2305.13534, 2023
Cited by 228 · 2023
ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension
S Subramanian, W Merrill, T Darrell, M Gardner, S Singh, A Rohrbach
Empirical Methods in Natural Language Processing, 2022
Cited by 111 · 2022
Competency problems: On finding and removing artifacts in language data
M Gardner, W Merrill, J Dodge, ME Peters, A Ross, S Singh, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 95 · 2021
Saturated transformers are constant-depth threshold circuits
W Merrill, A Sabharwal, NA Smith
Transactions of the Association for Computational Linguistics 10, 843-856, 2022
Cited by 88 · 2022
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
Cited by 87 · 2024
A formal hierarchy of RNN architectures
W Merrill, G Weiss, Y Goldberg, R Schwartz, NA Smith, E Yahav
Association for Computational Linguistics, 2020
Cited by 84 · 2020
Sequential neural networks as automata
W Merrill
Deep Learning and Formal Languages (ACL workshop), 2019
Cited by 78 · 2019
Provable limitations of acquiring meaning from ungrounded form: What will future language models understand?
W Merrill, Y Goldberg, R Schwartz, NA Smith
Transactions of the Association for Computational Linguistics 9, 1047-1060, 2021
Cited by 73 · 2021
The Parallelism Tradeoff: Limitations of Log-Precision Transformers
W Merrill, A Sabharwal
Transactions of the Association for Computational Linguistics, 2022
Cited by 61 · 2022
The Expressive Power of Transformers with Chain of Thought
W Merrill, A Sabharwal
ICLR 2024, 2023
Cited by 60 · 2023
Context-free transductions with neural stacks
Y Hao, W Merrill, D Angluin, R Frank, N Amsel, A Benz, S Mendelsohn
BlackboxNLP, 2018
Cited by 45 · 2018
What formal languages can transformers express? a survey
L Strobl, W Merrill, G Weiss, D Chiang, D Angluin
Transactions of the Association for Computational Linguistics 12, 543-561, 2024
Cited by 42* · 2024
A tale of two circuits: Grokking as competition of sparse and dense subnetworks
W Merrill, N Tsilivis, A Shukla
arXiv preprint arXiv:2303.11873, 2023
Cited by 32 · 2023
Effects of parameter norm growth during transformer training: Inductive bias from gradient descent
W Merrill, V Ramanujan, Y Goldberg, R Schwartz, N Smith
Empirical Methods in Natural Language Processing, 2021
Cited by 32 · 2021
Let's Think Dot by Dot: Hidden Computation in Transformer Language Models
J Pfau, W Merrill, SR Bowman
arXiv preprint arXiv:2404.15758, 2024
Cited by 28 · 2024
A Logic for Expressing Log-Precision Transformers
W Merrill, A Sabharwal
NeurIPS 2023, 2022
Cited by 27* · 2022
The illusion of state in state-space models
W Merrill, J Petty, A Sabharwal
arXiv preprint arXiv:2404.08819, 2024
Cited by 23 · 2024
End-to-end graph-based TAG parsing with neural networks
J Kasai, R Frank, P Xu, W Merrill, O Rambow
NAACL, 2018
Cited by 16 · 2018
Entailment Semantics Can Be Extracted from an Ideal Language Model
W Merrill, A Warstadt, T Linzen
CoNLL 2022, 2022
Cited by 14 · 2022