Sam Shleifer
Facebook AI Research
Verified email at fb.com
Title
Cited by
Year
Transformers: State-of-the-art natural language processing
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ...
Proceedings of the 2020 conference on empirical methods in natural language …, 2020
Cited by 3889 · 2020
HuggingFace's Transformers: State-of-the-art natural language processing
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ...
arXiv preprint arXiv:1910.03771, 2019
Cited by 2587 · 2019
OPT: Open pre-trained transformer language models
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
arXiv preprint arXiv:2205.01068, 2022
Cited by 1313 · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 566 · 2022
8-bit optimizers via block-wise quantization
T Dettmers, M Lewis, S Shleifer, L Zettlemoyer
arXiv preprint arXiv:2110.02861, 2021
Cited by 96 · 2021
Pre-trained summarization distillation
S Shleifer, AM Rush
arXiv preprint arXiv:2010.13002, 2020
Cited by 87 · 2020
Few-shot learning with multilingual language models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
arXiv preprint arXiv:2112.10668, 2021
Cited by 83 · 2021
Low resource text classification with ULMFiT and backtranslation
S Shleifer
arXiv preprint arXiv:1903.09244, 2019
Cited by 63 · 2019
HuggingFace's Transformers: State-of-the-art natural language processing
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ...
arXiv preprint arXiv:1910.03771, 2019
Cited by 61 · 2019
OPT: Open pre-trained transformer language models
S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ...
URL https://arxiv.org/abs/2205.01068
Cited by 59
Efficient large scale language modeling with mixtures of experts
M Artetxe, S Bhosale, N Goyal, T Mihaylov, M Ott, S Shleifer, XV Lin, J Du, ...
arXiv preprint arXiv:2112.10684, 2021
Cited by 46 · 2021
PyTorch FSDP: experiences on scaling fully sharded data parallel
Y Zhao, A Gu, R Varma, L Luo, CC Huang, M Xu, L Wright, H Shojanazeri, ...
arXiv preprint arXiv:2304.11277, 2023
Cited by 45 · 2023
NormFormer: Improved transformer pretraining with extra normalization
S Shleifer, J Weston, M Ott
arXiv preprint arXiv:2110.09456, 2021
Cited by 39 · 2021
Using small proxy datasets to accelerate hyperparameter search
S Shleifer, E Prokop
arXiv preprint arXiv:1906.04887, 2019
Cited by 20 · 2019
Incrementally improving graph WaveNet performance on traffic prediction
S Shleifer, C McCreery, V Chitters
arXiv preprint arXiv:1912.07390, 2019
Cited by 19 · 2019
Few-shot Learning with Multilingual Generative Language Models
XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 16 · 2022
Efficient language modeling with sparse all-mlp
P Yu, M Artetxe, M Ott, S Shleifer, H Gong, V Stoyanov, X Li
arXiv preprint arXiv:2203.06850, 2022
Cited by 10 · 2022
Classification As Decoder: Trading Flexibility For Control In Neural Dialogue
S Shleifer, M Chablani, N Katariya, A Kannan, X Amatriain
arXiv preprint arXiv:1910.03476, 2019
2019
Classification as Decoder: Trading Flexibility for Control in Multi Domain Dialogue
S Shleifer, M Chablani, N Katariya, A Kannan, X Amatriain
2019