Dario Amodei
CEO and Co-Founder at Anthropic
Verified email at anthropic.com
Title · Cited by · Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in Neural Information Processing Systems 33, 1877-1901, 2020
Cited by 28430 · 2020
Language models are unsupervised multitask learners
A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever
OpenAI blog 1 (8), 9, 2019
Cited by 12202 · 2019
Deep Speech 2: End-to-end speech recognition in English and Mandarin
D Amodei, S Ananthanarayanan, R Anubhai, J Bai, E Battenberg, C Case, ...
International Conference on Machine Learning, 173-182, 2016
Cited by 3672 · 2016
A cross-platform toolkit for mass spectrometry and proteomics
MC Chambers, B Maclean, R Burke, D Amodei, DL Ruderman, ...
Nature Biotechnology 30 (10), 918-920, 2012
Cited by 3270 · 2012
Concrete problems in AI safety
D Amodei, C Olah, J Steinhardt, P Christiano, J Schulman, D Mané
arXiv preprint arXiv:1606.06565, 2016
Cited by 2717 · 2016
Deep reinforcement learning from human preferences
PF Christiano, J Leike, T Brown, M Martic, S Legg, D Amodei
Advances in Neural Information Processing Systems 30, 2017
Cited by 2466 · 2017
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPDO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 2422 · 2021
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
Cited by 1760 · 2020
Learning to summarize with human feedback
N Stiennon, L Ouyang, J Wu, D Ziegler, R Lowe, C Voss, A Radford, ...
Advances in Neural Information Processing Systems 33, 3008-3021, 2020
Cited by 1279 · 2020
Fine-tuning language models from human preferences
DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ...
arXiv preprint arXiv:1909.08593, 2019
Cited by 1044 · 2019
The malicious use of artificial intelligence: Forecasting, prevention, and mitigation
M Brundage, S Avin, J Clark, H Toner, P Eckersley, B Garfinkel, A Dafoe, ...
arXiv preprint arXiv:1802.07228, 2018
Cited by 1006 · 2018
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint arXiv:2204.05862, 2022
Cited by 941 · 2022
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 750 · 2022
AI and Compute
D Amodei, D Hernandez, G Sastry, J Clark, G Brockman, I Sutskever
Cited by 433 · 2018
Characterizing deformability and surface friction of cancer cells
S Byun, S Son, D Amodei, N Cermak, J Shaw, JH Kang, VC Hecht, ...
Proceedings of the National Academy of Sciences 110 (19), 7580-7585, 2013
Cited by 396 · 2013
Benchmarking safe exploration in deep reinforcement learning
A Ray, J Achiam, D Amodei
arXiv preprint arXiv:1910.01708 7 (1), 2, 2019
Cited by 388 · 2019
Reward learning from human preferences and demonstrations in Atari
B Ibarz, J Leike, T Pohlen, G Irving, S Legg, D Amodei
Advances in Neural Information Processing Systems 31, 2018
Cited by 368 · 2018
Building high-quality assay libraries for targeted analysis of SWATH MS data
OT Schubert, LC Gillet, BC Collins, P Navarro, G Rosenberger, WE Wolski, ...
Nature Protocols 10 (3), 426-441, 2015
Cited by 351 · 2015
Physical principles for scalable neural recording
AH Marblestone, BM Zamft, YG Maguire, MG Shapiro, TR Cybulski, ...
Frontiers in Computational Neuroscience 7, 137, 2013
Cited by 301 · 2013
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 291 · 2022
Articles 1–20