Xuezhi Wang
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Cited by
Chain of thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, E Chi, Q Le, D Zhou
Neural Information Processing Systems (NeurIPS), 2022
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research (JMLR), 2023
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
JMLR, 2024
Self-consistency improves chain of thought reasoning in language models
X Wang, J Wei, D Schuurmans, Q Le, E Chi, S Narang, A Chowdhery, ...
ICLR, 2023
PaLM 2 Technical Report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Least-to-most prompting enables complex reasoning in large language models
D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, ...
ICLR, 2023
Underspecification presents challenges for credibility in modern machine learning
A D'Amour, K Heller, D Moldovan, B Adlam, B Alipanahi, A Beutel, ...
Journal of Machine Learning Research 23 (226), 1-61, 2022
Fairness without demographics through adversarially reweighted learning
P Lahoti, A Beutel, J Chen, K Lee, F Prost, N Thain, X Wang, EH Chi
34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Large language models as optimizers
C Yang, X Wang, Y Lu, H Liu, QV Le, D Zhou, X Chen
ICLR, 2024
ToTTo: A Controlled Table-To-Text Generation Dataset
AP Parikh, X Wang, S Gehrmann, M Faruqui, B Dhingra, D Yang, D Das
EMNLP, 2020
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
EMNLP, 2023
ESCAPES: Evacuation simulation with children, authorities, parents, emotions, and social comparison
J Tsai, N Fridman, E Bowring, M Brown, S Epstein, GA Kaminka, ...
AAMAS 2011, 457-464
Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR, 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Language models are multilingual chain-of-thought reasoners
F Shi, M Suzgun, M Freitag, X Wang, S Srivats, S Vosoughi, HW Chung, ...
ICLR, 2023
Measuring and reducing gendered correlations in pre-trained models
K Webster, X Wang, I Tenney, A Beutel, E Pitler, E Pavlick, J Chen, E Chi, ...
arXiv preprint arXiv:2010.06032, 2020
ADANA: Active name disambiguation
X Wang, J Tang, H Cheng, PS Yu
2011 IEEE 11th International Conference on Data Mining, 794-803, 2011
Measure and Improve Robustness in NLP Models: A Survey
X Wang, H Wang, D Yang
NAACL, 2022
Flexible transfer learning under support and model shift
X Wang, J Schneider
Advances in Neural Information Processing Systems 27, 2014