My primary research interest is building machines that can reason.
I have chosen mathematics as a starting point to study reasoning, with the aim of creating an automated mathematician.
A New York Times story covering our work.
Our work is featured in Quanta Magazine!
Releasing Draft, Sketch, and Prove: autoformalizing entire natural language proofs [arXiv]!
I gave a talk on autoformalization at FLAIM conference.
I gave a guest lecture on autoformalization in UIUC's proof automation class.
8 papers accepted to NeurIPS 2022.
Our length generalization paper was accepted as an Oral Presentation at NeurIPS 2022.
We are organizing the second MATHAI workshop at NeurIPS 2022.
I gave a talk at AITP 2022.
Sharing a systematic study on synthetic pre-training [arXiv]. Understanding pre-training via synthetic tasks!
I gave a talk at the University of Cambridge [Link].
I gave a talk at the UC Berkeley Center for Human-Compatible AI (CHAI).
I gave a talk at Covariant.ai.
We released Thor [arXiv]. Integrating symbolic tools into neural theorem provers for premise selection!
We released STaR [arXiv]. Bootstrapping Reasoning with Reasoning!
We released Block-Recurrent Transformer [arXiv]. Recurrence is coming back!
Gave a talk at the University of Oxford.
Gave a talk at Harvard University.
Memorizing Transformers accepted as a spotlight presentation at ICLR 2022.
Three papers accepted to ICLR 2022.
Subgoal search algorithm accepted to NeurIPS 2021.
Co-organized the MATHAI4ED workshop at NeurIPS 2021: Math AI for education: Bridging the gap between research and smart education.
Led the Reasoning section of the Foundation Models white paper.
Two posters in ICML 2021.
Two posters in ICLR 2021.
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
Albert Qiaochu Jiang*, Sean Welleck*, Jin Peng Zhou*, Timothee Lacroix, Jiacheng Liu, Wenda Li,
Mateja Jamnik, Guillaume Lample, Yuhuai Wu
The 11th International Conference on Learning Representations, 2023. PDF
Minerva: Solving Quantitative Reasoning Problems with Language Models
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra
NeurIPS, 2022. PDF | Google AI Blog
Exploring Length Generalization in Large Language Models
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra,
Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur
The 36th Conference on Neural Information Processing Systems, 2022. PDF
STaR: Bootstrapping Reasoning With Reasoning
Eric Zelikman*, Yuhuai Wu*, Noah D. Goodman
The 36th Conference on Neural Information Processing Systems, 2022.
Subgoal Search For Complex Reasoning Tasks.
Konrad Czechowski, Tomasz Odrzygozdz, Marek Zbysinski, Michal Zawalski,
Krzysztof Olejnik, Yuhuai Wu, Lukasz Kucinski, Piotr Milos
The 35th Conference on Neural Information Processing Systems, 2021. PDF
Modelling High-Level Mathematical Reasoning in Mechanised Declarative Proofs.
Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson
The 9th International Conference on Learning Representations, 2021. PDF
On the Opportunities and Risks of Foundation Models.
Rishi Bommasani, Drew A. Hudson, Percy Liang, et al.
Options as Responses: Grounding Behavioural Hierarchies in Multi-agent Reinforcement Learning.
Yuhuai Wu*, Alexander Sasha Vezhnevets*, Maria Eckstein, Remi Leblond, Joel Z. Leibo.
The 37th International Conference on Machine Learning, 2020. PDF
Grandmaster Level in StarCraft II using Multi-agent Reinforcement Learning.
Vinyals, O., Babuschkin, I., Czarnecki, W.M. et al.
Nature, 2019.