My primary research interest is building machines that can reason. I am an author of, or core contributor to, STaR, Minerva, AlphaGeometry, Autoformalization, Memorizing Transformers, and AlphaStar.
Solving olympiad geometry without human demonstrations
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, Thang Luong.
Nature, 2024.
PDF
Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs
Albert Qiaochu Jiang*, Sean Welleck*, Jin Peng Zhou*, Timothée Lacroix, Jiacheng Liu, Wenda Li,
Mateja Jamnik, Guillaume Lample, Yuhuai Wu
The 11th International Conference on Learning Representations, 2023.
PDF
Minerva: Solving Quantitative Reasoning Problems with Language Models
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra
NeurIPS, 2022.
PDF Google AI Blog
Exploring Length Generalization in Large Language Models
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra,
Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur
NeurIPS, 2022.
PDF
Insights into Pre-training via Simpler Synthetic Tasks
Yuhuai Wu*, Felix Li*, Percy Liang
NeurIPS, 2022.
PDF
Autoformalization with Large Language Models
Yuhuai Wu*, Albert Q. Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy
NeurIPS, 2022.
PDF Interview with NewScientist
STaR: Bootstrapping Reasoning With Reasoning
Eric Zelikman*, Yuhuai Wu*, Noah D. Goodman
arXiv, 2022.
PDF
Memorizing Transformers
Yuhuai Wu, Markus Rabe, DeLesley Hutchins, Christian Szegedy
The 10th International Conference on Learning Representations, 2022.
PDF #4 on Hacker News
Grandmaster Level in StarCraft II using Multi-agent Reinforcement Learning
Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, et al.
Nature, 2019.
PDF