AI and Reasoning
Eric Zelikman is an AI researcher at xAI, where he focuses on developing systems that reason, represent, and learn with human-like flexibility. His work bridges the gap between human and machine learning, aiming to build models that generalize from limited experience.
He is best known for co-authoring Quiet-STaR, a method that trains language models to generate internal rationales before producing output, yielding significant zero-shot improvements on tasks such as CommonsenseQA and GSM8K. He also developed Parsel, a framework that improves algorithmic reasoning in language models by decomposing complex tasks into natural-language function descriptions, implementing each part, and composing the results, achieving pass rates over 75% higher than prior approaches on competition-level programming problems.
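The rationale-before-answer idea can be illustrated with a minimal sketch. This is not the actual Quiet-STaR implementation, which generates "thoughts" at the token level during training; it is a hypothetical toy of the simpler STaR-style pattern it builds on: sample a rationale, answer conditioned on it, and keep rationales whose answers can be verified. The `toy_lm` stub and the prompt strings are placeholders, not real APIs.

```python
# Hypothetical sketch of the "rationale before answer" pattern (not Quiet-STaR itself).
import random
from typing import Callable


def sample_rationale_and_answer(question: str, lm: Callable[[str], str]) -> tuple[str, str]:
    """Ask the (placeholder) language model to reason step by step, then answer."""
    rationale = lm(f"Question: {question}\nLet's think step by step:")
    answer = lm(f"Question: {question}\nReasoning: {rationale}\nFinal answer:")
    return rationale, answer


def collect_verified_rationales(dataset, lm, n_samples: int = 4):
    """Keep (question, rationale, answer) triples whose answer matches the label.

    In a STaR-style loop, the kept triples would then be used to fine-tune the
    model so that rationales leading to correct answers become more likely.
    """
    kept = []
    for question, label in dataset:
        for _ in range(n_samples):
            rationale, answer = sample_rationale_and_answer(question, lm)
            if answer.strip() == label.strip():
                kept.append((question, rationale, answer))
                break
    return kept


def toy_lm(prompt: str) -> str:
    """Placeholder 'language model' stub so the sketch runs end to end."""
    if "Final answer" in prompt:
        return "4"
    return random.choice(["2 + 2 = 4, so the answer is 4.",
                          "Adding the two numbers gives 4."])


if __name__ == "__main__":
    data = [("What is 2 + 2?", "4")]
    print(collect_verified_rationales(data, toy_lm))
```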
Eric's research has been recognized with spotlight selections at top venues, including NeurIPS, ICLR, COLM, and TMLR. He holds a Ph.D. from Stanford University, where he was advised by Nick Haber and Noah Goodman.