Reasoning Models
Researchers connected to this field in the public atlas.
Junyang Lin
Alibaba Qwen
Junyang Lin (Justin Lin) is a researcher and open-source maintainer known for the Qwen family of models. His public profiles list interests in LLMs, AI agents, multimodal learning, long-horizon reasoning, world models, and reinforcement learning; multiple March 2026 news reports said he stepped down from the Qwen tech lead role.
Shyamal Anadkat
OpenAI
Engineer and product leader who worked on OpenAI's Applied AI team and now advises startups on AI products. He writes publicly about agents, retrieval, and evaluation on his personal site.
Raphael Ho
OpenAI
Research scientist at OpenAI whose public profiles describe work on generative AI and reinforcement learning, after PhD study at the University of Cambridge.
Alec Radford
OpenAI
Alec Radford is a researcher and a co-author of the GPT-4 Technical Report. His GitHub profile links to his personal website at newmu.github.io.
Ahmed H. Awadallah
Microsoft
Ahmed H. Awadallah is a Partner Research Manager at Microsoft Research. His work spans large language models, search, conversational AI, and applied machine learning.
Alex Ficek
Microsoft
Alex Ficek is a final-year PhD student at the University of Michigan and incoming applied scientist at Microsoft working on language models, agents, reinforcement learning, and infrastructure.
Ameet Talwalkar
NVIDIA
Ameet Talwalkar is an associate professor in the Machine Learning Department at Carnegie Mellon University and Chief Scientist at Datadog. His public research spans AI for science, human-AI interaction, and specialized models and agents.
Josh Batson
OpenAI
Researcher and engineer focused on AI systems, applications, alignment, and interpretability; contributed to OpenAI reasoning-model system-card work.
Sandhini Agarwal
OpenAI
Sandhini Agarwal is a researcher at OpenAI. Her OpenReview profile lists OpenAI (2020–present), following undergraduate study at Stanford University (2015–2019).
Alexander Kirillov
OpenAI
Alexander Kirillov is a researcher at OpenAI working on computer vision, deep learning, and multimodal systems. He previously held research roles at FAIR, UC Berkeley, and Carnegie Mellon University.
Guillaume Lample
Mistral AI
Chief scientist and co-founder of Mistral AI known for work on multilingual language modeling, machine translation, theorem proving, and mathematical reasoning; previously a research scientist at FAIR, after a PhD in computer science at Sorbonne University and Inria Paris.
Thomas Wolf
Mistral AI
Co-founder and Chief Science Officer of Hugging Face whose public work focuses on large language models, reasoning systems, and open-source machine learning.
Ching-Yao Chuang
NVIDIA
Ching-Yao Chuang is a research scientist at NVIDIA working on computer vision, machine learning, and generative AI.
Bobak Shahriari
OpenAI
AI researcher and engineering leader at OpenAI working on multimedia, generation, reasoning, and representation learning. His public work focuses on maximizing the benefits of AI while minimizing harms.
J. J. Anumanchipalli
OpenAI
Assistant professor at UC Berkeley whose public profiles focus on speech interfaces, speech synthesis, and language grounding.
Ali Payani
NVIDIA
Stanford AI Lab PhD researcher focused on large language models, multimodal language models, and NLP; contributed to NVIDIA multimodal model work.
Aditya Krishna
OpenAI
Machine learning researcher and Columbia PhD student working on NLP, diffusion models, reinforcement learning, and applied probability; contributed to OpenAI reasoning-model system-card work.
Dianne Penn
OpenAI
Member of technical staff at OpenAI focused on code generation and synthetic data for post-training, with interests in coding, reasoning, human data, reinforcement learning, and self-improving systems.
Jason Phang
OpenAI
Research scientist at OpenAI focused on large language models, post-training, reasoning, and evaluation; previously completed a PhD at NYU.
Yash Patil
OpenAI
Founder of REWORKd and an applied AI engineer previously at OpenAI, Scale AI, and Stanford research, with work spanning autonomous agents and LLM applications.
Alexandre Rame
Mistral AI
Alexandre Rame is a research scientist at Mistral AI working on post-training, evaluation, and reasoning for large language models.
Barry Haddow
Microsoft
Barry Haddow is a natural language processing researcher affiliated with Microsoft Research and the University of Edinburgh.
Jonathan Welsh
OpenAI
Postdoctoral researcher at Stanford Graduate School of Business studying how humans and AI learn from experience and language, and also working on AI training at OpenAI.
Amin Firooz
NVIDIA
Senior research scientist at NVIDIA focused on large language models, reinforcement learning, and inference-time scaling for AI agents. His public NVIDIA author page also notes prior work on robotic grasping, pose estimation, and language understanding.
Jacob Walsman
OpenAI
Member of technical staff at OpenAI working on reasoning, reinforcement learning, and post-training.
Jerry Tworek
OpenAI
Jerry Tworek is a researcher at OpenAI who worked on GPT-4.1. His public profile notes earlier work on image generation, robotics, reinforcement learning, and POMDPs before he joined OpenAI.
Jingren Zhou
Alibaba Qwen
Senior technology leader and researcher at Alibaba associated with Qwen. Public profiles list him with Alibaba Group, and official Alibaba Cloud coverage identifies him as a chief technology officer leading its large-model work.
Zhifeng Chen
Google Gemini / Z.ai
Distinguished software engineer at Google Brain focused on large-scale computer systems and machine learning applications.
Fei Huang
Alibaba Qwen
Researcher at Alibaba Group working on natural language processing and multimodal AI.
Chris Hallacy
OpenAI
Engineer at OpenAI working on inference infrastructure, product engineering, and prototyping for systems including GPT-4 and DALL-E 2.
Dilek Hakkani-Tur
NVIDIA
Vice President of AI Research at NVIDIA leading generative AI and conversational AI research; previously led Amazon Alexa AI and held faculty roles at the University of Illinois Urbana-Champaign and Bilkent University.
Rogerio Feris
NVIDIA
Rogerio Feris is a principal research scientist and manager at the MIT-IBM Watson AI Lab. His work focuses on multimodal AI, efficient representation learning, and large language models with long-term memory.
Rahul Gupta
Microsoft
Rahul Gupta is a Senior Applied Science Manager at Amazon Nova whose work spans speech, language, multimodal AI, and generative models.
Thomas Scialom
Microsoft
Thomas Scialom is a research scientist at Microsoft working on large language model training, post-training, safety, and evaluation. His public profile highlights reinforcement learning, scalable training, and reliable language model behavior.
Marcin Michalski
Microsoft
Marcin Michalski is a researcher at Microsoft whose public profile and Google Scholar page highlight work on small language models, reasoning, and machine learning systems.
Bryan Catanzaro
NVIDIA
Vice President of Applied Deep Learning Research at NVIDIA, leading work on conversational AI, generative AI, and accelerated deep learning software.
Jeff Wu
OpenAI
Researcher at OpenAI working on language model training and evaluation, and co-author of the GPT-4 Technical Report.
Yipeng Wang
Z.ai
Research scientist at Z.ai focused on multimodal understanding and generation, large language models, and reinforcement learning. He works on pre-training, post-training, and evaluation of multimodal models.
Zihan Jiang
Z.ai
Research scientist at Z.ai focused on multimodal understanding and generation, reinforcement learning, AI agents, and end-to-end models. He received a bachelor's degree from Tsinghua University and a master's degree from UCLA.
Johannes Heidecke
OpenAI
Head of Safety Systems at OpenAI.
Pierre Stock
Mistral AI
Research scientist at Mistral AI focused on efficient and robust machine learning, with interests including optimization, scaling laws, interpretability, and post-training.
Phil Tillet
OpenAI
OpenAI researcher and software engineer known for creating Triton, an open-source GPU programming language, and co-authoring the GPT-4 Technical Report.
Xifeng Yan
Microsoft
Xifeng Yan is a professor at UC Santa Barbara whose research focuses on data mining, machine learning, and graph analytics. His public profile highlights work on graph mining, large-scale data systems, and AI methods for structured data.
Yongqi Wang
Alibaba Qwen
Research scientist at Alibaba's Tongyi Lab whose public profile highlights work on speech processing, machine learning, and multimodal large language models.
Clement Gehring
Mistral AI
Research scientist at Mistral AI working on deep learning, language technologies, and large-scale AI systems.
Jake Cheng
OpenAI
Researcher working on reasoning and coding models, including OpenAI o3 and o4-mini.
Jianfeng Gao
Microsoft
Distinguished Scientist and Corporate Vice President at Microsoft Research whose public work spans language models, search, and reasoning; his arXiv author results include the Phi-4-Mini-Reasoning report.
Jiang Bian
Microsoft
Principal Research Manager at Microsoft Research AI4Science leading language AI and foundation model work; his arXiv author results include the Phi-4 reasoning reports.
Nils Olsson
OpenAI
Researcher working on reasoning-focused language models, including OpenAI o1.
Roman Ring
OpenAI
Machine learning researcher and writer focused on mechanistic interpretability, music generation, and image or video diffusion; contributed to OpenAI's o1 system-card work.
Shilpa Rao
Microsoft
Researcher at Microsoft working on machine learning, natural language processing, and information retrieval.
Theophile Gervet
Mistral AI
Research scientist at Mistral AI working on language models for code.
Victor Zhukov
Microsoft
Research scientist at Microsoft working on large language models and generative AI.
Yash Khandelwal
Microsoft
Research scientist at Microsoft AI working on language models, previously at FAIR and the University of Southern California.
Yuxiang Zheng
Alibaba Qwen
PhD candidate at Shanghai Jiao Tong University focused on reinforcement learning, large language models, and coding agents; coauthor of QwQ-32B.