Atlas / Fields / Detail
Alignment and RLHF
Researchers connected to this field in the public atlas.
Liane Lovitt
Anthropic
Research scientist at Anthropic whose public work includes AI alignment, reinforcement learning from human feedback, and model behavior.
Samuel R. Bowman
Anthropic
Member of technical staff at Anthropic and associate professor (on leave) of computer science, data science, and linguistics at New York University. His public homepage focuses on natural language processing, machine learning, and AI alignment.
Noemi Mercado
Anthropic
Researcher at Anthropic whose public homepage and scholarly profile connect cognitive science research with AI.
Azalia Mirhoseini
Anthropic
Research scientist at Anthropic working on machine learning systems and AI; previously worked on machine learning systems, compilers, and sustainability at Google.
Jack Clark
Anthropic / OpenAI
Co-founder and head of policy at Anthropic. He previously served as policy director at OpenAI, worked as a technology journalist, and writes the Import AI newsletter.
Shauna Kravec
Anthropic
Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.
Zac Hatfield-Dodds
Anthropic
Staff software engineer at Anthropic building systems for AI safety, reliability, and alignment.
Andy Jones
Anthropic
Anthropic researcher working on machine learning and AI-assisted science; previously built tools for learning from text, images, and tabular data.
Chris Olah
Anthropic
Research scientist at Anthropic known for mechanistic interpretability work, including early research on feature visualization and circuits in neural networks; previously at Google Brain and OpenAI.
Robert Lasenby
Anthropic
Research scientist at Anthropic working on reasoning and geometry-aware machine learning.
Amanda Askell
Anthropic / OpenAI
Alignment researcher at Anthropic, previously at OpenAI, working on making AI understandable to and aligned with human values.
Jared D. Kaplan
Anthropic
Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.
Yuntao Bai
Anthropic
Anthropic researcher whose work includes reinforcement learning from human feedback and Constitutional AI; previously a Sherman Fairchild Postdoctoral Scholar in theoretical high-energy physics at Caltech.
Sam McCandlish
Anthropic
Independent researcher working on the theoretical foundations of AI, especially inductive biases, scaling laws, and approximate Bayesian updating. His public homepage notes prior research roles at Anthropic and OpenAI.
Jackson Kernion
Anthropic
Member of Anthropic's Interpretability team, where he works on understanding how large language models work.
Kamal Ndousse
Anthropic
Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.
Catherine Olsson
Anthropic
AI alignment researcher and writer whose public website and Anthropic author page describe work on AI safety, interpretability, and building helpful, harmless assistants.
Kamilė Lukošiūtė
Anthropic
AI governance researcher at the Centre for the Governance of AI and former Anthropic resident researcher, with interests in language models, AI safety, scalable oversight, and evaluations.
Ethan Perez
Anthropic
Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.
Nicholas Schiefer
Anthropic
Member of technical staff at Anthropic and co-founder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.
Deep Ganguli
Anthropic
Co-founder and head of alignment science at Anthropic.
Dario Amodei
Anthropic / OpenAI
CEO and co-founder of Anthropic. Before Anthropic, he served as vice president of research at OpenAI.
Nova DasSarma
Anthropic
Research scientist at Anthropic interested in understanding neural networks and applying that understanding to alignment.
Anna Chen
Anthropic
Researcher working on AI safety and adversarial evaluation, including Anthropic's many-shot jailbreaking research.
Saurav Kadavath
Anthropic
Research scientist at Anthropic interested in understanding and steering AI systems.
Tom Conerly
Anthropic
Software engineer at Anthropic, previously at Google, with public writing on language models, agents, and reinforcement learning.
Ben Mann
Anthropic
Researcher interested in neural networks and their potential to achieve general intelligence. His public homepage notes roles as a co-founder of Anthropic, researcher at OpenAI, and member of the startup team at Stripe.
Nicholas Joseph
Anthropic
Researcher at Anthropic working on the alignment and evaluation of advanced AI systems.
Tom Brown
Anthropic
Research scientist at Anthropic working on model behavior and interpretability.
Scott Johnston
Anthropic
Software engineer at Anthropic working on infrastructure, tooling, model behavior, and multimodal systems.
Stanislav Fort
Anthropic
Member of technical staff at Anthropic whose work focuses on understanding, evaluating, and improving large language models, with emphasis on reasoning, safety, and generalization.
Tristan Hume
Anthropic
Member of technical staff at Anthropic working on AI systems and alignment, with published work on RLHF and constitutional methods for harmless assistants.
Anna Goldie
Anthropic
Research scientist working on scalable systems and machine learning; her public homepage notes previous work at Anthropic and current work on Gemini at Google DeepMind.
Carroll Wainwright
Anthropic
Research scientist at Anthropic focused on alignment, reasoning, agents, and complex systems.
Danny Hernandez
Anthropic
Anthropic researcher working on the economics of AI and scaling laws.
Herbie Bradley
Anthropic
Computer scientist and machine learning researcher with public work spanning AI systems and alignment-related research.
Jamie Kerr
Anthropic
Researcher working on AI safety and alignment, including Constitutional AI.
Sam Ringer
Anthropic
Member of technical staff at Anthropic working on large language model training, evaluation, and interpretability.
Sheer El Showk
Anthropic
Research scientist at Anthropic working on machine learning, causality, and computational biology.