Constitutional AI: Harmlessness from AI Feedback
Alignment and RLHF
Connected researchers
Samuel R. Bowman
Anthropic
Member of technical staff at Anthropic and associate professor of computer science, data science, and linguistics at New York University (on leave). His public homepage focuses on natural language processing, machine learning, and AI alignment.
Noemi Mercado
Anthropic
Researcher at Anthropic whose public homepage and scholarly profile connect cognitive science with AI research.
Azalia Mirhoseini
Anthropic
Research scientist at Anthropic working on machine learning systems and AI; previously at Google, where she worked on machine learning for systems, compilers, and sustainability.
Jack Clark
Anthropic / OpenAI
Co-founder and head of policy at Anthropic. He previously served as policy director at OpenAI, worked as a technology journalist, and writes the Import AI newsletter.
Shauna Kravec
Anthropic
Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.
Zac Hatfield-Dodds
Anthropic
Staff software engineer at Anthropic building systems for AI safety, reliability, and alignment.
Chris Olah
Anthropic
Research scientist known for mechanistic interpretability and deep learning visualization, including early work on feature visualization and circuits in neural networks; previously at Google Brain and OpenAI.
Robert Lasenby
Anthropic
Research scientist at Anthropic working on reasoning and geometry-aware machine learning.
Amanda Askell
Anthropic / OpenAI
Alignment researcher at Anthropic, previously at OpenAI, working on making AI understandable to and aligned with human values.
Jared D. Kaplan
Anthropic
Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.
Yuntao Bai
Anthropic
Anthropic researcher whose work includes reinforcement learning from human feedback and Constitutional AI; previously a Sherman Fairchild Postdoctoral Scholar in theoretical high-energy physics at Caltech.
Sam McCandlish
Anthropic
Independent researcher working on the theoretical foundations of AI, especially inductive biases, scaling laws, and approximate Bayesian updating. His public homepage notes prior research roles at Anthropic and OpenAI.
Jackson Kernion
Anthropic
Member of Anthropic's Interpretability team, where he works on understanding how large language models work.
Kamal Ndousse
Anthropic
Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.
Kamile Lukosuite
Anthropic
AI governance researcher at the Centre for the Governance of AI and former Anthropic resident researcher, with interests in language models, AI safety, scalable oversight, and evaluations.
Ethan Perez
Anthropic
Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.
Nicholas Schiefer
Anthropic
Member of technical staff at Anthropic and co-founder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.
Deep Ganguli
Anthropic
Research lead at Anthropic whose work spans alignment science and the societal impacts of language models.
Dario Amodei
Anthropic / OpenAI
CEO and co-founder of Anthropic. Before Anthropic, he served as vice president of research at OpenAI.
Nova DasSarma
Anthropic
Research scientist at Anthropic interested in understanding neural networks and applying that understanding to alignment.
Anna Chen
Anthropic
Researcher working on AI safety and adversarial evaluation, including Anthropic's many-shot jailbreaking research.
Saurav Kadavath
Anthropic
Research scientist at Anthropic interested in understanding and steering AI systems.
Tom Conerly
Anthropic
Software engineer at Anthropic, previously at Google, with public writing on language models, agents, and reinforcement learning.