LLMpeople
A public atlas: people first, reports as evidence, organizations as context.


Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Alignment and Safety report from Anthropic with 28 connected researchers in the LLMpeople atlas.

Organization: Anthropic
Field: Alignment and Safety
Date: Undated
Connected researchers: 28
arXiv: 2401.05566

Canonical link: https://arxiv.org/abs/2401.05566

Connected researchers

Samuel R. Bowman
Anthropic · United States · 5 reports

Member of technical staff at Anthropic and associate professor of computer science, data science, and linguistics at New York University (on leave). His public homepage focuses on natural language processing, machine learning, and AI alignment.

Newton Cheng
Anthropic · 1 report

Anthropic researcher on the Frontier Red Team focused on cyber misuse evaluation and threat modeling; previously a physics PhD student at UC Berkeley and now also a mentor in the MATS program.

Jack Clark
Anthropic · OpenAI · 7 reports

Co-founder and head of policy at Anthropic. He previously served as policy director at OpenAI, worked as a technology journalist, and writes the Import AI newsletter.

David Duvenaud
Anthropic · Canada · 4 reports

Associate Professor at the University of Toronto whose research spans deep learning, probabilistic modeling, and machine learning methods for science and AI safety.

Shauna Kravec
Anthropic · United States · 3 reports

Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.

Jesse Mu
Anthropic · 1 report

Research scientist at Anthropic and visiting researcher at Stanford University whose work spans machine learning, AI safety, reinforcement learning, and deep learning theory.

Roger Grosse
Anthropic · 1 report

Associate Professor of Computer Science at the University of Toronto and director of the machine learning group, with research spanning probabilistic models and optimization algorithms.

Amanda Askell
Anthropic · OpenAI · 7 reports

Alignment researcher at Anthropic, previously at OpenAI, working on making AI understandable to and aligned with human values.

Jared D. Kaplan
Anthropic · 6 reports

Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.

Yuntao Bai
Anthropic · 4 reports

Anthropic researcher whose work includes reinforcement learning from human feedback and Constitutional AI; previously a Sherman Fairchild Postdoctoral Scholar in theoretical high-energy physics at Caltech.

Kamal Ndousse
Anthropic · 5 reports

Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.

Sören Mindermann
Anthropic · 3 reports

Research scientist at Anthropic working on machine learning and AI safety.

Kshitij Sachan
Anthropic · 1 report

Research scientist at Anthropic whose public homepage and Google Scholar profile highlight work on language models, reasoning, code generation, and machine learning systems.

Michael Sellitto
Anthropic · 1 report

Research scientist at Anthropic working on trustworthy AI and deceptive alignment.

Mrinank Sharma
Anthropic · 1 report

AI safety researcher who led Anthropic's Safeguards Research Team and worked on jailbreak robustness, automated red teaming, and monitoring for misuse and misalignment.

Zachary Witten
Anthropic · 1 report

Member of technical staff at Anthropic.

Ethan Perez
Anthropic · 8 reports

Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.

Nicholas Schiefer
Anthropic · 8 reports

Member of technical staff at Anthropic and co-founder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.

Deep Ganguli
Anthropic · 6 reports

Co-founder and head of alignment science at Anthropic.

Nova DasSarma
Anthropic · 5 reports

Research scientist at Anthropic interested in understanding neural networks and applying that understanding to alignment.

Buck Shlegeris
Anthropic · 3 reports

Member of technical staff at Anthropic whose public homepage focuses on AI safety, model evaluations, and alignment.

Carson Denison
Anthropic · 2 reports

Member of technical staff at Anthropic and PhD student at Carnegie Mellon University focused on AI safety, evaluations, and oversight of large language models.

Monte MacDiarmid
Anthropic · 2 reports

Member of technical staff at Anthropic working on alignment science and the evaluation of hidden objectives in language models.

Adam Jermyn
Anthropic · 1 report

Research scientist at Anthropic and former professor of theoretical astrophysics at Stony Brook University.

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.
