LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


Alignment faking in large language models

Anthropic · Undated · 20 researchers

Field: Alignment and Safety
Organization: Anthropic
arXiv: 2412.14093
Canonical link: https://arxiv.org/abs/2412.14093

Connected researchers

Samuel Marks · Anthropic · Unknown · 6 reports

Senior research engineer at Anthropic interested in agent foundations, model organisms of misalignment, and human-computer interaction.

Samuel R. Bowman · Anthropic · United States · 5 reports

Member of technical staff at Anthropic and associate professor of computer science, data science, and linguistics at New York University (on leave). His public homepage focuses on natural language processing, machine learning, and AI alignment.

David Duvenaud · Anthropic · Canada · 4 reports

Associate Professor at the University of Toronto whose research spans deep learning, probabilistic modeling, and machine learning methods for science and AI safety.

Linda Petrini · Anthropic · Unknown · 1 report

Research scientist at Anthropic focused on safety and robustness for language models and reinforcement learning.

Jared D. Kaplan · Anthropic · Unknown · 6 reports

Anthropic co-founder and Chief Science Officer. Formerly a physicist at Johns Hopkins, he helped develop scaling laws for neural language models and works on the science and safety of large AI systems.

Sören Mindermann · Anthropic · Unknown · 3 reports

Research scientist at Anthropic working on machine learning and AI safety.

Jack Chen · Anthropic · Unknown · 1 report

Researcher at Anthropic with interests in machine learning, AI alignment, and economics.

Ethan Perez · Anthropic · Unknown · 8 reports

Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.

Buck Shlegeris · Anthropic · Unknown · 3 reports

Member of Technical Staff at Anthropic whose public homepage focuses on AI safety, model evaluations, and alignment.

Carson Denison · Anthropic · Unknown · 2 reports

Member of Technical Staff at Anthropic and PhD student at Carnegie Mellon University focused on AI safety, evaluations, and oversight of large language models.

Monte MacDiarmid · Anthropic · Unknown · 2 reports

Member of technical staff at Anthropic working on alignment science and the evaluation of hidden objectives in language models.

Johannes Treutlein · Anthropic · Unknown · 1 report

Member of Technical Staff at Anthropic and researcher in neural circuits and mechanistic interpretability, building tools for understanding AI systems.

Evan Hubinger · Anthropic · Unknown · 2 reports

Profile still being enriched.

Ryan Greenblatt · Anthropic · Unknown · 2 reports

Profile still being enriched.

Akbir Khan · Anthropic · Unknown · 1 report

Profile still being enriched.

Benjamin Wright · Anthropic · Unknown · 1 report

Profile still being enriched.

Fabien Roger · Anthropic · Unknown · 1 report

Profile still being enriched.

Jonathan Uesato · Anthropic · Unknown · 1 report

Profile still being enriched.

Julian Michael · Anthropic · Unknown · 1 report

Profile still being enriched.

Tim Belonax · Anthropic · Unknown · 1 report

Profile still being enriched.


LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.