LLMpeople
Public atlas: people first, reports as evidence, organizations as context.


Tracing the thoughts of a large language model

Interpretability report from Anthropic with 14 connected researchers in the LLMpeople atlas.

Anthropic · Undated · 14 researchers

Field: Interpretability
Organization: Anthropic
arXiv: 2503.21435

Canonical link

https://arxiv.org/abs/2503.21435

Connected researchers

Samuel Marks
Anthropic
Researcher · 6 reports

Senior research engineer at Anthropic interested in agent foundations, model organisms of misalignment, and human-computer interaction.
David Duvenaud
Anthropic · Canada
Researcher · 4 reports

Associate Professor at the University of Toronto whose research spans deep learning, probabilistic modeling, and machine learning methods for science and AI safety.
Nora Belrose
Anthropic
Researcher · 2 reports

AI researcher whose work studies neural language models, latent structure, and cognition. She has contributed to Anthropic research on tracing and interpreting reasoning in large language models.
David Bau
Anthropic · United States
Researcher · 3 reports

Research scientist at Anthropic and assistant professor of computer science at Northeastern University working on interpretability and model understanding.
Josh Batson
Anthropic
Researcher · 2 reports

Member of technical staff at Anthropic interested in understanding deep learning and AI safety; previously a research scientist at OpenAI.
Ethan Perez
Anthropic
Researcher · 8 reports

Research scientist at Anthropic focused on scalable oversight, AI safety, and language model evaluation; previously worked at New York University and Google.
Nicholas Schiefer
Anthropic
Researcher · 8 reports

Member of Technical Staff at Anthropic and cofounder of Oulipo Labs, working on language model safety, evaluations, and scientific forecasting.
Deep Ganguli
Anthropic
Researcher · 6 reports

Co-founder and head of alignment science at Anthropic.
Alex Tamkin
Anthropic
Researcher · 3 reports

Member of technical staff at Anthropic whose work focuses on language models, model understanding, and alignment.
Buck Shlegeris
Anthropic
Researcher · 3 reports

Member of Technical Staff at Anthropic whose public work focuses on AI safety, model evaluations, and alignment.
Jared Kaplan
Anthropic
Researcher · 2 reports

Researcher at Anthropic known for work on scaling laws and large language models.
Alex Turner
Anthropic
Researcher · 1 report

Alignment science researcher at Anthropic focused on AI safety.
Murray Shanahan
Anthropic · United Kingdom
Researcher · 1 report

Emeritus Professor of Cognitive Robotics at Imperial College London whose public work focuses on artificial intelligence, robotics, and consciousness.
Pieter Abbeel
Anthropic
Researcher · 1 report

Computer scientist and robotics researcher whose public work focuses on reinforcement learning, imitation learning, and large-scale AI systems.

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.

Privacy · Terms