LLMpeople
Public atlas: people first, reports as evidence, organizations as context.


Sam McCandlish

Independent researcher working on the theoretical foundations of AI, especially inductive biases, scaling laws, and approximate Bayesian updating. His public homepage notes prior research roles at Anthropic and OpenAI.

Researcher · 1 organization · 3 reports

Profile status: updated


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 55%
Public sources: 3
Official sources: 2
Country: Unknown
Last reviewed: Mar 13, 2026
Review outcome: Updated

Latest review note

Added personal homepage, Google Scholar, X profile, and a bio from his public research page.

Public links

Website: Personal homepage
Google Scholar: Google Scholar profile
X: X profile

Organizations

Anthropic (core)

Reports

Alignment and RLHF · Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF · Constitutional AI: Harmlessness from AI Feedback
Alignment and RLHF · Collective Constitutional AI: Aligning a Language Model with Public Input

Official and primary sources

https://sam.dance/ · Official source · homepage
https://scholar.google.com/citations?user=uMNVV0gAAAAJ · Official source · scholar

Supporting sources

https://x.com/mccandlish · Supporting source · social

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.