LLMpeople
Public Atlas: People first, reports as evidence, organizations as context.


Tom Henighan

Public researcher profile

Researcher · 1 organization · 3 reports

Profile status: draft


Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 6%
Public sources: 0
Official sources: 0
Country: Unknown
Last reviewed: Not reviewed yet
Review outcome: No review yet

Organizations

Anthropic (core)

Reports

- Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
- Alignment and RLHF: Constitutional AI: Harmlessness from AI Feedback
- Alignment and Safety: Constitutional Classifiers++: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.