LLMpeople


Kamal Ndousse

Researcher at Anthropic working on alignment, reasoning, and evaluation for large language models.

Researcher · 1 organization · 5 reports

Profile status: updated


Trust signals

Profile completeness: 49%
Public sources: 2
Official sources: 2
Last reviewed: Mar 13, 2026

Public links

Website: Personal homepage
Google Scholar: Google Scholar profile

Organizations

Anthropic (core)

Reports

Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF: Constitutional AI: Harmlessness from AI Feedback
Alignment and RLHF: Collective Constitutional AI: Aligning a Language Model with Public Input
Alignment and Safety: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Alignment and Safety: Constitutional Classifiers++: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming

Official and primary sources

https://www.kamalndousse.com/ (Official source · homepage)
https://scholar.google.com/citations?user=rSKiI6UAAAAJ&hl=en (Official source · scholar)

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.

Privacy · Terms