LLMpeople
Public atlas: people first, reports as evidence, organizations as context.


Amanda Askell

Alignment researcher at Anthropic, previously at OpenAI, working to make AI systems understandable to humans and aligned with human values.

Researcher · 2 organizations · 7 reports

Profile status: updated


Trust signals

Profile completeness: 56%
Public sources: 1
Official sources: 1
Last reviewed: Mar 12, 2026
Official homepage: updated · 1 public source

Public links

Website: OpenAI profile

Organizations

OpenAI (core) · Anthropic (core)

Reports

Large Language Models: Language Models are Few-Shot Learners
Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF: Constitutional AI: Harmlessness from AI Feedback
Alignment and RLHF: Collective Constitutional AI: Aligning a Language Model with Public Input
Alignment and Safety: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Alignment and Safety: Auditing language models for hidden objectives
Alignment and Safety: Constitutional Classifiers++: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming

Official and primary sources

https://openai.com/index/amanda-askell/ Official source · homepage

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.
