LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


Shauna Kravec

Researcher focused on AI safety, reinforcement learning, and language models, with public work spanning red teaming, adversarial robustness, and model behavior.

Researcher · 1 organization · 3 reports

Profile status: updated

Shauna Kravec portrait

Contributions are treated as untrusted leads. Public changes happen only after review against public sources.

Trust signals

Profile completeness: 62%
Public sources: 2
Official sources: 1
Country: United States
Last reviewed: Mar 13, 2026
Review outcome: Updated

Latest review note

Added verified personal homepage, GitHub profile, homepage avatar, and a concise English bio.

Public links

Website: Personal homepage
GitHub: GitHub profile

Organizations

Anthropic (core)

Reports

Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and RLHF: Constitutional AI: Harmlessness from AI Feedback
Alignment and Safety: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Official and primary sources

Official source (homepage): https://celest.ai/

Supporting sources

Supporting source (GitHub): https://github.com/shaunakravec

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.