LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


Liane Lovitt

Research scientist at Anthropic whose public work includes AI alignment, reinforcement learning from human feedback, and model behavior.

Researcher · 1 organization · 2 reports

Profile status: Updated


Trust signals

Profile completeness: 100%
Public sources: 4
Official sources: 1
Country: Unknown
Last reviewed: Mar 13, 2026
Review outcome: Updated
Official homepage · Structured work · Structured education
AI Alignment · RLHF · Model Behavior

Latest review note

Cleanup improvement: replaced the unresolved row with a verified OpenReview profile, LinkedIn profile, and public education/work history from OpenReview.

Education

Stanford University · B.S. with honors in Computer Science · 2010 → 2014
University of Oxford · M.Sc. in Computer Science · 2014 → 2015

Work

Anthropic · Research Scientist · 2021-01-01 →

Public links

Website: Oxford Internet Institute profile
LinkedIn: LinkedIn profile
News: Anthropic article on collective constitutional AI
Other: OpenReview profile

Organizations

Anthropic (core)

Reports

Alignment and RLHF: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Alignment and Safety: Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming

Official and primary sources

https://www.oii.ox.ac.uk/people/profiles/liane-lovitt/ · Official source · homepage

Supporting sources

https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input/ · Supporting source · news
https://openreview.net/profile?id=~Liane_Lovitt1 · Supporting source · other
https://www.linkedin.com/in/lianelovitt/ · Supporting source · social

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.