Current frame
Co-author on ByteDance Seed large-language-model reasoning and reinforcement-learning research.
Public evidence identifies Chao Xin as an author of ByteDance Seed papers, including the Seed1.5-Thinking report and the OpenReview paper "PAG" on multi-turn reinforcement learning for LLM self-correction.
Profile status: updated
Chao Xin is publicly listed as an author on ByteDance Seed's 2025 report "Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning" and on the 2025 OpenReview paper "PAG: Multi-Turn Reinforced LLM Self-Correction with Policy as Generative Verifier." These sources support a conservative profile centered on large language models, reasoning, self-correction, and reinforcement learning. No reliable public homepage, team biography, education history, or standalone profile was found within the search budget for exact sources, so biographical claims are intentionally minimal.