Researcher publishing on large language model systems, preference optimization, and vision-language learning.
Shipeng Yan is a co-author on ByteDance Seed's Seed1.5-Thinking and Seed1.5-VL technical reports. DBLP records additional publications on large-scale LLM training, self-rewarding preference optimization, and continual vision-language pretraining.
Public sources link Shipeng Yan to ByteDance Seed's 2025 Seed1.5 technical reports and to earlier machine learning publications indexed by DBLP, including "MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs" (NSDI 2024), "Just say what you want: only-prompting self-rewarding online preference optimization" (2024), and "Generative Negative Text Replay for Continual Vision-Language Pretraining" (ECCV 2022).