Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning
Public report from ByteDance Seed with 133 connected researchers in the LLMpeople atlas.
Connected researchers
Zihan Wang
ByteDance Seed / DeepSeek
Zihan Wang is a Northwestern University PhD candidate advised by Manling Li; his research focuses on agentic reinforcement learning, model efficiency, and long-context understanding. His official homepage lists prior internships at DeepSeek, Microsoft, and Yutori.
Hang Zhu
ByteDance Seed
Hang Zhu is a Research Scientist at ByteDance Seed focused on LLM infrastructure, including large-scale pre-training and post-training systems.
Renjie Zheng
ByteDance Seed
Renjie Zheng is a researcher at ByteDance. His public OpenReview profile lists prior research experience at Baidu Research and earlier study at Oregon State University and Tongji University, with published work spanning NLP, language models, and reasoning.
Yuwen Xiong
ByteDance Seed
Yuwen Xiong is a Research Scientist at ByteDance Seed in the Bay Area. He received a Ph.D. from the University of Toronto's Machine Learning Group and previously worked at Waabi and Uber ATG.
Xiangpeng Wei
ByteDance Seed
Xiangpeng Wei is an algorithm engineer at ByteDance Seed whose public research profile spans large language models, multimodal applications, and multilingual NLP.
Yong Shan
ByteDance Seed
Yong Shan is a ByteDance researcher whose public profiles and publication record indicate work across LLMs, neural machine translation, dialogue systems, and music generation.
Shengding Hu
ByteDance Seed
Shengding Hu is a final-year PhD student in the Department of Computer Science and Technology at Tsinghua University. His public homepage describes research on scalable pretraining, reinforcement learning, world models, large language models, and embodied agents.
Jianhui Duan
ByteDance Seed
Jianhui Duan is an algorithm researcher on the ByteDance Seed LLM team whose public homepage highlights work on pretraining data, training optimization, and distribution-shift mitigation for large language models.
Xingyan Bin
ByteDance Seed
Public profiles and publication indexes link Xingyan Bin to ByteDance research and Tsinghua University, with papers on recommendation, retrieval, MoE models, and LLM pre-training and quantization.
Zhiqi Lin
ByteDance Seed
Zhiqi Lin is listed on OpenReview as a researcher at ByteDance Inc. The same profile lists computer science study at the University of Science and Technology of China: undergraduate from 2015 to 2019 and PhD from 2019 to 2024.
Guanghan Ning
ByteDance Seed
Guanghan Ning is a ByteDance researcher whose public homepage says he switched to foundation models, especially code LLMs, at the beginning of 2023, after earlier work in computer vision and deep learning.
Qi Liu
ByteDance Seed
Qi Liu is a ByteDance researcher whose public OpenReview profile lists prior research work at Horizon Robotics and study at Fudan University and Huazhong University of Science and Technology. His listed expertise areas include multimodal large language models, gesture recognition, metric learning, and deep learning.
Qiyang Min
ByteDance Seed
Qiyang Min is a researcher at ByteDance Inc. whose public profiles and publication record indicate work on large language models, memory-augmented architectures, and related model systems. OpenReview lists prior research experience at Baidu and undergraduate study in software engineering at Nanjing University.
Shihan Dou
ByteDance Seed
Public profiles identify Shihan Dou as a PhD student at Fudan University. His publication record covers LLM alignment and reward or preference modeling, with additional work on code intelligence and document parsing.
Chenwei Lou
ByteDance Seed
Chenwei Lou is a researcher at ByteDance Seed. Public profiles indicate prior research experience at Tencent, master's study at Harbin Institute of Technology, and undergraduate study at Jilin University.
Jiangjie Chen
ByteDance Seed
Jiangjie Chen is a researcher at ByteDance Seed. He earned a Ph.D. in computer science from Fudan University in 2024 and works on reasoning models, autonomous agents, and machine reasoning.
Qiying Yu
ByteDance Seed
Qiying Yu is a PhD student at the Institute for AI Industry Research (AIR), Tsinghua University, working on self-supervised learning and multimodal large models.
Yongfei Liu
ByteDance Seed
Yongfei Liu is a researcher whose public profiles list interests in CodeLLM, generative multimodality, and vision-language research. His homepage says he completed a joint PhD program at the University of Chinese Academy of Sciences and ShanghaiTech University in 2022 after a bachelor's degree from Xidian University in 2017.
Ziheng Jiang
ByteDance Seed
Ziheng Jiang is an AI researcher working on large language models and machine learning systems. His public homepage says he is a Principal AI Researcher at Meta and previously was a Principal Research Scientist at ByteDance.
Haobin Jiang
ByteDance Seed
Haobin Jiang is a Peking University PhD student whose public profiles describe research in reinforcement learning, robotics, and LLM post-training.
Qingping Yang
ByteDance Seed
Qingping Yang (杨清平) is a researcher whose public homepage highlights work on LLMs, especially code generation, table understanding, and structured information extraction. Public sources also connect him to earlier table-understanding publications and recent work on reinforcement learning for LLMs.
Siyu Yuan
ByteDance Seed
Siyu Yuan is a final-year Ph.D. student at Fudan University advised by Deqing Yang and Yanghua Xiao, working on reasoning models and autonomous agents.
Weinan Dai
ByteDance Seed
Weinan Dai is a PhD student in Computer Science and Technology at Tsinghua University. Public profiles and publications link him to research on large language models, reasoning, and reinforcement learning, including Seed1.5-Thinking, DAPO, MemAgent, and Enigmata.
Zewei Sun
ByteDance Seed
Public sources associate Zewei Sun with ByteDance and list Nanjing University education (BS 2013-2017, MS 2017-2020), with research spanning machine translation and, more recently, large language models.