DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
Code Language Models
Connected researchers
Runxin Xu
DeepSeek
Researcher at DeepSeek whose public homepage describes work on DeepSeek-R1, DeepSeek V1/V2/V3, DeepSeekMath, DeepSeek-Coder, and mixture-of-experts systems.
Daya Guo
DeepSeek / Moonshot AI
DeepSeek researcher focused on NLP, code intelligence, and LLM reasoning, with public work spanning DeepSeek-Coder, DeepSeekMath, DeepSeek-V2, DeepSeek-V3, and DeepSeek-R1.
Zhenkai Zhu
DeepSeek
Research intern at DeepSeek and machine learning researcher working on efficient large language models, reinforcement learning, and AI coding systems.
Qihao Zhu
DeepSeek
Research scientist focused on foundation models and multimodal large language models; his homepage notes earlier work at DeepSeek AI and current research at the University of Southern California.
Dejian Yang
DeepSeek
DeepSeek team member and co-author of the DeepSeek-V3, DeepSeek-V2, and DeepSeek LLM technical reports.
Y. Wu
DeepSeek
Researcher at DeepSeek AI and head of its LLM Alignment Team. His public homepage highlights work on reinforcement learning and alignment for the DeepSeek model family, including DeepSeek-V3, DeepSeek-R1, and DeepSeekMath, and notes prior work at Microsoft Research Asia.
Junxiao Song
DeepSeek
Member of Technical Staff at DeepSeek.
Haowei Zhang
DeepSeek
Research scientist at DeepSeek with public GitHub work on language models and AI systems.
Peiyi Wang
DeepSeek
Research scientist at DeepSeek with public GitHub projects on AI systems.
Ruoyu Zhang
DeepSeek
Researcher affiliated with DeepSeek-AI and co-author of the Nature paper introducing DeepSeek-R1.