Qwen2.5-Coder Technical Report
A code language models report from Alibaba Qwen, with 8 connected researchers in the LLMpeople atlas.
Connected researchers
Junyang Lin
Alibaba Qwen
Junyang Lin (Justin Lin) is a researcher and open-source maintainer known for the Qwen family of models. His public profiles list interests in LLMs, AI agents, multimodal learning, long-horizon reasoning, world models, and reinforcement learning; multiple news reports in March 2026 said he had stepped down from the Qwen tech lead role.
Shuai Bai
Alibaba Qwen
Senior algorithm expert at Alibaba Group working on large language models, multimodal large language models, and diffusion models.
Zeyu Cui
Alibaba Qwen
Research scientist at Meta in New York City and research advisor at the UCLA NLP group; previously completed a PhD in computer science at UCLA.
Jinze Bai
Alibaba Qwen
PhD student at The Hong Kong University of Science and Technology (Guangzhou) whose research interests include large language models, vision-language models, AI agents, and multimodal retrieval.
Kai Dang
Alibaba Qwen
Researcher on Alibaba's Qwen team focused on large language models and NLP; public research profiles list a Nankai University background.
Xiaodong Deng
Alibaba Qwen
Research scientist in Tongyi Lab whose official profile highlights post-training and multimodal large language models.
Wenbin Ge
Alibaba Qwen
Research scientist in Tongyi Lab whose official profile highlights work on efficient reinforcement learning, generalization, inference-time scaling, and reasoning for large language models.
Chang Zhou
Alibaba Qwen
Qwen researcher and co-lead whose work focuses on pretraining and post-training, multimodal models, agent systems, and large-scale model infrastructure.