LLMpeople
Public Atlas: people first, reports as evidence, organizations as context.


Daya Guo

DeepSeek researcher focused on NLP, code intelligence, and LLM reasoning. His public work spans DeepSeek-Coder, DeepSeekMath, DeepSeek-V2, DeepSeek-V3, and DeepSeek-R1.

Researcher · 2 organizations · 11 reports

Profile status: updated


Trust signals

Profile completeness: 64%
Public sources: 2
Official sources: 1
Last reviewed: Apr 2, 2026

Public links

Personal homepage (website)
GitHub profile (github)

Organizations

DeepSeek (core)
Moonshot AI (core)

Reports

Large Language Models · DeepSeek-V3 Technical Report
Large Language Models · DeepSeek-V2 Technical Report
Large Language Models · DeepSeek LLM Technical Report
Large Language Models · DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
Large Language Models · Kimi k1.5: Scaling Reinforcement Learning with LLMs
Mathematical Reasoning Models · DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
Vision-Language Models · Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation
Vision-Language Models · JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
Code Language Models · DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
Mathematical Reasoning Models · DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search
Mathematical Reasoning Models · DeepSeek-Prover-V2: Advancing Formal Mathematical Reasoning via Reinforcement Learning and Monte-Carlo Tree Search with Proof Assistant Feedback

Official and primary sources

https://guoday.github.io/ · Official source · homepage

Supporting sources

https://github.com/guoday · Supporting source · github

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.
