LLMpeople
Public atlas: people first, reports as evidence, organizations as context.


MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning

A Multimodal Language Models report from Apple, with 17 connected researchers in the LLMpeople atlas.

Apple · Undated · 17 researchers

Field: Multimodal Language Models
Organization: Apple
arXiv: 2409.20566

Canonical link

https://arxiv.org/abs/2409.20566

Connected researchers

Yinfei Yang
Apple · 2 reports

Research scientist at Apple focused on natural language processing and machine learning.
Haoxuan You
Apple · 1 report

Research scientist on Apple Foundation Models whose work focuses on machine learning systems, multimodal foundation models, and AI agents.
Peter Grasch
Apple · 2 reports

Research scientist at Apple focused on state-of-the-art machine learning and computer vision methods.
Zirui Wang
Apple · 2 reports

Senior researcher at Apple working on large models, multimodal learning, and speech processing, according to his personal site.
Zhengfeng Lai
Apple · 1 report

AI/ML engineer at Apple working on generative AI and multimodal learning. He is also a PhD student at Cornell University whose interests include multimodal learning, model reasoning, and interpretability, and he has previously interned at Apple, Google, and Meta.
Bowen Zhang
Apple · 2 reports

Research scientist at Apple working on large language models, vision-language models, and model scaling.
Dhruti Shah
Apple · 2 reports

Researcher working on machine learning, vision and language, computer vision, diffusion, and generative AI.
Jean-Philippe Fauconnier
Apple · 2 reports

Research scientist at Apple Foundation Models working on generative AI, large language models, and multimodal models.
Philipp Dufter
Apple · 2 reports

Research scientist at Apple Foundation Models with interests in natural language processing, structured generation, controllable generation, and algorithmic efficiency.
Xianzhi Du
Apple · 2 reports

Research scientist at Apple working on language and vision-language modeling, AI agents, and post-training.
Zhe Gan
Apple · 2 reports

Machine learning researcher at Apple working on large multimodal foundation models, video generation, and vision-language systems.
Afshin Dehghan
Apple · 1 report

Research scientist at Apple focused on computer vision, multimodal learning, and robotics.
Aleksei Timofeev
Apple · 1 report

Research scientist whose public OpenReview profile lists work on multimodal representation learning, speech synthesis, and personalized voice generation.
Forrest Huang
Apple · 1 report

Research scientist at Apple Foundation Models working on efficient training and multimodal language models.
Hong-You Chen
Apple · 1 report

AI and machine learning engineer at Apple working on multimodal foundation models; previously worked at Snap and the University of Southern California.
Keen You
Apple · 1 report

Research scientist at Apple specializing in post-training, reinforcement learning, and AI agents.
Mingfei Gao
Apple · 1 report

Researcher working on machine learning, optimization, and sequential data.

LLMpeople is a public atlas for discovering frontier AI researchers with context, provenance, and respect.

Privacy · Terms