updated 2 public sources
CodeLLM, generative multimodality, vision-language pretraining, few-shot segmentation, visual grounding

Current frame

Research scientist focused on CodeLLM and generative multimodality.

Extended note

Public sources identify Yongfei Liu as a ByteDance researcher; his OpenReview profile lists a confirmed ShanghaiTech email, and he maintains a personal homepage. The homepage states that he received a PhD in 2022 through a joint program of the University of Chinese Academy of Sciences and ShanghaiTech University, advised by Xuming He, after earning a bachelor's degree from Xidian University in 2017. His publicly stated research interests are CodeLLM, generative multimodality, vision-language pretraining, few-shot segmentation, and visual grounding.