Yixiao Ge
- geyixiao831@gmail.com
- Google Scholar
- Github
- Beijing, China
I am currently a senior researcher at Tencent ARC Lab and Tencent AI Lab, leading an effort on multimodal foundation models, open-world visual comprehension, and efficient AI. Previously, I received my Ph.D. from the Multimedia Lab (MMLab) at the Chinese University of Hong Kong, advised by Prof. Hongsheng Li and Prof. Xiaogang Wang. We are actively looking for self-motivated interns to work on related research topics. Please feel free to reach out if you are interested.
News:
Check out our SEED ([Project Page])!
- [Oct 2023] Excited to unveil SEED-LLaMA, featuring multi-turn in-context emergent capabilities.
- [Sep 2023] Three papers are accepted to NeurIPS 2023.
- [Aug 2023] Glad to release ViT-Lens, advancing omni-modal representation learning.
- [Aug 2023] Glad to release SEED-Bench, the most comprehensive MLLM benchmark to date.
- [July 2023] Glad to release SEED, an image tokenizer tailored for LLM.
- [July 2023] Four papers are accepted to ICCV 2023.
- [May 2023] One paper is accepted to KDD 2023.
- [Apr 2023] One paper is accepted to ICML 2023.
- [Feb 2023] Four papers are accepted to CVPR 2023.
- [Jan 2023] One paper is accepted to ICLR 2023.
- [Jan-Nov 2022] 11 papers were accepted to ICLR/CVPR/IJCAI/ECCV 2022 and AAAI 2023, including 2 orals.
- [Mar-Jul 2021] 5 papers were accepted to CVPR/ICCV 2021.
- [Jan-Sep 2020] 3 papers were accepted to ICLR/ECCV/NeurIPS 2020, including 1 spotlight.
Selected Projects
Multimodal Foundation Models:
- Vision-language: We aim to develop foundation models that unify visual comprehension and generation tasks within one framework.
  Given the great success of Large Language Models (LLMs), we took an initial step by empowering off-the-shelf LLMs to perform visual tasks via plugins (GPT4Tools @NeurIPS23). While a feasible solution, it falls short of multimodal emergent abilities.
  We are further devoted to developing an end-to-end framework that supports flexible input/output formats, transitioning and reasoning seamlessly between multimodal signals while acquiring knowledge from an inherently multimodal world. Check out our SEED for details.
  Previously, we focused on vision-language representation pre-training and video-text retrieval, e.g., MCQ @CVPR22 (Oral) and All-in-One @CVPR23. We have also built applications such as Tune-A-Video @ICCV23.
- Omni-modal: A real AI agent (e.g., a smart robot) should be capable of sensing all modalities. This is non-trivial, especially for rare modalities. Check out our solution, ViT-Lens. Omni-modal representations hold great potential for emergent applications; see our DreamDiffusion.
- Data-centric: High-quality, large-scale data is a prerequisite for training foundation models. On the data side, we collect large-scale TV dramas (PTVD, Tencent Video authorization) and memes (Sticker820K, Tencent Search authorization). We also focus on properly evaluating multimodal LLMs, proposing SEED-Bench ([leaderboard]).
Open-world Visual Comprehension:
- Visual representation: We are committed to improving image representation (e.g., mc-BEiT @ECCV22, ConMIM @ICLR23, RILS @CVPR23) and video representation (e.g., TVTS @CVPR23, TVTSv2) via large-scale pre-training.
- Visual perception: We also tackle visual perception tasks such as detection and segmentation. Check out our MIMDet @ICCV23 and BoxSnake @ICCV23.
Efficient AI:
We introduced the new topic of hot-refresh model upgrades (RACT @ICLR22) for large-scale retrieval systems, which is practical in industry yet under-explored in academia. Beyond retrieval, upgrading the foundation models in current AI systems is also costly, because all downstream modules need to be retrained to adapt. Check out our TaCA for a solution. We are also interested in model selection (SFDA @ECCV22, PED @ICCV23), binarization (BEBR @KDD23), etc.
Our algorithms have helped Tencent effectively reduce costs and increase efficiency. We won the highest technical award within the company and the SZCCF Science and Technology Award.
Publications [Full List]
Selected Preprints:
- Making LLaMA SEE and Draw with SEED Tokenizer
  Offers unified multimodal comprehension and generation, featuring multi-turn in-context emergent capabilities, akin to an AI assistant.
  Yuying Ge*, Sijie Zhao*, Ziyun Zeng, Yixiao Ge#, Chen Li, Xintao Wang, Ying Shan
- Planting a SEED of Vision in Large Language Models
  Empowers Large Language Models (LLMs) with the emergent ability to see and draw.
  Yuying Ge*, Yixiao Ge*#, Ziyun Zeng, Xintao Wang, Ying Shan
- ViT-Lens: Towards Omni-modal Representations
  Advances omni-modal representation learning with a modality lens.
  Weixian Lei, Yixiao Ge#, Jianfeng Zhang, Dylan Sun, Kun Yi, Ying Shan, Mike Zheng Shou#
- SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension
  Consists of 19K multiple-choice questions with accurate human annotations, spanning 12 evaluation dimensions of both spatial and temporal comprehension.
  Bohao Li*, Rui Wang*, Guangzhi Wang*, Yuying Ge#, Yixiao Ge#, Ying Shan
- TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter
  Enables new ViTs to be plugged into frameworks such as BLIP-2 with all other modules untouched, while boosting performance.
  Binjie Zhang, Yixiao Ge#, Xuyuan Xu, Ying Shan, Mike Zheng Shou#
- What Makes for Good Visual Tokenizers for Large Language Models?
  Rather than simply applying CLIP models, we systematically investigate proper pre-training methods for building good visual tokenizers that turn LLMs into powerful multimodal LLMs.
  Guangzhi Wang, Yixiao Ge#, Xiaohan Ding, Mohan Kankanhalli, Ying Shan
- TVTSv2: Learning Out-of-the-box Spatiotemporal Visual Representations at Scale
  Produces general-purpose video features that work out of the box, surpassing InternVideo and ImageBind on zero-shot and linear tasks.
  Ziyun Zeng, Yixiao Ge#, Zhan Tong, Xihui Liu, Shu-Tao Xia, Ying Shan
2023:
- GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction
  Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, Ying Shan
- Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
  Yuchao Gu, Xintao Wang, Jay Zhangjie Wu, Yujun Shi, Yunpeng Chen, Zihan Fan, Wuyou Xiao, Rui Zhao, Shuning Chang, Weijia Wu, Yixiao Ge, Ying Shan, Mike Zheng Shou
- Meta-Adapter: An Online Few-shot Learner for Vision-Language Model
  Cheng Cheng, Lin Song, Ruoyi Xue, Hang Wang, Hongbin Sun, Yixiao Ge, Ying Shan
  NeurIPS, 2023 [Paper (Coming soon)]
- Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
  Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, Mike Zheng Shou
- Exploring Model Transferability through the Lens of Potential Energy
  Xiaotong Li, Zixuan Hu, Yixiao Ge, Ying Shan, Lingyu Duan
- BoxSnake: Polygonal Instance Segmentation with Box Supervision
  Rui Yang, Lin Song, Yixiao Ge, Xiu Li
- Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection
  Yuxin Fang*, Shusheng Yang*, Shijie Wang*, Yixiao Ge, Ying Shan, Xinggang Wang
- Binary Embedding-based Retrieval at Tencent
  Yukang Gan*, Yixiao Ge*, Chang Zhou*, Shupeng Su, Zhouchuan Xu, Xuyuan Xu, Quanchao Hui, Xiang Chen, Yexin Wang, Ying Shan
- π-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
  Chengyue Wu, Teng Wang, Yixiao Ge#, Zeyu Lu, Ruisong Zhou, Ying Shan, Ping Luo
- Accelerating Vision-Language Pretraining with Free Language Modeling
  Teng Wang, Yixiao Ge, Feng Zheng, Ran Cheng, Ying Shan, Xiaohu Qie, Ping Luo
- Masked Visual Reconstruction in Language Semantic Space
  Shusheng Yang, Yixiao Ge#, Kun Yi, Dian Li, Ying Shan, Xiaohu Qie, Xinggang Wang#
- Learning Transferable Spatiotemporal Representations from Natural Script Knowledge
  Ziyun Zeng*, Yuying Ge*, Xihui Liu, Bin Chen#, Ping Luo, Shu-Tao Xia, Yixiao Ge#
- All in One: Exploring Unified Video-Language Pre-training
  Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, Xiaohu Qie, Mike Zheng Shou
- Masked Image Modeling with Denoising Contrast
  Kun Yi*, Yixiao Ge*#, Xiaotong Li, Shusheng Yang, Dian Li, Jianping Wu, Ying Shan, Xiaohu Qie
- Darwinian Model Upgrades: Model Evolving with Selective Compatibility
  Binjie Zhang*, Shupeng Su*, Yixiao Ge#, Xuyuan Xu, Yexin Wang, Chun Yuan, Mike Zheng Shou, Ying Shan
  AAAI, 2023 [Paper]
- Video-Text Pre-training with Learned Regions
  Rui Yan, Mike Zheng Shou, Yixiao Ge, Alex Jinpeng Wang, Xudong Lin, Guanyu Cai, Jinhui Tang
2022:
- MILES: Visual BERT Pre-training with Injected Language Semantics for Video-text Retrieval
  Yuying Ge, Yixiao Ge, Xihui Liu, Jinpeng Wang, Jianping Wu, Ying Shan, Xiaohu Qie, Ping Luo
- Not All Models Are Equal: Predicting Model Transferability in a Self-challenging Fisher Space
  Wenqi Shao#, Xun Zhao, Yixiao Ge#, Zhaoyang Zhang, Lei Yang, Xiaogang Wang, Ying Shan, Ping Luo
- mc-BEiT: Multi-choice Discretization for Image BERT Pre-training
  Xiaotong Li, Yixiao Ge, Kun Yi, Zixuan Hu, Ying Shan, Lingyu Duan
- Towards Universal Backward-Compatible Representation Learning
  Binjie Zhang, Yixiao Ge#, Yantao Shen, Shupeng Su, Fanzi Wu, Chun Yuan#, Xuyuan Xu, Yexin Wang, Ying Shan
- Bridging Video-text Retrieval with Multiple Choice Questions
  Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, Ping Luo
- Object-aware Video-language Pre-training for Retrieval
  Alex Jinpeng Wang, Yixiao Ge, Guanyu Cai, Rui Yan, Xudong Lin, Ying Shan, Xiaohu Qie, Mike Zheng Shou
- Hot-Refresh Model Upgrades with Regression-Alleviating Compatible Training in Image Retrieval
  Binjie Zhang, Yixiao Ge#, Yantao Shen, Yu Li, Chun Yuan#, Xuyuan Xu, Yexin Wang, Ying Shan
- Dynamic Token Normalization Improves Vision Transformer
  Wenqi Shao, Yixiao Ge, Zhaoyang Zhang, Xuyuan Xu, Xiaogang Wang, Ying Shan, Ping Luo
- Uncertainty Modeling for Out-of-Distribution Generalization
  Xiaotong Li, Yongxing Dai, Yixiao Ge, Jun Liu, Ying Shan, Lingyu Duan
- Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID
  Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, Xiaogang Wang, Hongsheng Li
2021:
- Progressive Correspondence Pruning by Consensus Learning
  Chen Zhao*, Yixiao Ge*, Feng Zhu, Rui Zhao, Hongsheng Li, Mathieu Salzmann
- Online Pseudo Label Generation by Hierarchical Cluster Dynamics for Adaptive Person Re-identification
  Yi Zheng, Shixiang Tang, Guolong Teng, Yixiao Ge, Kaijian Liu, Donglian Qi, Jing Qin, Dapeng Chen
  ICCV, 2021 [Paper]
- Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification
  Xiao Zhang*, Yixiao Ge*, Yu Qiao, Hongsheng Li
  CVPR, 2021 [Paper]
- DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network
  Rui Liu, Yixiao Ge, Ching Lam Choi, Xiaogang Wang, Hongsheng Li
- Mutual CRF-GNN Network for Few-shot Learning
  Shixiang Tang, Dapeng Chen, Lei Bai, Kaijian Liu, Yixiao Ge, Wanli Ouyang
  CVPR, 2021 [Paper]
2020:
- Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID
  Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, Hongsheng Li
- Self-supervising Fine-grained Region Similarities for Large-scale Image Localization
  Yixiao Ge, Haibo Wang, Feng Zhu, Rui Zhao, Hongsheng Li
- Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification
  Yixiao Ge, Dapeng Chen, Hongsheng Li
Before 2020:
- FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification
  Yixiao Ge*, Zhuowan Li*, Haiyu Zhao, Guojun Yin, Shuai Yi, Xiaogang Wang, Hongsheng Li