[Figure: three-stage progressive training pipeline: Teacher Self-Preparation, Teacher-Student Guidance, Student Self-Practice]
AI personal assistants, deployed through robots or wearables, require embodied understanding to collaborate effectively with humans. Current Multimodal Large Language Models (MLLMs) primarily focus on third-person (exocentric) vision, overlooking the unique aspects of first-person (egocentric) videos. Additionally, high acquisition costs limit data size, impairing MLLM performance. To address these challenges, we propose learning the mapping between the exocentric and egocentric domains, leveraging the extensive exocentric knowledge within existing MLLMs to enhance egocentric video understanding. To this end, we introduce Ego-ExoClip, a pre-training dataset comprising 1.1M synchronized ego-exo clip-text pairs derived from Ego-Exo4D. Our approach features a progressive training pipeline with three stages: Teacher Self-Preparation, Teacher-Student Guidance, and Student Self-Practice. Additionally, we propose EgoIT, an instruction-tuning dataset curated from multiple sources, to strengthen the model's instruction-following capabilities, along with the EgoBench benchmark, which comprises eight distinct tasks for thorough evaluation. Extensive experiments across diverse egocentric tasks reveal that existing MLLMs perform inadequately in egocentric video understanding, while our model significantly outperforms these leading models.
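To make the staged schedule concrete, the sketch below illustrates one plausible way to implement a progressive freeze/unfreeze policy across the three stages. The module names (vision_encoder, projector, llm), the stand-in layers, and the specific freezing choices per stage are assumptions for illustration only, not the paper's exact implementation.

```python
# Hypothetical sketch of a three-stage progressive training schedule
# (Teacher Self-Preparation -> Teacher-Student Guidance -> Student Self-Practice).
# Module names and the freezing policy are illustrative assumptions.
import torch
import torch.nn as nn


class Exo2EgoSketch(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.vision_encoder = nn.Linear(768, dim)  # stand-in for a video encoder
        self.projector = nn.Linear(dim, dim)       # maps visual features into the LLM space
        self.llm = nn.Linear(dim, dim)             # stand-in for the MLLM backbone

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.llm(self.projector(self.vision_encoder(frames)))


def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable


def configure_stage(model: Exo2EgoSketch, stage: str) -> None:
    """Freeze/unfreeze modules per training stage (illustrative policy)."""
    if stage == "teacher_self_preparation":
        # Warm up the exocentric (teacher) side on exocentric clip-text pairs.
        set_trainable(model.vision_encoder, True)
        set_trainable(model.projector, True)
        set_trainable(model.llm, False)
    elif stage == "teacher_student_guidance":
        # Align egocentric features with the exocentric teacher using
        # synchronized Ego-ExoClip pairs.
        set_trainable(model.vision_encoder, False)
        set_trainable(model.projector, True)
        set_trainable(model.llm, False)
    elif stage == "student_self_practice":
        # Instruction-tune on EgoIT so the egocentric branch follows task instructions.
        set_trainable(model.projector, True)
        set_trainable(model.llm, True)
    else:
        raise ValueError(f"unknown stage: {stage}")


model = Exo2EgoSketch()
for stage in ("teacher_self_preparation", "teacher_student_guidance", "student_self_practice"):
    configure_stage(model, stage)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"{stage}: {trainable} trainable parameters")
```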
In this paper, we aim to improve egocentric video understanding by transferring, at low cost, the extensive exocentric knowledge embedded in MLLMs. To achieve this, we first construct Ego-ExoClip, a diverse clip-text dataset with synchronized ego-exo perspectives. We then design a progressive three-stage training pipeline to effectively learn the mappings between the two domains. To enhance instruction-following abilities for downstream tasks, we introduce EgoIT, an egocentric instruction-tuning dataset. Moreover, we propose EgoBench, a benchmark designed to comprehensively evaluate the embodied cognitive abilities of existing MLLMs. Extensive experiments validate the superiority of our Exo2Ego framework on egocentric tasks.
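For readers wondering what a synchronized ego-exo clip-text pair might look like in practice, the record below is a minimal sketch; the field names and file layout are assumptions about what an Ego-ExoClip entry would need to contain, not the released data format.

```python
# Illustrative record for one synchronized ego-exo clip-text pair.
# Field names and paths are hypothetical, for exposition only.
from dataclasses import dataclass


@dataclass
class EgoExoPair:
    take_id: str        # source take in Ego-Exo4D
    ego_clip_path: str  # first-person (egocentric) video clip
    exo_clip_path: str  # time-synchronized third-person (exocentric) clip
    start_sec: float    # shared start time within the take
    end_sec: float      # shared end time within the take
    text: str           # narration / caption describing the activity


pair = EgoExoPair(
    take_id="take_0001",
    ego_clip_path="clips/take_0001_ego.mp4",
    exo_clip_path="clips/take_0001_exo_cam1.mp4",
    start_sec=12.0,
    end_sec=16.0,
    text="The person whisks eggs in a metal bowl.",
)
print(pair.text)
```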