EgoExo4D Challenge 2025

Ego-Exo4D is a diverse, large-scale, multi-modal, multi-view video dataset and benchmark challenge. It centers on simultaneously captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair).

At the EgoVis workshop at CVPR 2025, we will host five challenges spanning the Ego-Exo4D benchmarks. Details on each challenge are below:

  • Ego-Pose Body: Given an egocentric video, estimate the 3D body pose of the camera wearer. Specifically, predict the 3D position of each of the 17 annotated body joints for every frame (a format sketch follows this list).

  • Ego-Pose Hands: Estimate the 3D locations of the defined hand joints for the visible hand(s). Specifically, estimate the (x, y, z) coordinates of each joint in the egocentric coordinate frame.

  • Correspondence: Given a pair of time-synchronized egocentric and exocentric videos, as well as a query object track in one of the views, output the corresponding mask for the same object instance in the other view, for all frames in which the object is visible in both views (see the mask-encoding sketch after this list).

    • Refer to the GitHub repository to get started.
  • Fine-grained Keystep Recognition: Predict the keystep label for a trimmed egocentric video clip.

    • Refer to the GitHub repository to get started.
  • Demonstrator Proficiency: Given synchronized egocentric and exocentric video of a demonstrator performing a task, classify the demonstrator's skill level.

    • Refer to the GitHub repository to get started.
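
To make the Ego-Pose output concrete, here is a minimal Python sketch of a per-frame body-pose prediction. The array shapes follow the task description above (17 body joints, with (x, y, z) per joint); the key names and file layout are assumptions for illustration only, so check the EvalAI challenge page for the official submission format.

    import json
    import numpy as np

    # Hypothetical layout: one entry per take, one (17, 3) array per frame.
    # The real key names and joint ordering are defined on EvalAI.
    num_frames = 300
    body_pose = np.zeros((num_frames, 17, 3))  # replace with model output

    predictions = {
        "take_uid_placeholder": {
            str(frame_idx): body_pose[frame_idx].tolist()
            for frame_idx in range(num_frames)
        }
    }

    with open("body_pose_predictions.json", "w") as f:
        json.dump(predictions, f)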
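
For the Correspondence task, the predictions are per-frame binary object masks. One common way to serialize such masks compactly is COCO run-length encoding via pycocotools; the sketch below assumes that encoding and uses hypothetical key names, so refer to the task's GitHub repository for the required format.

    import numpy as np
    from pycocotools import mask as mask_utils

    # Hypothetical example: encode one predicted binary mask as COCO RLE.
    binary_mask = np.zeros((1080, 1920), dtype=np.uint8)
    binary_mask[400:600, 800:1100] = 1  # replace with model output

    rle = mask_utils.encode(np.asfortranarray(binary_mask))
    rle["counts"] = rle["counts"].decode("ascii")  # make JSON-serializable

    prediction = {"frame_number": 42, "pred_mask": rle}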

Other Ego-Exo4D challenges that are not part of the CVPR 2025 workshop remain open for submissions on the EvalAI website, but they are not eligible for prizes.

Dataset

Challenge participants will use the Ego-Exo4D dataset for these challenges. Documentation for the dataset can be found here.

Participation Guidelines

Participate in the contest by registering on the EvalAI challenge page and creating a team. All participants must register as part of a “participating team” on EvalAI so that submission limits are honored. Participants upload their predictions in the format specified for each challenge; submissions are evaluated on AWS instances by comparing predictions against the ground-truth annotations. Instructions for training, local evaluation, and online submission are provided on EvalAI. Please refer to each challenge's EvalAI page for submission guidelines, task specifications, and evaluation criteria.
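
As a rough illustration of the submission flow, the sketch below writes predictions to a JSON file and then submits it with the EvalAI command-line client. The file structure, challenge name, and phase name here are placeholders; take the actual identifiers and required format from the relevant EvalAI challenge page.

    import json

    # Placeholder structure; each challenge specifies its own format on EvalAI.
    results = {"predictions": []}  # fill with your model's outputs
    with open("results.json", "w") as f:
        json.dump(results, f)

    # Then submit with the EvalAI CLI (after `pip install evalai` and
    # `evalai set_token <your_token>`); the names below are placeholders:
    #   evalai challenge <challenge_name> phase <phase_name> submit --file results.json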

Dates

Competition Rules and Prize Information

Competition rules can be found here. Additionally, we are thrilled that FAIR is able to offer the following prizes for each challenge:

  • First place: $500
  • Second place: $300
  • Third place: $200

Challenge Reports

In addition to the submission on EvalAI, participants must submit a report describing their method via the workshop CMT link. Along with your method and results, please remember to include examples of both positive and negative results (limitations) of your model. These validation reports will be evaluated by challenge hosts from the Ego4D consortium before winners are determined. For every winning entry, the validation report, the research code, and the names of the team members must be shared publicly with the research community. More details can be found on the EgoVis workshop page.

Acknowledgements

The EgoExo4D challenges would not have been possible without the infrastructure and support of the EvalAI team. Thank you!

Organizers

  • Xizi Wang
  • Suyog Jain
  • Andrew Westbury
  • Tushar Nagarajan
  • Sherry Xue
  • Jinxu Zhang
  • Shan Shu
  • Gabriel Pérez Santamaria
  • Juanita Puentes
  • Maria Camila Escobar Palomeque
  • Arjun Somayazulu
  • Sanjay Haresh
  • Yale Song
  • Manolis Savva
  • Pablo Arbelaez
  • Jianbo Shi
  • Kristen Grauman

Past Challenges / Winners

CVPR Workshop 2024 (June 17, 2024)

CVPR Workshop 2023 (June 19, 2023)

ECCV Workshop 2022 (Oct 24, 2022)

CVPR Workshop 2022 (June 19, 2022)