
The spark of large models and embodied intelligence, ICML 2024 MFM-EAI Workshop call for papers and challenge launched


Workshop homepage: https://icml-mfm-eai.github.io/

Overview

In recent years, multimodal foundation models (MFMs) such as CLIP, ImageBind, DALL·E 3, GPT-4V, Gemini, and Sora have become one of the most eye-catching and rapidly developing areas of artificial intelligence. At the same time, the open-source community has produced representative MFM projects such as LLaVA, LAMM, MiniGPT-4, Stable Diffusion, and OpenSora.

Unlike traditional computer vision and natural language processing models, MFMs aim at general-purpose problem solving. By introducing MFMs, embodied AI (EAI) can better handle a wide range of complex tasks in simulators and in real-world environments. However, many problems at the intersection of MFM and EAI remain unexplored, including agents' long-horizon decision-making, motion planning, and generalization to new environments.

This Workshop will be dedicated to exploring several key issues, including but not limited to:

  1. Generalization ability of MFMs;
  2. MFMs for embodied AI;
  3. World models based on generative models;
  4. Data collection for imitation learning.

Workshop Call for Papers

This workshop focuses on multimodal foundation models (MFM), embodied AI (EAI), and research at the intersection of the two. Topics of this call for papers include but are not limited to:

  • Training and evaluation of MFM in open-ended scenarios
  • Data collection for training embodied Agents
  • Framework designs for MFM-powered embodied agents
  • Perception and high-level planning in embodied agents empowered by MFM
  • Decision-making and low-level control in embodied agents empowered by MFM
  • Evaluation of the capability of embodied agents
  • Generative model as world simulator
  • Limitations of MFM in empowering EAI

Submission rules

Submissions will undergo double-blind review on the OpenReview platform. The main text is limited to 4 pages; there is no limit on the length of references and supplementary materials.

  • The submission format and template follow the ICML 2024 submission guidelines: https://icml.cc/Conferences/2024/CallForPapers
  • Submission portal: https://openreview.net/group?id=ICML.cc/2024/Workshop/MFM-EAI

Key dates

All deadlines are in AoE (Anywhere on Earth) time.

[Figure: timeline of key dates]

MFM-EAI Challenge

Three tracks (teams may enter more than one track at the same time)

  1. EgoPlan Challenge

The EgoPlan Challenge evaluates the ability of multimodal large models to plan real-world tasks drawn from daily human activities. Given a task goal description, a first-person (egocentric) video, and the current environment observation, the model must select reasonable actions to complete the task; a minimal interface sketch is given after the registration and prize details below.

  • Official website of the competition: https://chenyi99.github.io/ego_plan_challenge/
  • Registration method: Fill out the [Google Form](https://docs.google.com/forms/d/e/1FAIpQLScnWoXjZcwaagozP3jXnzdSEXX3r2tgXbqO6JWP_lr_fdnpQw/viewform?usp=sf_link)
  • Registration period: now until July 1, 2024
  • Prizes:

    • Winner: $800
    • Runner-up: $600
    • Innovation Award: $600
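
To make the task format concrete, here is a minimal Python sketch of how a team might query a multimodal model for this track. The `EgoPlanSample` fields, the candidate-action framing, and the `model.generate` call are illustrative assumptions only, not the official challenge interface; see the competition website for the authoritative data format.

```python
# Illustrative sketch only -- field names and the model interface are
# assumptions, not the official EgoPlan Challenge API.
from dataclasses import dataclass
from typing import List

@dataclass
class EgoPlanSample:
    task_goal: str               # e.g. "make a cup of coffee"
    video_frames: List[str]      # paths to egocentric video frames
    current_observation: str     # path to the current first-person frame
    candidate_actions: List[str]

def select_action(model, sample: EgoPlanSample) -> str:
    """Ask a multimodal model to pick a reasonable next action."""
    prompt = (
        f"Task goal: {sample.task_goal}\n"
        f"Given the egocentric video and the current observation, choose the "
        f"next action from: {sample.candidate_actions}"
    )
    # `model.generate` stands in for whatever MFM inference call a team uses.
    return model.generate(
        images=sample.video_frames + [sample.current_observation],
        text=prompt,
    )

def accuracy(model, samples: List[EgoPlanSample], labels: List[str]) -> float:
    """Fraction of samples where the predicted action matches the reference."""
    correct = sum(select_action(model, s) == y for s, y in zip(samples, labels))
    return correct / len(samples)
```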
  2. Composable Generalization Agent Challenge

The Composable Generalization Agent Challenge evaluates the task-completion and generalization abilities of combined planning-and-execution systems in open-ended scenarios. The model decomposes a task given its language description and multimodal visual input, and a controller executes the resulting subtasks; a sketch of this planner-controller structure follows the note below.

  • More details will be announced in July
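
Pending the official release, the sketch below illustrates the planner-controller decomposition described above: a high-level MFM planner splits the task into subtasks, and a low-level controller executes them. The `Planner`, `Controller`, and environment interfaces are placeholders invented for illustration, not the challenge's actual API.

```python
# Illustrative planner-controller sketch; all interfaces are assumptions.
from typing import List

class Planner:
    """High-level MFM that decomposes a language task into subtasks."""
    def decompose(self, task_description: str, observation) -> List[str]:
        # e.g. "put the apple in the drawer" ->
        # ["open the drawer", "pick up the apple", "place it inside", "close the drawer"]
        raise NotImplementedError

class Controller:
    """Low-level policy that executes a single subtask in the environment."""
    def execute(self, subtask: str, env) -> bool:
        raise NotImplementedError  # True if the subtask succeeded

def run_episode(planner: Planner, controller: Controller, env, task: str) -> bool:
    """Decompose the task, execute subtasks in order, and report success."""
    observation = env.reset()
    for subtask in planner.decompose(task, observation):
        if not controller.execute(subtask, env):
            return False  # stop early if a subtask fails
    return env.task_completed()
```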
  3. World Model Challenge

The World Model Challenge evaluates how well world simulators perform in embodied-AI scenarios. Given an embodied task description and real-time scene observations, the model generates videos that follow the task instructions; submissions are assessed on the quality of the generated videos and on their ability to guide an agent to complete the task. A minimal interface sketch follows the note below.

  • More details will be announced in July
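
Again pending the official details, here is a minimal sketch of the interface implied by the description: a world model that generates an instruction-following video from the task description and the current observation, scored both on video quality and on how well the video guides an agent. All names here are illustrative assumptions.

```python
# Illustrative world-model sketch; names and interfaces are assumptions.
from typing import Callable, List, Tuple

class WorldModel:
    """Generates predicted future frames that follow a task instruction."""
    def generate_video(self, task_description: str, observation) -> List:
        raise NotImplementedError

def evaluate(world_model: WorldModel, agent, env, task: str,
             quality_metric: Callable[[List], float]) -> Tuple[float, bool]:
    """Score a world model on (1) video quality and (2) task guidance."""
    observation = env.reset()
    video = world_model.generate_video(task, observation)
    quality = quality_metric(video)      # e.g. a perceptual video-quality score
    success = agent.follow(video, env)   # agent uses the video as guidance
    return quality, success
```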

Committee Members

Workshop Organizer

[Figure: list of workshop organizers]

Steering Committee

[Figure: steering committee members]

For workshop-related questions, contact icmlmfmeai@gmail.com.


Source: jiqizhixin.com