BabyBench is a multimodal benchmark for intrinsic motivations and open-ended learning in developmental artificial intelligence. The objective is to teach typical behavioral milestones to MIMo, a multimodal infant model. We provide the embodiment, the simulation environments, and the evaluation metrics; all you need to do is implement your ideas.
The first BabyBench Competition will take place at the IEEE ICDL 2025 Conference in Prague! The topic of this first edition is how infants discover their own bodies. Can you help MIMo learn two typical infant behaviors: self-touch and hand regard? Make your submission and learn more about the competition here.
Prerequisites: Python, Git, and Conda. Tested on Ubuntu 18.04 and 24.04, Windows 10 and 11, and macOS 15.
-
Create a conda environment:
conda create --name babybench python=3.12
conda activate babybench
-
Clone this repository:
git clone https://github.com/babybench/BabyBench2025_Starter_Kit.git
cd BabyBench2025_Starter_Kit
-
Install requirements:
pip install -r requirements.txt
-
Clone and install MIMo:
pip install -e MIMo
All done! You are ready to start using BabyBench.
-
Launch the installation test:
python test_installation.py
This runs a test to check that everything is correctly installed. If the installation is successful, you should find a new folder called test_installation in the results directory with a rendered video of the test.
Prerequisites: Singularity. Tested on Ubuntu 24.04.
- Create the Singularity container:
singularity build -F babybench.sif babybench.def
This will create a Singularity container called babybench.sif in the current directory.
- Launch the container:
singularity run -c -H /home --bind "$PWD/:/home" babybench.sif
This runs a test to check that everything is correctly installed. If the installation is successful, you should find a new folder called test_installation in the results directory with a rendered video of the test.
If you encounter any issues, visit the troubleshooting page.
If you are not sure where to begin, we recommend having a look at the examples directory and this wiki page.
The aim of BabyBench is to get MIMo to learn the target behaviors without any external supervision, i.e. without extrinsic rewards. Your goal is to train a policy that maps sensory observations (proprioception, vision, touch) to actions. To do so, we provide an API to initialize and interact with the environments.
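Since no extrinsic reward is available, the training signal must be generated intrinsically from the agent's own observations. As one illustrative approach (not part of the BabyBench API, and not a prescribed method), the sketch below computes a simple count-based novelty reward over discretized observation vectors; the class and parameter names are our own:

```python
from collections import defaultdict
import numpy as np

class CountBasedNovelty:
    """Toy intrinsic-reward module: returns 1/sqrt(N(s)), where N(s)
    counts visits to a discretized observation bin, so rarely seen
    observations yield larger rewards than familiar ones."""

    def __init__(self, bin_size=0.25):
        self.bin_size = bin_size          # width of each discretization bin
        self.counts = defaultdict(int)    # visit counts per bin key

    def reward(self, observation):
        # Discretize the continuous observation vector into a hashable key
        key = tuple(np.floor(np.asarray(observation) / self.bin_size).astype(int))
        self.counts[key] += 1
        return 1.0 / np.sqrt(self.counts[key])

novelty = CountBasedNovelty()
obs = np.array([0.1, -0.3])
first = novelty.reward(obs)   # first visit to this bin
second = novelty.reward(obs)  # repeated visit yields a smaller reward
```

In a training loop, this reward would replace the (absent) extrinsic reward returned by the environment; prediction-error or information-gain signals are common alternatives.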
Submissions must be made through PaperPlaza. The topic of this first BabyBench competition is how infants discover their own bodies. There are two target behaviors: self-touch and hand regard.
First-round submissions should consist of:
-
A 2-page extended abstract explaining your method,
-
The training log file generated during training,
-
The trajectory log files generated during the evaluation,
-
Optionally, you can also submit links to the following:
- A repository with your training code,
- A folder with videos of the learned behaviors rendered during the evaluation.
For further information, check our Wiki.
In particular, if you want to know more about:
- the training environments, see here
- the target behaviors, see here
- examples, see here
- how to generate the submission files, see here
- the evaluation process, see here
- resources about intrinsic motivations and open-ended learning, see here
... or see the FAQ for common questions or errors.
Feel free to contact us with any questions about BabyBench. You can post an issue here on GitHub or contact the organizers via email.
We highly encourage you to collaborate with other participants! You can submit your problems, questions or ideas in the discussion forum.
This project is licensed under the MIT License - see LICENSE for details.