Training of Event to Video

This Python script allows you to train a model that generates a video from events. The trained model can then be used to run the Inference Pipeline of Event to Video sample.

The source code of this script can be found in <install-prefix>/share/metavision/sdk/core_ml/python_samples/train_event_to_video when you install Metavision SDK from the installer or packages. For other deployment methods, check the Path of Samples page.

Expected Output

The training produces:

  • checkpoints (models at different training stages)

  • log files

  • videos generated on the test dataset

Setup & requirements

To run the script, you need:

  • path to the output folder

  • path to the training dataset:

    • a folder containing three subfolders named train, val and test

    • each subfolder should contain multiple image files (PNG or JPG); a helper sketch for building this layout is shown after this list

    Note

    If you don’t already have a set of images for training, you can start by using the testing data included with our SDK. Go to this page and click on the Download All button to retrieve an archive. Once downloaded, locate the folder openeb/core_ml/mini_image_dataset within the archive. This folder contains a small image dataset you can use to test your training process.

    Alternatively, you can build your dataset using images from the COCO dataset, which provides a comprehensive collection of labeled images suitable for training a model.
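If your images currently sit in a single flat folder, the sketch below shows one way to build the expected train/val/test layout. It is an illustrative helper, not part of the SDK; the source path, destination path and split ratios are assumptions chosen for the example.

# Illustrative helper (not part of the SDK): copy a flat folder of images
# into the train/val/test layout expected by the training script.
import random
import shutil
from pathlib import Path

def split_dataset(src_dir, dst_dir, ratios=(0.8, 0.1, 0.1), seed=0):
    """Copy images from src_dir into dst_dir/{train,val,test}."""
    images = sorted(p for p in Path(src_dir).iterdir()
                    if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    random.Random(seed).shuffle(images)
    n_train = int(ratios[0] * len(images))
    n_val = int(ratios[1] * len(images))
    splits = {"train": images[:n_train],
              "val": images[n_train:n_train + n_val],
              "test": images[n_train + n_val:]}
    for name, files in splits.items():
        out = Path(dst_dir) / name
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, out / f.name)

# Hypothetical paths for the example:
split_dataset("/path/to/my_images", "/path/to/dataset")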

Synthetic event data will be generated from the images during training. A model trained on purely synthetic data is expected to generalize well to real event data at inference time.
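For intuition, the sketch below illustrates the basic principle behind event simulation: a pixel emits an event when its log intensity changes by more than a contrast threshold between two frames. This is a simplified, assumed illustration of the idea, not the simulator actually used by the SDK.

# Simplified illustration (assumption, not the SDK's simulator): generate
# events from two consecutive frames by thresholding log-intensity changes.
import numpy as np

def frames_to_events(prev_frame, next_frame, threshold=0.2, eps=1e-3):
    """Return (x, y, polarity) arrays for pixels whose log intensity
    changed by more than the contrast threshold between two frames."""
    log_prev = np.log(prev_frame.astype(np.float32) + eps)
    log_next = np.log(next_frame.astype(np.float32) + eps)
    diff = log_next - log_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarities = (diff[ys, xs] > 0).astype(np.int8)  # 1: ON, 0: OFF
    return xs, ys, polarities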

How to start

To run the script:

python train_event_to_video.py /path/to/logging /path/to/dataset

To find the full list of options, run:

python train_event_to_video.py -h