Training of Event to Video
This Python script allows you to train a model to generate a video from events.
The source code of this script can be found in <install-prefix>/share/metavision/sdk/core_ml/python_samples/train_event_to_video
when installing Metavision SDK from installer or packages. For other deployment methods, check the page
Path of Samples.
Expected Output
The training produces:
- checkpoints (models saved at different training stages)
- log files
- videos generated on the test dataset
Setup & requirements
To run the script, you need:
- the path to the output folder
- the path to the training dataset: a folder containing 3 subfolders named train, val, and test; each subfolder should contain multiple image files (png or jpg). A quick sanity check of this layout is sketched after this list.
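As an illustration only (this helper is not part of the SDK), a minimal sketch, assuming the train/val/test layout described above, that verifies a dataset folder before launching training:

    from pathlib import Path

    def check_dataset_layout(dataset_root):
        """Hypothetical helper: verify the train/val/test image folder layout."""
        root = Path(dataset_root)
        for split in ("train", "val", "test"):
            split_dir = root / split
            if not split_dir.is_dir():
                raise FileNotFoundError(f"Missing '{split}' subfolder in {root}")
            # Count png/jpg images anywhere inside the split folder
            images = [p for p in split_dir.rglob("*")
                      if p.suffix.lower() in (".png", ".jpg", ".jpeg")]
            if not images:
                raise ValueError(f"No png/jpg images found in {split_dir}")
            print(f"{split}: {len(images)} images")

    check_dataset_layout("/path/to/dataset")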
Synthetic event data will be generated during training. The model, trained purely on synthetic data, is expected to generalize well to real event data at inference time.
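The synthetic event generation used by the training script is provided by the SDK; purely to illustrate the principle, here is a minimal sketch that synthesizes events by thresholding the log-intensity change between two consecutive grayscale frames (the function name and threshold value are assumptions, not the SDK's simulator):

    import numpy as np

    def simulate_events(prev_frame, next_frame, threshold=0.15):
        """Illustrative sketch: return (x, y, polarity) arrays for pixels whose
        log-intensity change between two grayscale frames exceeds the threshold."""
        log_prev = np.log(prev_frame.astype(np.float32) + 1e-3)
        log_next = np.log(next_frame.astype(np.float32) + 1e-3)
        diff = log_next - log_prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        polarities = (diff[ys, xs] > 0).astype(np.int8)  # 1 for ON events, 0 for OFF events
        return xs, ys, polarities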
How to start
To run the script:
python train_event_to_video.py /path/to/logging /path/to/dataset
To find the full list of options, run:
python train_event_to_video.py -h