Inference Pipeline of Event to Video

This script runs the event-to-video inference pipeline.

You can use it with e2v.ckpt, one of our pre-trained PyTorch models.

The source code of this script can be found in <install-prefix>/share/metavision/sdk/core_ml/python_samples/demo_event_to_video when installing Metavision SDK from the installer or packages. For other deployment methods, check the Path of Samples page.

Expected Output

The script takes an event stream as input and displays a video and/or writes a video file to disk.
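
To give a rough idea of what happens under the hood, the sketch below slices an event stream into fixed-duration chunks and builds dense tensors of the kind such a network consumes. It is illustrative only and not the sample's actual code: EventsIterator comes from the SDK's metavision_core Python API, while the per-polarity event-count encoding and the final network call are assumptions and may differ from what e2v.ckpt actually expects.

import numpy as np
import torch
from metavision_core.event_io import EventsIterator

# Slice the recording into 10 ms chunks of events
mv_iterator = EventsIterator("/path/to/sequence.raw", delta_t=10000)
height, width = mv_iterator.get_size()

for events in mv_iterator:
    # Naive per-polarity event-count image for this slice (placeholder encoding)
    counts = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(counts, (events["p"], events["y"], events["x"]), 1.0)
    tensor = torch.from_numpy(counts)[None]  # (1, 2, H, W) batch for the network
    # The pre-trained network would turn `tensor` into a grayscale frame here;
    # the frames are then displayed and/or written to a video file on disk.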

Setup & requirements

To run the script, you will need:

  1. the pre-trained model e2v.ckpt

  2. an event recording (for example, a RAW file) or a live event-based camera

Note

The model can be sensitive to the input frame resolution. If you downsampled the input tensors during training, pass the same tensor resolution during inference as well, using the --height_width argument.
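
For example, assuming the training tensors were downsampled to 240×320 (illustrative values; check the -h output below for the exact argument format), the call could look like this:

python demo_event_to_video.py /path/to/sequence.raw /path/to/e2v.ckpt --height_width 240 320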

How to start

To start the script on recorded data, provide the path to the input file and the full path to the pre-trained model. Leave the file path empty if you want to use a live camera instead. For example:

python demo_event_to_video.py /path/to/sequence.raw /path/to/e2v.ckpt
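
To use a live camera, the input path can be left empty; assuming the script accepts an empty string for the positional path (as the description above suggests), the call would look like this:

python demo_event_to_video.py "" /path/to/e2v.ckpt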

Note

  1. Use --delta_t 10000 to set the accumulation time in microseconds

  2. Use --video_path /path/to/output to generate an MP4 video (a combined example is shown after this list)
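
Putting both options together, an invocation could look like this (paths and values are illustrative):

python demo_event_to_video.py /path/to/sequence.raw /path/to/e2v.ckpt --delta_t 10000 --video_path /path/to/output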

To find the full list of options, run:

python demo_event_to_video.py -h