Inference Pipeline of Event to Video
This script runs the event-to-video inference pipeline. You can use it with our pre-trained PyTorch model
e2v.ckpt from our pre-trained models.
The source code of this script can be found in the samples directory
when installing Metavision SDK from installer or packages. For other deployment methods, check the
Path of Samples page.
The script takes event stream as input and displays a video and/or generates a video file on disk.
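At a high level, the pipeline accumulates incoming events into tensors and feeds them to the network frame by frame. The sketch below shows only the event-accumulation step with a toy 2-channel (per-polarity) count representation; the function name, event layout `(x, y, p, t)`, and tensor format are illustrative assumptions, not the actual implementation of demo_event_to_video.py.

```python
# Minimal sketch of the event-accumulation step of an event-to-video
# pipeline. Assumptions: events are (x, y, polarity, timestamp_us)
# tuples and frames are simple per-polarity count tensors.
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (x, y, p, t) events into a 2-channel count tensor."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, p, _t in events:
        frame[int(p), int(y), int(x)] += 1.0
    return frame

# Toy events on a 4x4 sensor: (x, y, polarity, timestamp in microseconds)
events = [(1, 2, 0, 100), (1, 2, 1, 150), (3, 0, 0, 200)]
frame = events_to_frame(events, height=4, width=4)
print(frame.shape)  # (2, 4, 4)
```

In the real pipeline, a tensor like this would be passed to the network, which predicts one grayscale video frame per accumulation window.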
Setup & requirements
To run the script, you will need:
- a pre-trained PyTorch model (e.g. e2v.ckpt from our pre-trained models)
- an event-based camera or a recording file
The model may be sensitive to the input frame resolution. If you downsampled the tensor input during training, pass the same tensor resolution during inference as well, using the --height_width argument.
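To illustrate why the resolutions should match, the sketch below average-pools an input tensor down to a training resolution, roughly what selecting a smaller --height_width achieves. The function and the 240x320 target are illustrative assumptions, not the script's internals.

```python
# Sketch: bringing an inference tensor to the resolution used during
# training (illustrative; not the actual --height_width implementation).
import numpy as np

def downsample(frame, target_h, target_w):
    """Average-pool a (C, H, W) tensor to (C, target_h, target_w).
    Assumes H and W are integer multiples of the targets."""
    c, h, w = frame.shape
    fh, fw = h // target_h, w // target_w
    return frame.reshape(c, target_h, fh, target_w, fw).mean(axis=(2, 4))

# e.g. a VGA event tensor reduced to a hypothetical 240x320 training size
frame = np.ones((2, 480, 640), dtype=np.float32)
small = downsample(frame, 240, 320)
print(small.shape)  # (2, 240, 320)
```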
How to start
To run the script on recorded data, provide the full path to the pre-trained model and the path to the input file; leave the file path empty to use a live camera. For example:
Linux:
python3 demo_event_to_video.py /path/to/sequence.raw /path/to/e2v.ckpt

Windows:
python demo_event_to_video.py /path/to/sequence.raw /path/to/e2v.ckpt
Useful options include:
- --delta_t 10000 to select the accumulation time in microseconds
- --video_path /path/to/output to generate an MP4 video
To find the full list of options, run:
Linux:
python3 demo_event_to_video.py -h

Windows:
python demo_event_to_video.py -h