Video to Event GPU Simulator

This Python script allows you to transform frame-based images or videos into their event-based counterparts.

The source code of this script can be found in <install-prefix>/share/metavision/sdk/core_ml/python_samples/viz_video_to_event_gpu_simulator when you install Metavision SDK from the installer or packages. For other deployment methods, check the page Path of Samples.

Expected Output

When launched on a video with default options, the script shows a single window tiling, for each element of the specified batch_size:

  • the original video

  • the corresponding event-based video produced by the simulator
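The side-by-side layout above can be sketched as follows. This is a minimal illustration using NumPy, not the sample's actual rendering code; the frame shapes and the horizontal tiling are assumptions:

```python
import numpy as np

def tile_frames(original: np.ndarray, events: np.ndarray) -> np.ndarray:
    """Place an original frame and its simulated event frame side by side.

    Both inputs are assumed to be HxWx3 uint8 images of the same size.
    """
    assert original.shape == events.shape, "frames must share the same shape"
    return np.hstack([original, events])

# Example with dummy 120x160 RGB frames.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
event_frame = np.full((120, 160, 3), 255, dtype=np.uint8)
tiled = tile_frames(frame, event_frame)
print(tiled.shape)  # (120, 320, 3)
```

In the real sample, one such pair is produced per element of the batch and the pairs are stacked into a single display window.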

Setup & requirements

To run the script, you need:

  • the full path to a directory containing image files (in PNG or JPG format) or videos (in MP4 or AVI format).
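Before launching, you can sanity-check that a directory actually contains supported files. This helper is a hypothetical sketch (not part of the sample) that only assumes the extensions listed above:

```python
from pathlib import Path

# Extensions the sample documentation lists as supported.
SUPPORTED = {".png", ".jpg", ".mp4", ".avi"}

def find_inputs(data_dir: str) -> list:
    """Return supported image/video files found in data_dir (non-recursive)."""
    return sorted(p for p in Path(data_dir).iterdir()
                  if p.suffix.lower() in SUPPORTED)
```

If `find_inputs` returns an empty list, the script would have nothing to process.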

Warning

To run this sample, you need a CUDA-capable GPU with CUDA installed so that PyTorch can leverage it. Refer to the Machine Learning Module Dependencies section of the installation guide for details.
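A quick way to verify that PyTorch can see a CUDA device before launching the sample; this check is a sketch, not part of the sample itself:

```python
def cuda_status() -> str:
    """Report whether PyTorch can use a CUDA device.

    Returns 'cuda' if a GPU is usable, 'cpu' if PyTorch is installed
    without CUDA support, and 'torch not installed' otherwise.
    """
    try:
        import torch
    except ImportError:
        return "torch not installed"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(cuda_status())
```

Anything other than `cuda` means the sample will not be able to run on the GPU.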

How to start

An example to run the script:

Linux

python3 viz_video_to_event_gpu_simulator.py /path/to/data_directory

Windows

python viz_video_to_event_gpu_simulator.py /path/to/data_directory

To find the full list of options, run:

Linux

python3 viz_video_to_event_gpu_simulator.py -h

Windows

python viz_video_to_event_gpu_simulator.py -h
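For illustration only, a CLI with the positional argument shown in the examples above could be sketched with argparse as follows. This is a hypothetical reconstruction; the sample's real option list is whatever `-h` prints:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of the sample's CLI.

    Only the positional data-directory argument used in the run
    examples is assumed; consult the script's -h output for the
    actual options.
    """
    parser = argparse.ArgumentParser(
        description="Visualize frame-to-event GPU simulation.")
    parser.add_argument(
        "path",
        help="directory of images (png/jpg) or videos (mp4/avi)")
    return parser

args = build_parser().parse_args(["/path/to/data_directory"])
print(args.path)
```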