This Python sample is available only with our Professional plan.

Inference Pipeline of Object Classification

The script allows you to quickly set up an inference pipeline for object classification.

You can use it with our pretrained PyTorch Rock-Paper-Scissors model chifoumi_rnn.ckpt from our pre-trained models.

Expected Output

The script takes an event stream as input and generates a sequence of predictions.
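Conceptually, each accumulation step yields one prediction. A minimal sketch of how such a per-step sequence could be decoded follows; the class list, label order, and confidence threshold here are illustrative assumptions, not values read from the actual model:

```python
import numpy as np

# Hypothetical class names; the real label order is defined by the checkpoint.
CLASSES = ["background", "rock", "paper", "scissors"]

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decode_predictions(logit_sequence, threshold=0.7):
    """Turn a (T, num_classes) logit sequence into per-step labels.

    Steps whose top probability falls below `threshold` are reported as
    'background' to suppress spurious detections between gestures.
    """
    probs = softmax(np.asarray(logit_sequence, dtype=np.float64))
    labels = []
    for step in probs:
        idx = int(step.argmax())
        labels.append(CLASSES[idx] if step[idx] >= threshold else CLASSES[0])
    return labels

# Example: three time bins of made-up logits
fake_logits = [[0.1, 4.0, 0.2, 0.3],   # confident "rock"
               [0.5, 0.6, 0.7, 0.4],   # ambiguous -> background
               [0.0, 0.1, 0.2, 5.0]]   # confident "scissors"
print(decode_predictions(fake_logits))  # ['rock', 'background', 'scissors']
```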

The demo below shows a live Rock-Paper-Scissors game based on our inference script:

Setup & requirements

To run the script, you will need:

  • a pretrained PyTorch model (e.g. chifoumi_rnn.ckpt from our pre-trained models)

  • an event-based camera or a RAW, DAT or HDF5 input file.


The model can be sensitive to the input frame resolution. If you downsampled the input tensor during training, pass the same tensor resolution during inference as well, using the --height-width argument.
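To make the resolution dependency concrete, here is a hedged sketch of how events at the sensor resolution can be accumulated into a smaller polarity-histogram tensor; the function name and preprocessing details are assumptions for illustration, not the SDK's actual implementation:

```python
import numpy as np

def events_to_frame(x, y, p, sensor_wh, frame_hw):
    """Accumulate events into a 2-channel (polarity) frame of shape
    (2, height, width), downscaling coordinates from the sensor resolution.

    `sensor_wh` is the camera resolution (width, height); `frame_hw` is the
    tensor resolution (height, width) the model was trained with -- the kind
    of value you would pass via --height-width.
    """
    sw, sh = sensor_wh
    fh, fw = frame_hw
    frame = np.zeros((2, fh, fw), dtype=np.float32)
    # Scale coordinates down and clip border events into range.
    xs = np.clip((np.asarray(x) * fw) // sw, 0, fw - 1).astype(int)
    ys = np.clip((np.asarray(y) * fh) // sh, 0, fh - 1).astype(int)
    # Unbuffered add: repeated (p, y, x) triples accumulate correctly.
    np.add.at(frame, (np.asarray(p), ys, xs), 1.0)
    return frame

# Example: four events on a 640x480 sensor, binned into a 120x160 tensor
frame = events_to_frame(x=[0, 639, 320, 320], y=[0, 479, 240, 240],
                        p=[0, 1, 1, 1], sensor_wh=(640, 480),
                        frame_hw=(120, 160))
print(frame.shape, frame.sum())
```

A model trained on 120x160 tensors sees very different event densities per pixel than one trained at full resolution, which is why the inference resolution should match the training one.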

How to start

To start the script based on recorded data, you need to provide the full path to the pre-trained model and the path to the input file. Leave the file path empty if you want to use a live camera. For example:


python3 inference_script.py /path/to/chifoumi_rnn.ckpt -p "gesture_a.raw"

or, on Windows:

python inference_script.py /path/to/chifoumi_rnn.ckpt -p "gesture_a.raw"

(here inference_script.py is a placeholder for the name of the sample's inference script)


  1. To read directly from a camera, provide the camera serial number if several cameras are connected; otherwise, leave it blank.

  2. Use -w /path/to/output to generate an MP4 video.
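The options mentioned on this page could be wired together with argparse roughly as in the sketch below; this is an illustrative assumption about the CLI's shape, not the sample's actual argument parser, and only flags discussed here are reproduced:

```python
import argparse

def build_parser():
    """Hypothetical CLI mirroring the options described on this page."""
    parser = argparse.ArgumentParser(
        description="Object classification inference")
    parser.add_argument("checkpoint",
                        help="path to the pretrained .ckpt model")
    parser.add_argument("-p", "--path", default="",
                        help="RAW/DAT/HDF5 input file; empty means live camera")
    parser.add_argument("-w", "--write", default=None,
                        help="write the visualization to this MP4 file")
    parser.add_argument("--delta-t", type=int, default=10000,
                        help="accumulation time interval in microseconds")
    parser.add_argument("--height-width", type=int, nargs=2, default=None,
                        metavar=("H", "W"),
                        help="tensor resolution used during training")
    return parser

args = build_parser().parse_args(
    ["chifoumi_rnn.ckpt", "-p", "gesture_a.raw", "--delta-t", "20000"])
print(args.path, args.delta_t)  # gesture_a.raw 20000
```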


Normally, you should set the accumulation time interval (--delta-t) to the same value used during training. If bandwidth constraints prevent running live at that value, you can try increasing it, at a potential loss of accuracy.
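The effect of --delta-t is to slice the event stream into fixed-duration windows before each inference step. A minimal sketch of that slicing, assuming microsecond timestamps sorted in ascending order (the function name is illustrative):

```python
import numpy as np

def slice_by_delta_t(timestamps, delta_t):
    """Split a sorted array of event timestamps (microseconds) into
    consecutive windows of `delta_t` microseconds, mirroring the
    --delta-t accumulation interval. Returns one index array per window.
    """
    t = np.asarray(timestamps)
    if t.size == 0:
        return []
    # Window index of each event, relative to the first timestamp.
    bins = (t - t[0]) // delta_t
    return [np.flatnonzero(bins == b) for b in range(int(bins[-1]) + 1)]

# Example: events over 25 ms sliced into 10 ms (10000 us) windows
ts = [0, 3000, 9999, 10000, 15000, 24000]
windows = slice_by_delta_t(ts, delta_t=10000)
print([w.tolist() for w in windows])  # [[0, 1, 2], [3, 4], [5]]
```

Doubling delta_t halves the number of inference calls per second (lower bandwidth and compute), but each window then blurs together more motion, which is where the accuracy loss comes from.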

To find the full list of options, run:


python3 inference_script.py -h

or, on Windows:

python inference_script.py -h