Training of Optical Flow model

This Python script lets you train a self-supervised Unet Flow Regression Model on preprocessed event-based data in HDF5 (h5) format.

The source code of this sample can be found in <install-prefix>/share/metavision/sdk/ml/python_samples/train_flow when installing Metavision SDK from installer or packages. For other deployment methods, check the page Path of Samples.

Expected Output

Flow training results:

  • checkpoints (models at different training stages)

  • log files

  • video demos on test dataset


Setup & requirements

To run the script, you need:

  • path to the output directory

  • path to the training dataset:

    • a folder containing two subfolders named train and val, each containing one or more h5 files
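
Before launching a long training run, it can be worth sanity-checking that the dataset folder matches this layout. The helper below is an illustrative sketch (not part of the SDK) that verifies the train/val subfolders exist and contain h5 files:

```python
from pathlib import Path

def check_dataset_layout(dataset_dir):
    """Sanity-check the expected layout: <dataset>/train/*.h5 and <dataset>/val/*.h5.

    Illustrative helper, not part of the Metavision SDK.
    Returns a list of problems; an empty list means the layout looks valid.
    """
    root = Path(dataset_dir)
    problems = []
    for split in ("train", "val"):
        split_dir = root / split
        if not split_dir.is_dir():
            problems.append(f"missing subfolder: {split_dir}")
        elif not list(split_dir.glob("*.h5")):
            problems.append(f"no .h5 files in: {split_dir}")
    return problems
```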


The current model is trained in a self-supervised fashion, but you can easily adapt it to a supervised fashion if you have flow labels at hand. This feature will be officially published in an upcoming release.
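
Adapting the training to a supervised setup typically amounts to replacing the self-supervised objective with a loss against the ground-truth flow. A common choice is the average endpoint error (AEPE), sketched below with NumPy for illustration; the SDK's actual supervised loss may differ:

```python
import numpy as np

def endpoint_error(pred_flow, gt_flow):
    """Average endpoint error (AEPE) between predicted and ground-truth flow.

    Both arrays have shape (..., 2): the last axis holds the (dx, dy) components.
    Illustrative only; not the SDK's implementation.
    """
    diff = pred_flow - gt_flow
    epe = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel Euclidean error
    return float(epe.mean())
```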

How to start

To run the script:

Linux:

python3 train_flow.py /path/to/output /path/to/dataset

Windows:

python train_flow.py /path/to/output /path/to/dataset


Use the optional --feature-extractor argument to select a specific Unet architecture. Four Unet variants are provided:

  • eminet (default): Unet Regression Model with depth-wise separable recurrent convolution for minimal footprint

  • eminet_non_sep: eminet without depth-wise separable convolution

  • midinet: Unet Regression Model with Squeeze excitation layers

  • midinet2: midinet with a fine-tuned middle block (convRNN with tunable depth + residual connection)
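
The "minimal footprint" of eminet comes from depth-wise separable convolutions, which split a standard convolution into a per-channel spatial convolution followed by a 1x1 point-wise convolution. The parameter arithmetic below (illustrative, not SDK code) shows the saving for a typical layer:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k 2D convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Weight count of a depth-wise separable convolution:
    one k x k depth-wise filter per input channel, then a 1x1 point-wise conv."""
    return c_in * k * k + c_in * c_out

# Example: a 3x3 layer mapping 64 -> 128 channels.
standard = conv_params(64, 128, 3)             # 73728 weights
separable = separable_conv_params(64, 128, 3)  # 8768 weights
```

For this layer the separable form uses roughly 8x fewer weights, which is the trade-off eminet_non_sep reverses: more parameters and capacity in exchange for a larger footprint.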

To find the full list of options, run:

Linux:

python3 train_flow.py -h

Windows:

python train_flow.py -h


If you hit a "CUDA out of memory" error, try reducing the batch-size, num-tbins, or height-width parameters.
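
These three parameters help because activation memory scales linearly with each of them. The back-of-envelope estimate below (an illustrative sketch assuming float32 activations and a hypothetical channel count, not SDK code) shows the scaling:

```python
def activation_bytes(batch_size, num_tbins, channels, height, width, bytes_per_el=4):
    """Rough size of one float32 activation tensor of shape
    (batch, tbins, channels, H, W). Real GPU usage is a multiple of this
    across layers, but it scales the same way with each parameter."""
    return batch_size * num_tbins * channels * height * width * bytes_per_el

# Hypothetical settings: batch of 4, 10 time bins, 32 channels, 240x320 input.
full = activation_bytes(4, 10, 32, 240, 320)
half = activation_bytes(2, 10, 32, 240, 320)  # halving batch-size halves memory
```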