Export trained PyTorch detection model to TorchScript model

This Python script exports a trained PyTorch detection model to a TorchScript model that can be easily deployed in various runtime environments, with optimized latency and throughput.
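For context, the conversion itself relies on standard TorchScript tooling. Below is a minimal, self-contained sketch of the general idea; it does not reproduce the sample's internals, and TinyDetector is a hypothetical stand-in for the trained detection network:

import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Hypothetical stand-in for the trained detection network."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.head = nn.Conv2d(8, 4, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(torch.relu(self.backbone(x)))

model = TinyDetector().eval()
scripted = torch.jit.script(model)   # compile the model to TorchScript
scripted.save("model.ptjit")         # serialize architecture and weights to disk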

The source code of this sample can be found in <install-prefix>/share/metavision/sdk/ml/python_samples/export_detector when installing Metavision SDK from the installer or packages. For other deployment methods, check the page Path of Samples.

Expected Output

Upon successful completion, the script generates a compiled TorchScript model optimized for runtime deployment.

The specific output files are listed below (a sketch of how they might be loaded follows the list):

  • model.ptjit: the compiled TorchScript model file, containing the trained network architecture and weights, ready for deployment

  • info_ssd_jit.json: a JSON file containing the hyperparameters and configurations used during training
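The sketch below illustrates how these two files might be consumed at deployment time. It is a minimal, generic example rather than Metavision-specific code: the input tensor shape is hypothetical and must match the event-cube preprocessing the model was trained with.

import json
import torch

# Load the compiled TorchScript model (no Python model definition is needed).
model = torch.jit.load("model.ptjit")
model.eval()

# Read back the training hyperparameters and configuration.
with open("info_ssd_jit.json") as f:
    info = json.load(f)
print(info)

# Run a forward pass on a dummy input; the shape (batch, channels, height, width)
# is hypothetical and must match the preprocessing used during training.
dummy = torch.zeros(1, 6, 240, 320)
with torch.no_grad():
    detections = model(dummy)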

Setup & requirements

You will need to provide the following inputs (a quick way to inspect the checkpoint is sketched after this list):

  • path to the checkpoint. You can use red_event_cube_all_classes.ckpt from our pre-trained models

  • path to the output directory
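Before exporting, it can help to sanity-check the checkpoint file. This is a minimal sketch assuming a standard PyTorch checkpoint; the exact keys (e.g. 'state_dict') depend on how the checkpoint was saved:

import torch

# Load the checkpoint on CPU and list its top-level entries.
# On recent PyTorch versions you may need to pass weights_only=False
# if the checkpoint contains pickled non-tensor objects.
ckpt = torch.load("red_event_cube_all_classes.ckpt", map_location="cpu")
print(list(ckpt.keys()))   # typically includes 'state_dict' and training metadata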

How to start

To run the script with red_event_cube_all_classes.ckpt:

python export_detector.py red_event_cube_all_classes.ckpt /path/to/output

You can also verify the performance of the trained checkpoint directly by testing it on an event-based recording. For example, to use driving_sample.raw from our Sample Recordings as a verification sequence:

python export_detector.py red_event_cube_all_classes.ckpt /path/to/output --verification_sequence driving_sample.raw

To find the full list of options, run:

python export_detector.py -h