Export a trained PyTorch detection model to TorchScript
This Python script exports a trained PyTorch detection model to a TorchScript model that can be easily deployed in various runtime environments, with optimized latency and throughput.
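For context, exporting a PyTorch model to TorchScript is generally done with torch.jit.trace or torch.jit.script. The sketch below is not this sample's implementation; it only illustrates the general mechanism, using a placeholder model and an illustrative input shape:

import torch

# Illustrative only: `trained_model` and the input shape are placeholders,
# not the actual detection model exported by this sample.
trained_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
trained_model.eval()

# Trace the model with an example input to produce a TorchScript module...
example_input = torch.zeros(1, 3, 64, 64)
scripted = torch.jit.trace(trained_model, example_input)

# ...which can then be saved and later reloaded without the Python class definitions.
scripted.save("model.ptjit")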
The source code of this sample can be found in <install-prefix>/share/metavision/sdk/ml/python_samples/export_detector
when the Metavision SDK is installed from the installer or packages. For other deployment methods, check the page
Path of Samples.
Expected Output
A compiled TorchScript model that can be easily deployed at runtime. Specifically, the script outputs:
model.ptjit (the model)
info_ssd_jit.json (the hyperparameters used during training)
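These two files can then be consumed together by a deployment script. Below is a minimal sketch, assuming both files sit in the current directory; the exact keys stored in the JSON file depend on the training configuration and are not detailed here:

import json
import torch

# Load the exported TorchScript model (no Python model definition required).
model = torch.jit.load("model.ptjit", map_location="cpu")
model.eval()

# Load the training hyperparameters saved alongside the model.
with open("info_ssd_jit.json") as f:
    training_info = json.load(f)

print(training_info)  # exact contents depend on the training configuration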
Setup & requirements
You will need to provide the following inputs:
path to the checkpoint (you can use red_event_cube_all_classes.ckpt from our pre-trained models)
path to the output directory
How to start
To run the script with red_event_cube_all_classes.ckpt:
python export_detector.py red_event_cube_all_classes.ckpt /path/to/output
You can also verify the performance of the trained checkpoint directly by testing it on an event-based recording.
For example, to use driving_sample.raw as a verification sequence:
python export_detector.py red_event_cube_all_classes.ckpt /path/to/output --verification_sequence driving_sample.raw
To find the full list of options, run:
python export_detector.py -h