This C++ application is available as a pre-compiled binary (ready to execute) with all Metavision Intelligence Plans, and as source code with our Professional plan only. The corresponding Python sample is available with all Metavision Intelligence Plans.
Inference Pipeline of Detection and Tracking using C++
This application allows you to quickly set up an inference pipeline for object detection and tracking. You can use our pre-trained PyTorch JIT model for detecting and tracking vehicles and pedestrians. Check our pre-trained models page to find out how to retrieve the model depending on your Metavision Intelligence Plan.
Note that the network was trained on a dataset recorded by a camera positioned on top of a car facing forward. Its performance might be quite degraded in other settings.
The pipeline takes events as input and outputs detected objects with bounding boxes and their corresponding confidence level.
The detected and tracked bounding boxes are shown in two windows side by side: detection is shown in the left pane, with colors indicating class membership; tracking is drawn in the right pane, with colors indicating the track ID and confidence level.
Setup & requirements
To run the application, you will need:
How to start
You can use the pre-compiled executable or compile from the source code.
To run the pre-compiled application, you need to reference the folder LIBTORCH_DIR_PATH in which you installed libtorch, as described during installation (if you are using Metavision Essentials, check the page "Additional Dependencies to install for ML"; if you subscribed to our Professional plan, follow the compilation instructions for Linux or for Windows).
To start the application on recorded data, provide the full path to a RAW file and the path to the pre-trained model:
Linux:

LD_LIBRARY_PATH=<LIBTORCH_DIR_PATH>:$LD_LIBRARY_PATH metavision_detection_and_tracking_pipeline --record-file <RAW file to process> --object-detector-dir /path/to/model --display
Windows:

PATH=<LIBTORCH_DIR_PATH>;%PATH% && metavision_detection_and_tracking_pipeline.exe --record-file <RAW file to process> --object-detector-dir /path/to/model --display
The application comes with extensive functionalities covering the following aspects:
Input: Define the input source, sampling period, and start and end timestamps
--object-detector-dir: path to a folder containing a torchjit model (model.ptjit) and an info_ssd_jit.json file containing a few hyperparameters.
Output: Produce an inference video (.avi), and export detected and tracked bounding boxes (in CSV format)
If --output-video-filename is set, the corresponding file is created.
If --output-detections-filename is set, the corresponding file is created. It contains the output boxes of the object detector (the neural network). The format is a CSV with one detection box per line, each line containing the following fields (separated by spaces):
timestamp, class_id, 0, x, y, width, height, class_confidence
If --output-tracks-filename is set, the corresponding file is created. It contains the output boxes of the tracking. The format is a CSV with one tracked box per line, each line containing the following fields (separated by commas):
timestamp, class_id, track_id, x, y, width, height, class_confidence, tracking_confidence, last_detection_update_time, nb_detections
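The two export formats above differ in their separator (spaces for detections, commas for tracked boxes) and field set. As a minimal, non-authoritative sketch of post-processing them (the sample lines below are hypothetical, not real pipeline output):

```python
import io
from collections import defaultdict

# Hypothetical sample lines mimicking the two output formats described above.
detections_sample = "100000 1 0 320.0 240.0 64.0 48.0 0.92\n"  # space-separated
tracks_sample = (  # comma-separated
    "100000,1,7,320.0,240.0,64.0,48.0,0.92,0.88,100000,3\n"
    "133333,1,7,322.0,241.0,64.0,48.0,0.93,0.90,133333,4\n"
    "133333,2,8,100.0,80.0,30.0,60.0,0.75,0.70,133333,1\n"
)

def parse_detections(stream):
    """Yield (timestamp, class_id, x, y, width, height, class_confidence)."""
    for line in stream:
        p = line.split()  # detection fields are separated by spaces
        if len(p) == 8:
            yield (int(p[0]), int(p[1]), *map(float, p[3:8]))

def group_tracks(stream):
    """Group tracked boxes by track_id, keeping (timestamp, x, y) per box."""
    tracks = defaultdict(list)
    for line in stream:
        p = line.strip().split(",")  # tracked fields are separated by commas
        if len(p) == 11:
            tracks[int(p[2])].append((int(p[0]), float(p[3]), float(p[4])))
    return dict(tracks)

dets = list(parse_detections(io.StringIO(detections_sample)))
tracks = group_tracks(io.StringIO(tracks_sample))
print(len(dets), sorted(tracks), len(tracks[7]))
```

Grouping by track_id yields one trajectory per tracked object, which is convenient for plotting or evaluation.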
Object Detector: Define the pre-trained detection model and its calibrated hyperparameters; Set up inference thresholds for detection confidence and NMS-IoU level
Geometric Preprocessing: Apply geometric preprocessing to the event stream: transpose the input, filter out events outside a ROI
Noise Filtering: Trail and STC filters
Data Association: Define matching thresholds for tracking confidence and NMS-IoU level, together with other association parameters
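Both the object detector and the data association stage rely on an NMS-IoU threshold: two boxes whose Intersection-over-Union exceeds the threshold are treated as the same object. This is not the application's implementation, only a minimal sketch of the IoU measure those thresholds are compared against, assuming (x, y, width, height) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x, y, width, height) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis (clamped at zero when boxes are disjoint)
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Two boxes shifted by half their width: IoU = 50 / (100 + 100 - 50) = 1/3
print(round(iou((0, 0, 10, 10), (5, 0, 10, 10)), 4))
```

A higher threshold keeps more overlapping boxes as distinct objects; a lower one merges them more aggressively.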
To find the full list of options, run:
Linux:

LD_LIBRARY_PATH=<LIBTORCH_DIR_PATH>:$LD_LIBRARY_PATH metavision_detection_and_tracking_pipeline -h
Windows:

PATH=<LIBTORCH_DIR_PATH>;%PATH% && metavision_detection_and_tracking_pipeline.exe -h