Note
This C++ sample has a corresponding Python sample.
Inference Pipeline of Detection and Tracking using C++
This sample allows you to quickly set up an inference pipeline for object detection and tracking. You can use our pre-trained TorchScript model for detection and tracking of vehicles and pedestrians. Check our pre-trained models page to find out how to retrieve the model depending on your Metavision SDK Package.
Note
The network was trained on a dataset recorded by a camera positioned on top of a car, facing forward. Its performance might be significantly degraded in other settings.
The source code of this sample can be found in <install-prefix>/share/metavision/sdk/ml/cpp_samples/detection_and_tracking_pipeline
when installing Metavision SDK from installer or packages. For other deployment methods, check the page
Path of Samples.
Expected Output
The pipeline takes events as input and outputs detected objects with bounding boxes and their corresponding confidence level.
The detected and tracked bounding boxes are shown in two panes set side by side: the detections are shown on the left pane, with colors indicating class membership; the tracks are drawn on the right pane, with colors indicating track ID and confidence level.
Setup & requirements
To run the sample, you will need:
a pre-trained TorchScript model with a JSON file of hyperparameters. Check our pre-trained models page
an event-based camera or an event file (RAW, DAT or HDF5). We suggest you start with driving_sample.raw, downloadable from our Sample Recordings
How to start
First, you need to compile the sample. You should use the compilation guide as a baseline,
but the cmake ..
step needs to be customized to properly reference libtorch. Here, we assume you followed the
Machine Learning Module Dependencies section of the installation guide, which requires deploying libtorch
in a LIBTORCH_DIR_PATH directory. If so, use the following cmake
commands to compile:
cmake .. -DCMAKE_PREFIX_PATH=<LIBTORCH_DIR_PATH> -DTorch_DIR=<LIBTORCH_DIR_PATH> -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release
For example, on Windows, if libtorch was installed in C:\libtorch
(where you should have, among others, the folders C:\libtorch\cmake and C:\libtorch\lib),
then the compilation steps are:
cmake .. -DCMAKE_PREFIX_PATH=C:\libtorch -DTorch_DIR=C:\libtorch -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release
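Before running the full pipeline, you may want to check, independently of the sample, that libtorch is correctly deployed and that the downloaded TorchScript model can be deserialized. The following standalone program is only a minimal sketch (it is not part of the SDK; it simply uses the standard libtorch torch::jit::load API) and can be compiled with the same -DCMAKE_PREFIX_PATH and -DTorch_DIR settings as above:

#include <torch/script.h>
#include <iostream>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        std::cerr << "Usage: " << argv[0] << " /path/to/model/model.ptjit" << std::endl;
        return 1;
    }
    try {
        // Deserialize the TorchScript module exported during training
        torch::jit::script::Module module = torch::jit::load(argv[1]);
        module.eval(); // switch to inference mode
        std::cout << "TorchScript model loaded successfully" << std::endl;
    } catch (const c10::Error &e) {
        std::cerr << "Failed to load the model: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}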
To start the sample based on recorded data, you need to provide the full path to an event file and the path to the pre-trained model:
Linux
metavision_detection_and_tracking_pipeline --record-file <event file to process> --object-detector-dir /path/to/model --display
Windows
metavision_detection_and_tracking_pipeline.exe --record-file <event file to process> --object-detector-dir /path/to/model --display
The sample comes with extensive functionalities covering the following aspects:
Input: Define the input source, sampling period, start and end timestamps
--object-detector-dir
: path to a folder containing a model.ptjit
TorchScript (torch.jit) model and an info_ssd_jit.json
file containing a few hyperparameters.
Output: Produce an inference video (.avi), export detected and tracked bounding boxes (in CSV format)
if
--output-video-filename
is set, the corresponding video file is created.
if
--output-detections-filename
is set, the corresponding file is created. It contains the output boxes of the object detector (the neural network). The format is a CSV with one detection box per line, each line containing the following fields (separated by spaces): timestamp, class_id, 0, x, y, width, height, class_confidence
if
--output-tracks-filename
is set, the corresponding file is created. It contains the output boxes of the tracking. The format is a CSV with one tracked box per line, each line containing the following fields (separated by commas): timestamp, class_id, track_id, x, y, width, height, class_confidence, tracking_confidence, last_detection_update_time, nb_detections (a parsing sketch is given after this list)
Object Detector: Define the pre-trained detection model and its calibrated hyperparameters; Set up inference thresholds for detection confidence and NMS-IoU level
Geometric Preprocessing: Apply geometric preprocessing to the event stream: input transposition, filtering of events outside a RoI
Noise Filtering: Trail and STC filters
Data Association: Define matching thresholds for tracking confidence and NMS-IoU level, together with other association parameters
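For reference, here is a minimal sketch showing how the file produced with --output-tracks-filename could be parsed downstream, following the comma-separated field order documented above (the numeric type chosen for each field is an assumption for illustration):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// One tracked box per CSV line, following the documented field order
struct TrackedBox {
    long long timestamp;
    int class_id;
    int track_id;
    float x, y, width, height;
    float class_confidence;
    float tracking_confidence;
    long long last_detection_update_time;
    int nb_detections;
};

int main(int argc, char *argv[]) {
    if (argc < 2) {
        std::cerr << "Usage: " << argv[0] << " tracks.csv" << std::endl;
        return 1;
    }
    std::ifstream in(argv[1]);
    std::vector<TrackedBox> boxes;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty())
            continue;
        // Split the line on commas
        std::istringstream ss(line);
        std::vector<std::string> fields;
        std::string field;
        while (std::getline(ss, field, ','))
            fields.push_back(field);
        if (fields.size() < 11)
            continue; // skip malformed lines
        TrackedBox b;
        b.timestamp                  = std::stoll(fields[0]);
        b.class_id                   = std::stoi(fields[1]);
        b.track_id                   = std::stoi(fields[2]);
        b.x                          = std::stof(fields[3]);
        b.y                          = std::stof(fields[4]);
        b.width                      = std::stof(fields[5]);
        b.height                     = std::stof(fields[6]);
        b.class_confidence           = std::stof(fields[7]);
        b.tracking_confidence        = std::stof(fields[8]);
        b.last_detection_update_time = std::stoll(fields[9]);
        b.nb_detections              = std::stoi(fields[10]);
        boxes.push_back(b);
    }
    std::cout << "Read " << boxes.size() << " tracked boxes" << std::endl;
    return 0;
}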
To find the full list of options, run:
Linux
metavision_detection_and_tracking_pipeline -h
Windows
metavision_detection_and_tracking_pipeline.exe -h