Note
This Python sample is available with all Metavision Intelligence Plans. The corresponding C++ sample is available only with our Professional plan.
Tracking using Python
The Python bindings of Metavision SDK Analytics provide two algorithms for object tracking:
metavision_sdk_analytics.TrackingAlgorithm
which is a generic algorithm that tracks any moving object (by default) and can be tuned for more specific applications
metavision_sdk_analytics.SpatterTrackerAlgorithm
which is a lighter implementation that tracks simple, non-colliding moving objects
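As a rough sketch of how these classes might be instantiated (the constructor keyword names, the TrackingConfig usage and the sensor geometry below are assumptions to be checked against the API reference and the samples):

from metavision_sdk_analytics import TrackingAlgorithm, TrackingConfig, SpatterTrackerAlgorithm

# Sensor geometry; in the samples it is obtained from the events iterator (placeholder values here)
width, height = 640, 480

# Generic tracker, tunable through a TrackingConfig (keyword names are assumptions)
tracking_config = TrackingConfig()
tracking_algo = TrackingAlgorithm(sensor_width=width, sensor_height=height,
                                  tracking_config=tracking_config)

# SpatterTrackerAlgorithm is constructed analogously from the sensor geometry
# plus its own tuning parameters (see the spatter tracking sample)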
Each algorithm has a corresponding sample showing how to use it:
metavision_spatter_tracking.py
metavision_generic_tracking.py
Expected Output
Metavision Tracking samples visualize events and draw bounding boxes around the tracked objects, with the ID of each tracked object shown next to its bounding box:
Example of running the Metavision Generic Tracking sample on the dataset file:
Example of running the Metavision Spatter Tracking sample on the dataset file:
Setup & requirements
By default, Metavision Tracking looks for objects of at least 10x10 pixels in size.
How to start
Here, we take the Metavision Generic Tracking sample as an example; the Metavision Spatter Tracking sample runs in a similar way.
To start the sample based on the live stream from your camera, run:
Linux
python3 metavision_generic_tracking.py
Windows
python metavision_generic_tracking.py
To start the sample based on recorded data, provide the full path to a RAW file (here, we use a file from our Sample Recordings):
Linux
python3 metavision_generic_tracking.py -i traffic_monitoring.raw
Windows
python metavision_generic_tracking.py -i traffic_monitoring.raw
To check for additional options:
Linux
python3 metavision_generic_tracking.py -h
Windows
python metavision_generic_tracking.py -h
Code Overview
Pipeline
Both samples implement the same pipeline:

Tracking Algorithm
The tracking algorithms consume CD events and produce tracking results (i.e. metavision_sdk_analytics.EventSpatterClusterBuffer or metavision_sdk_analytics.EventTrackingDataBuffer). Those tracking results contain the bounding boxes with unique IDs.
These algorithms are implemented in an asynchronous way and process time slices of fixed duration. This means that, depending on the duration of the input time slices of events, the algorithms might produce 0, 1 or N buffer(s) of tracking results.
Like any other asynchronous algorithm, we need to specify the callback that will be called to retrieve the tracking results when a time slice has been processed:
spatter_tracker.set_output_callback(spatter_tracking_cb)
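For illustration, a minimal callback could simply report how many objects are tracked in each processed time slice; the (ts, clusters) signature and the clusters.numpy() conversion are the same as in the sample callback shown below, while the callback body here is just a sketch:

# Minimal sketch of an output callback: called by the tracker each time
# a time slice has been processed.
def minimal_tracking_cb(ts, clusters):
    clusters_np = clusters.numpy()  # structured array with id, x, y, width, height fields
    print("ts = {} us, tracked objects: {}".format(ts, len(clusters_np)))

spatter_tracker.set_output_callback(minimal_tracking_cb)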
Frame Generation
At this step, we generate an image where the bounding boxes and IDs of the tracked objects are displayed on top of the
events. For that purpose, we rely on the metavision_sdk_core.OnDemandFrameGenerationAlgorithm
class. This
class allows us to buffer the input events (i.e. metavision_sdk_core.OnDemandFrameGenerationAlgorithm.process_events()
) and to generate the image on demand (i.e.
metavision_sdk_core.OnDemandFrameGenerationAlgorithm.generate()
). After the event image is generated, the
bounding boxes and IDs are rendered using the metavision_sdk_analytics.draw_tracking_results()
function.
As the output images are generated at the same frequency as the buffers of tracking results produced by the tracking algorithm, the image generation is done in the tracking algorithm’s output callbacks:
# Output callback of the spatter tracking algorithm
def spatter_tracking_cb(ts, clusters):
    clusters_np = clusters.numpy()
    for cluster in clusters_np:
        log.append([ts, cluster['id'], int(cluster['x']), int(cluster['y']), int(cluster['width']),
                    int(cluster['height'])])
    events_frame_gen_algo.generate(ts, output_img)
    draw_tracking_results(ts, clusters, output_img)
    window.show_async(output_img)
    if args.out_video:
        video_writer.write(output_img)
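The callback above assumes that the frame generator and the output image were created beforehand (the video_writer setup is not shown here). A possible setup is sketched below; the constructor arguments of OnDemandFrameGenerationAlgorithm (width, height and an accumulation time in microseconds) are assumptions to be checked against the API reference:

import numpy as np
from metavision_sdk_core import OnDemandFrameGenerationAlgorithm

# Sensor geometry; in the samples it comes from the events iterator (placeholder values here)
width, height = 640, 480

# Image buffer reused by generate(); 3 channels for a BGR display
output_img = np.zeros((height, width, 3), np.uint8)

# Frame generator buffering the input events; the third argument (accumulation
# time in us) is an assumption, check the API reference
events_frame_gen_algo = OnDemandFrameGenerationAlgorithm(width, height, 10000)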
while the buffering of the events is done in the main loop, together with the tracking processing:
# Process events
for evs in mv_iterator:
    # Dispatch system events to the window
    EventLoop.poll_and_dispatch()

    # Process events
    events_frame_gen_algo.process_events(evs)
    spatter_tracker.process_events(evs)

    if window.should_close():
        break
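In the loop above, mv_iterator yields successive buffers of CD events. In the samples it is an EventsIterator from metavision_core.event_io, created either from a live camera or from a RAW file; a minimal sketch (the delta_t value is an arbitrary choice):

from metavision_core.event_io import EventsIterator

# An empty input path opens a live camera; a RAW file path replays a recording
mv_iterator = EventsIterator(input_path="traffic_monitoring.raw", delta_t=1000)
height, width = mv_iterator.get_size()  # sensor geometry used by the algorithms above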
Note
Different approaches could be considered for more advanced applications.
Display
Finally, the generated frame is displayed on the screen. The following image shows an example of the output:

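The window used by show_async() in the output callback comes from the Metavision SDK UI Python bindings. A minimal sketch of its creation, assuming an MTWindow with a BGR render mode (names and arguments to be checked against the metavision_sdk_ui API reference):

from metavision_sdk_ui import MTWindow, BaseWindow

# Sensor geometry, obtained from the events iterator in the samples (placeholder values here)
width, height = 640, 480

# Multi-threaded window displaying the BGR frames produced in the output callback
with MTWindow(title="Metavision Tracking", width=width, height=height,
              mode=BaseWindow.RenderMode.BGR) as window:
    # ... create the algorithms, register the output callback and run the
    # processing loop shown above; the callback calls window.show_async() ...
    pass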