SDK ML Components

template<typename Event, typename DetectionBox, typename Tracklet>
class Metavision::DataAssociation

Module that matches detections and builds tracklets.

Public Types

using EventCallback = std::function<void(const Event*, const Event*)>

Function type handling events.

using EndSliceCallback = std::function<void(timestamp)>

Function providing clock ticks.

Public Functions

inline DataAssociation(float detection_merge_weight = 0.7f, timestamp deletion_time = 100000, float max_iou_inter_track = 0.5f, float iou_to_match_a_detection = 0.2f, float max_iou_for_one_det_to_many_tracks = 0.5f, bool use_descriptor = false, int detection_threshold = 1, int width = 640, int height = 480, timestamp time_surface_delta_t = 200000, bool update_tracklets_between_detections = true)

Creates a DataAssociation object.

Parameters
  • detection_merge_weight – Weight to merge a tracklet and a detection. Takes a float value in range [0; 1] (0 means use only tracklet box, 1 means use only detection box)

  • deletion_time – Time before deleting a tracklet no longer supported by new detections

  • max_iou_inter_track – Maximum IOU inter tracklet before deleting the least recently updated one

  • iou_to_match_a_detection – Minimum IOU to match a detection

  • max_iou_for_one_det_to_many_tracks – High IOU threshold above which a detection is ignored (skipped) if it is matched with multiple tracks

  • use_descriptor – Boolean to enable the use of a descriptor

  • detection_threshold – Number of consecutive detections to create a new track

  • width – Sensor’s width

  • height – Sensor’s height

  • time_surface_delta_t – Delta time for the timesurface

  • update_tracklets_between_detections – Boolean to determine whether tracklets are updated between detections; if false, they are updated only when new detections are received
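
Example (illustrative sketch, not taken verbatim from the SDK: the Event, DetectionBox and Tracklet instantiation types and the chosen parameter values are assumptions, and the relevant Metavision SDK ML headers are assumed to be included):

    // Build a DataAssociation for a 640x480 sensor; the remaining parameters
    // keep their default values. EventCD, EventBbox and EventTrackedBox are
    // assumed instantiation types for this sketch.
    Metavision::DataAssociation<Metavision::EventCD, Metavision::EventBbox,
                                Metavision::EventTrackedBox>
        data_association(0.7f,      // detection_merge_weight
                         200000,    // deletion_time (us)
                         0.5f,      // max_iou_inter_track
                         0.2f,      // iou_to_match_a_detection
                         0.5f,      // max_iou_for_one_det_to_many_tracks
                         false,     // use_descriptor
                         1,         // detection_threshold
                         640, 480); // sensor width / height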

inline void done()

Ends the internal thread.

inline ~DataAssociation()

Destructor.

inline void receive_events(const Event *start_ev, const Event *end_ev)

Callback called on event reception.

Parameters
  • start_ev – Pointer to the first event

  • end_ev – Pointer to the last event

inline void receive_boxes(const DetectionBox *start_box, const DetectionBox *end_box, timestamp ts, bool is_valid)

Callback called on box reception.

Parameters
  • start_box – First box pointer

  • end_box – Last box pointer

  • ts – Timestamp of boxes

  • is_valid – True if boxes have been computed

inline void receive_end_event_cb(timestamp ts)

Callback called on clock ticks.

Parameters

ts – Current timestamp

inline EventCallback get_event_callback()

Returns the callback to be called on event reception.

Returns

Function of type EventCallback

inline BoxCallback get_box_callback()

Returns the callback called to process boxes.

Returns

Function to receive the boxes

inline EndSliceCallback get_timestamp_callback()

Returns the callback to be called periodically to update the output.

Warning

All events up to this timestamp should already have been received

Returns

Function of type EndSliceCallback

inline void add_tracklet_consumer_cb(TrackletCallback cb)

Sets a callback that is called when tracklets are computed.

Parameters

cb – The callback to be called

inline void disable_update_tracklets_positions_between_detections()

Disables the update of track positions between detections.

template<typename Event>
class Metavision::DetectionAndTrackingDisplay

Component which generates the display of detections and tracks.

Public Functions

inline DetectionAndTrackingDisplay(int width, int height, timestamp pipeline_delta_t, int fps, const std::string output_video_filename = "", bool display_window = true)

Constructs a frame builder component for the detection and tracking pipeline.

Parameters
  • width – Sensor’s width

  • height – Sensor’s height

  • pipeline_delta_t – Temporal period of the pipeline

  • fps – Number of frames per second

  • output_video_filename – Filename of the output video

  • display_window – Boolean to display the frame (default is true)
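
Example (illustrative sketch; the event type, parameter values, file name and labels are assumptions):

    // Display component for a 640x480 sensor, 10 ms pipeline period, 25 fps,
    // recording to a video file while also opening a display window.
    Metavision::DetectionAndTrackingDisplay<Metavision::EventCD> display(
        640, 480,
        10000,     // pipeline_delta_t (us)
        25,        // fps
        "out.avi", // output_video_filename (placeholder)
        true);     // display_window
    display.set_detector_labels({"background", "pedestrian", "car"}); // hypothetical labels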

inline void set_ui_keys(KeyBindCallback keybind_func)

Sets callback to handle keyboard events.

Parameters

keybind_func – Function to be called when a key is pressed

inline void set_detector_labels(const std::vector<std::string> &labels)

Sets the names of the detection classes.

Parameters

labels – Array of names per class identifier

inline EventCallback get_event_callback()

Returns a function to generate the display from the events.

Returns

Function to generate a display from the events

inline EndEventCallback get_timestamp_callback()

Returns callback to be called at the pipeline frequency.

Returns

Function to be called when time progresses

inline EventBoxConsumerCallback get_box_callback()

Returns function to display generated boxes.

Returns

Function to be called on boxes

inline EventTrackletConsumerCallback get_track_callback()

Returns function to display generated tracks.

Returns

Function to display tracks

class Metavision::EventProviderBase

Wraps Metavision::Device to generate a generic event producer.

Subclassed by Metavision::EventProviderDat, Metavision::EventProviderRaw

Public Functions

inline void set_callback(EventCallback cb)

Configures the device callback.

Parameters

cb – Function to be called on received events

virtual int get_width() = 0

Returns the sensor’s width.

Returns

Sensor’s width

virtual int get_height() = 0

Returns the sensor’s height.

Returns

Sensor’s height

virtual void start() = 0

Starts streaming events.

inline virtual bool set_event_rate_limit(uint32_t ev_rate)

Sets event rate limit.

Returns

true on success

inline void stop()

Stops streaming events.

inline bool is_done()

Checks if the camera is stopped.

Returns

true if the camera is stopped, false otherwise

inline void set_start_ts(timestamp start_ts)

Sets the timestamp after which the callback should be called.

Parameters

start_ts – Timestamp of the first useful event

inline void set_end_ts(timestamp end_ts)

Sets the timestamp of the last considered event.

Parameters

end_ts – Timestamp of the last useful event

inline timestamp get_start_ts() const

Gets the first considered timestamp.

Returns

first considered timestamp

class Metavision::EventProviderRaw : public Metavision::EventProviderBase

Implements the EventProviderBase abstraction for RAW files and physical devices.

Public Functions

EventProviderRaw(const std::string filename = "")

Creates a virtual camera.

Parameters

filename – If empty (default value), opens the first available camera. Otherwise loads the corresponding RAW file
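
Example (illustrative sketch; the file path is a placeholder, and since the exact EventCallback signature is not shown in this section, the lambda below assumes the (begin, end) pointer form used by the other event callbacks of this module):

    // Stream events from a RAW file, restricted to the [100 ms; 5 s] window.
    Metavision::EventProviderRaw provider("recording.raw"); // placeholder path
    provider.set_start_ts(100000);
    provider.set_end_ts(5000000);
    provider.set_callback([](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        // Forward the received events to the rest of the pipeline here.
    });
    provider.start();
    while (!provider.is_done()) {
        // Wait until the whole file has been streamed
        // (or call provider.stop() to interrupt it earlier).
    }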

inline virtual int get_width()

Returns the sensor’s width.

Returns

Sensor’s width

inline virtual int get_height()

Returns the sensor’s height.

Returns

Sensor’s height

virtual void start()

Starts the camera / the processing.

inline virtual bool set_event_rate_limit(uint32_t ev_rate)

Sets event rate limit.

Returns

true on success

class Metavision::EventProviderDat : public Metavision::EventProviderBase

Implements the EventProviderBase abstraction for DAT files.

Public Functions

inline EventProviderDat(const std::string filename)

Constructs an event provider able to read DAT files.

Parameters

filename – Input DAT file name

inline virtual int get_width()

Returns the sensor’s width.

Returns

Sensor’s width

inline virtual int get_height()

Returns the sensor’s height.

Returns

Sensor’s height

virtual void start()

Starts streaming events.

template<typename Event>
class Metavision::ObjectDetectorBaseT

Subclassed by Metavision::ObjectDetectorT< Event >

Public Types

using EventCallback = std::function<void(const Event*, const Event*)>

Function type handling events.

using EndSliceCallback = std::function<void(timestamp)>

Function type indicating that time has advanced.

using EventBoxConsumerCallback = std::function<void(const EventBbox *begin, const EventBbox *end, timestamp ts, bool is_valid)>

Function type handling EventBbox events.

This is the signature expected for client callbacks of this class
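
For illustration, a client callback matching this type could look like the following sketch (the printout is arbitrary):

    #include <iostream>

    // Box consumer printing the number of boxes produced for each slice.
    auto box_consumer = [](const Metavision::EventBbox *begin, const Metavision::EventBbox *end,
                           Metavision::timestamp ts, bool is_valid) {
        if (is_valid)
            std::cout << "t=" << ts << " us: " << (end - begin) << " boxes" << std::endl;
    };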

Public Functions

virtual void done() = 0

Ends the processing of events (no more events)

virtual void add_box_consumer_callback(EventBoxConsumerCallback new_box_consumer_callback) = 0

Registers a new client callback.

Parameters

new_box_consumer_callback – Function to be called on box generation

virtual EventCallback get_event_callback() = 0

Returns function to be called on received events to ease lambda function creation.

Returns

Function of type EventCallback

virtual EndSliceCallback get_timestamp_callback() = 0

Returns the function to be called periodically to update the output.

Note

All events up to this timestamp should already have been received

Returns

Function of type EndSliceCallback to update the current timestamp

virtual const std::vector<std::string> &get_labels() const = 0

Gets the labels from the model.

Returns

Vector of string with the label names

virtual timestamp get_accumulation_time() const = 0

Gets object detector’s accumulation time.

Returns

Accumulation time between two frames

virtual void set_start_ts(timestamp ts) = 0

Initializes the internal timestamp of the object detector.

This is needed in order to use the start_ts parameter in the pipeline to start at a ts > 0

Note

Events are not discarded

Parameters

ts – Timestamp of the first considered event

virtual void set_detection_threshold(float threshold) = 0

Updates current detection threshold instead of the default value read from the JSON file.

This is the lower bound on the confidence score for a detection box to be accepted. It takes values in the range ]0;1[. A low value yields more detections; a high value yields fewer detections.

Parameters

threshold – Lower bound on the detection confidence score

virtual void set_iou_threshold(float threshold) = 0

Updates current IOU threshold for NMS instead of the default value read from the JSON file.

Non-Maximum Suppression discards detection boxes which are too similar to each other, keeping only the best one of such a group. This similarity criterion is based on the Intersection-Over-Union measure between the considered boxes. This threshold is the upper bound on the IOU for two boxes to be considered distinct (and therefore not filtered out by the Non-Maximum Suppression). It takes values in the range ]0;1[. A low value yields fewer overlapping boxes; a high value yields more overlapping boxes.

Parameters

threshold – Upper bound on the IOU for two boxes to be considered distinct

template<typename Event>
class Metavision::ObjectDetectorT : public Metavision::ObjectDetectorBaseT<Event>

Generates boxes from events based on a machine learning kernel.

The box generation happens in several steps:

  • Generates frame from events

  • Runs detection kernel based on machine learning algorithm

  • Extracts the detection into boxes vector

In every case, a vector of boxes is generated at the end of every event slice. The kernel may be called at a lower frequency than the event slices

Public Types

using EventCallback = std::function<void(const Event*, const Event*)>

Function type handling events.

using EndSliceCallback = std::function<void(timestamp)>

Function type indicating that time has advanced.

using EventBoxConsumerCallback = std::function<void(const EventBbox *begin, const EventBbox *end, timestamp ts, bool is_valid)>

Function type handling EventBbox events.

This is the signature expected for client callbacks of this class

Public Functions

inline ObjectDetectorT(const std::string &directory, const std::string &runtime, int events_input_width, int events_input_height, int network_input_width, int network_input_height)

Creates an object detector component.

Parameters
  • directory – Folder containing the machine learning model

  • runtime – Target processor. Supported values: cpu, gpu, gpu:[0-9]

  • events_input_width – Sensor’s width

  • events_input_height – Sensor’s height

  • network_input_width – Network input frame’s width

  • network_input_height – Network input frame’s height
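
Example (illustrative sketch; the model directory is a placeholder, and the event type and sizes are assumptions):

    // Object detector loading a model from disk and running on the CPU.
    Metavision::ObjectDetectorT<Metavision::EventCD> detector(
        "/path/to/model_dir", // directory containing the machine learning model
        "cpu",                // runtime
        640, 480,             // events_input_width / events_input_height
        320, 240);            // network_input_width / network_input_height
    detector.set_detection_threshold(0.4f); // keep reasonably confident boxes only
    detector.set_iou_threshold(0.5f);       // NMS overlap threshold
    detector.add_box_consumer_callback(
        [](const Metavision::EventBbox *begin, const Metavision::EventBbox *end,
           Metavision::timestamp ts, bool is_valid) {
            // Consume the detected boxes here.
        });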

inline virtual void done() final override

Ends the processing of events (no more events)

inline virtual void add_box_consumer_callback(EventBoxConsumerCallback new_box_consumer_callback)

Registers an additional client callback.

Parameters

new_box_consumer_callback – function to be called on box generation

inline virtual EventCallback get_event_callback() final override

Returns function to be called on received events to ease lambda function creation.

Returns

Function of type EventCallback to insert new events

inline virtual EndSliceCallback get_timestamp_callback() final override

Returns the function to be called periodically to update the output.

Note

All events up to this timestamp should already have been received

Returns

Function of type EndSliceCallback to provide time modification

inline virtual const std::vector<std::string> &get_labels() const final override

Gets the labels from the model.

inline virtual timestamp get_accumulation_time() const final override

Gets object detector’s accumulation time.

inline virtual void set_start_ts(timestamp ts) final override

Initializes the internal timestamp of the object detector.

This is needed in order to use the start_ts parameter in the pipeline to start at a ts > 0

Parameters

ts – Time at which the first time slice starts

inline virtual void set_detection_threshold(float threshold) final override

Uses this detection threshold instead of the default value read from the JSON file.

This is the lower bound on the confidence score for a detection box to be accepted. It takes values in the range ]0;1[. A low value yields more detections; a high value yields fewer detections.

Parameters

threshold – Lower bound on the confidence score for the detection box to be considered

inline virtual void set_iou_threshold(float threshold) final override

Uses this IOU threshold for NMS instead of the default value read from the JSON file.

Non-Maximum Suppression discards detection boxes which are too similar to each other, keeping only the best one of such a group. This similarity criterion is based on the Intersection-Over-Union measure between the considered boxes. This threshold is the upper bound on the IOU for two boxes to be considered distinct (and therefore not filtered out by the Non-Maximum Suppression). It takes values in the range ]0;1[. A low value yields fewer overlapping boxes; a high value yields more overlapping boxes.

Parameters

threshold – Lower bound on the IOU for two boxes to be considered identical

template<typename EventType>
class Metavision::PreprocessingBase

Abstract class providing a generic method for processing events inside the slicer.

Subclassed by Metavision::GeometricPreprocessing< EventType >, Metavision::NoiseFilterPreprocessing< EventType, NoiseFilterType >

Public Types

typedef std::function<void(const EventType*, const EventType*, std::vector<EventType>&)> PreProcessingEvent

Function to transform events.
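
For illustration, a function matching this type could simply copy the input events unchanged (EventCD is an assumed event type):

    #include <vector>

    // Identity preprocessing: copy the input range into the output buffer.
    Metavision::PreprocessingBase<Metavision::EventCD>::PreProcessingEvent identity =
        [](const Metavision::EventCD *begin, const Metavision::EventCD *end,
           std::vector<Metavision::EventCD> &out) { out.assign(begin, end); };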

Public Functions

virtual PreProcessingEvent get_preprocessing_callback() = 0

Returns a function to preprocess events.

Returns

Function to be called on every event

template<typename EventType>
class Metavision::GeometricPreprocessing : public Metavision::PreprocessingBase<EventType>

Geometric event preprocessing.

This class allows transforming input events by applying simple geometric transformations such as flips and transposition.

Public Functions

inline GeometricPreprocessing(int width, int height)

Builds GeometricPreprocessing object.

Parameters
  • width – Sensor’s width

  • height – Sensor’s height

void use_transpose_flipxy(bool transpose = false, bool flip_x = false, bool flip_y = false)

Configures the preprocessing filter.

Parameters
  • transpose – If True, transposes events’ X and Y coordinates

  • flip_x – If True, moves the origin to the bottom of the image

  • flip_y – If True, moves the origin to the right of the image

inline virtual PreProcessingEvent get_preprocessing_callback() final override

Returns the function to be called on every event.

Returns

Function to be called on every event

void use_roi(int x, int y, int w, int h)

Removes events outside of a region of interest (ROI).

Parameters
  • x – X coordinate of ROI top left corner

  • y – Y coordinate of ROI top left corner

  • w – ROI’s width

  • h – ROI’s height

void process(const EventType *begin, const EventType *end, std::vector<EventType> &tmp_buffer)

Processes input events.

Parameters
  • begin – First input event

  • end – Last input event

  • tmp_buffer – Vector to store transformed events

inline int get_width_after_preproc() const

Gets width of output events.

Returns

Width of output events

inline int get_height_after_preproc() const

Gets height of output events.

Returns

Height of output events
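
Example combining the functions above (illustrative sketch; the sensor size, ROI and event type are assumptions):

    // Transpose the events, then keep only a 200x100 region of interest.
    Metavision::GeometricPreprocessing<Metavision::EventCD> geom(640, 480);
    geom.use_transpose_flipxy(true /*transpose*/, false /*flip_x*/, false /*flip_y*/);
    geom.use_roi(50, 60, 200, 100); // x, y, width, height

    std::vector<Metavision::EventCD> input;       // filled elsewhere with raw events
    std::vector<Metavision::EventCD> transformed; // receives the transformed events
    geom.process(input.data(), input.data() + input.size(), transformed);

    int output_width  = geom.get_width_after_preproc();
    int output_height = geom.get_height_after_preproc();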

template<typename EventType, typename NoiseFilterType>
class Metavision::NoiseFilterPreprocessing : public Metavision::PreprocessingBase<EventType>

Class to pre-process events with a noise filter.

Public Functions

inline NoiseFilterPreprocessing(int width, int height, timestamp noise_threshold)

Builds a preprocessing object for noise filter.

Parameters
  • width – Sensor’s width

  • height – Sensor’s height

  • noise_threshold – Threshold to configure the noise filter

inline void process(const EventType *begin, const EventType *end, std::vector<EventType> &tmp_buffer)

Applies the noise filter to every event.

Parameters
  • begin – First event

  • end – Last event

  • tmp_buffer – Output vector of events

inline virtual PreProcessingEvent get_preprocessing_callback() final override

Returns the function to apply the noise filter.

Returns

Function to be called on every event

template<typename Event>
class Metavision::Slicer

A slicer informs connected components of the slice end.

The slicer receives as input:

  • vector of events

  • timestamp

When a vector of events is received, the slicer:

  1. Calls the preprocess function on all events

  2. Looks for end_slice_it (the last event of the current slice inside the vector)

  3. If end_slice_it is not the vector end:

    1. Forwards the events in this slice to the event callbacks

    2. Calls the end-of-slice callbacks to signal the end of the slice

    3. Returns to step 2

  4. Forwards the remaining events to the event callbacks

When a timestamp is received, the end-of-slice callbacks are called as many times as required so that the current slice extends beyond this timestamp.

Public Types

using PreProcessingEvent = std::function<void(const Event*, const Event*, std::vector<Event>&)>

Function type preprocessing events before calling the callbacks.

using EventCallback = std::function<void(const Event*, const Event*)>

Function type handling events.

using EndSliceCallback = std::function<void(timestamp)>

Function to indicate the end of the current slice.

Public Functions

inline Slicer(timestamp batch_time, PreProcessingEvent preprocess = nullptr)

Builds a slicer.

Parameters
  • batch_time – Duration of one slice

  • preprocess – Function to transform the events before calling event callbacks

inline void add_callbacks(EventCallback new_event_callback, EndSliceCallback new_end_slice_callback)

Adds the functions to be called on event reception and on batch end.

The callbacks are all executed in the same thread; hence each callback’s execution time should remain short to keep the processing real-time

Parameters
  • new_event_callback – Function called when events are available

  • new_end_slice_callback – Function called at each end of batch
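
Example (illustrative sketch; the 10 ms batch duration, event type and callback bodies are assumptions):

    // Slicer producing 10 ms slices and feeding one event callback and one
    // end-of-slice callback.
    Metavision::Slicer<Metavision::EventCD> slicer(10000 /* batch_time in us */);
    slicer.add_callbacks(
        [](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
            // Consume the events of the current slice.
        },
        [](Metavision::timestamp ts) {
            // The slice ending at ts is complete.
        });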

inline EventCallback get_event_callback()

Returns function to be called on received events to ease lambda function creation.

The function returned will, on each call, loop until the end of the events:

  • call event callbacks on the events until the end of the batch

  • call end batch callbacks if the batch end is reached

Returns

Function of type EventCallback

inline EndSliceCallback get_timestamp_callback()

Returns the function to be called periodically to update the output.

Note

All events up to this timestamp should already have been received

Returns

Function of type EndSliceCallback

inline void event_callback(Event const *begin, Event const *end)

Loops over all events to find slice ends and:

  • calls the event callbacks on the events up to the end of the slices

  • calls the end-of-slice callbacks for each ended slice

Parameters
  • begin – Pointer to the first event

  • end – Pointer to the last event

inline void timestamp_callback(timestamp time)

Calls all end-of-batch callbacks for every ended batch.

Parameters

time – Current time reached to generate the required slices

inline void set_start_ts(timestamp time)

Sets the timestamp of the first batch.

Parameters

time – Time at which the first slice begins
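
To summarize how the components of this module fit together, here is an end-to-end wiring sketch built only from the functions documented above (illustrative: the event/box/tracklet types, file paths and parameter values are assumptions, the callback types returned by the different components are assumed to be mutually compatible, and error handling is omitted):

    // RAW file -> Slicer -> ObjectDetectorT -> DataAssociation -> display.
    using Event = Metavision::EventCD; // assumed event type

    Metavision::EventProviderRaw provider("recording.raw"); // placeholder path
    Metavision::Slicer<Event> slicer(10000 /* us */);
    Metavision::ObjectDetectorT<Event> detector(
        "/path/to/model_dir", "cpu",
        provider.get_width(), provider.get_height(), 320, 240);
    Metavision::DataAssociation<Event, Metavision::EventBbox,
                                Metavision::EventTrackedBox> assoc; // defaults assume a 640x480 sensor
    Metavision::DetectionAndTrackingDisplay<Event> display(
        provider.get_width(), provider.get_height(), 10000, 25);

    // Events flow from the provider through the slicer to the detector,
    // the data association and the display.
    provider.set_callback(slicer.get_event_callback());
    slicer.add_callbacks(detector.get_event_callback(), detector.get_timestamp_callback());
    slicer.add_callbacks(assoc.get_event_callback(), assoc.get_timestamp_callback());
    slicer.add_callbacks(display.get_event_callback(), display.get_timestamp_callback());

    // Boxes produced by the detector feed the data association and the display.
    detector.add_box_consumer_callback(assoc.get_box_callback());
    detector.add_box_consumer_callback(display.get_box_callback());

    // Tracklets produced by the data association feed the display.
    assoc.add_tracklet_consumer_cb(display.get_track_callback());

    provider.start();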