SDK ML Python bindings API

class metavision_sdk_ml.CDProcessing

Processes CD events to compute the neural network input frame (3-dimensional tensor)

This is the base class. It handles the rescaling of the events if necessary and provides accessors to get the shape of the output tensor. Derived classes implement the computation, which is triggered by calling operator() on this base class.

static create_CDProcessingDiff(delta_t: int, network_input_width: int, network_input_height: int, max_incr_per_pixel: float = 5, clip_value_after_normalization: float = 1.0, event_input_width: int = 0, event_input_height: int = 0) → metavision_sdk_ml.CDProcessing

Creates a CDProcessing diff

static create_CDProcessingEventCube(delta_t: int, network_input_width: int, network_input_height: int, num_utbins: int, split_polarity: bool, max_incr_per_pixel: float = 63.75, clip_value_after_normalization: float = 1.0, event_input_width: int = 0, event_input_height: int = 0) → metavision_sdk_ml.CDProcessing

Creates a CDProcessing event_cube

static create_CDProcessingHisto(delta_t: int, network_input_width: int, network_input_height: int, max_incr_per_pixel: float = 5, clip_value_after_normalization: float = 1.0, event_input_width: int = 0, event_input_height: int = 0, use_CHW: bool = True) → metavision_sdk_ml.CDProcessing

Creates a CDProcessing histo
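
As an illustration, here is a minimal sketch of building a pre-processor through one of these factory methods. The 10 ms accumulation period, the 640x360 network input size and the 1280x720 sensor size are example values, not values prescribed by the SDK.

    import metavision_sdk_ml

    # Example values only: 10 ms time slices, 640x360 network input, 1280x720 sensor.
    # Passing the sensor size (event_input_width/height) is assumed to enable the
    # event rescaling mentioned in the class description.
    cd_proc = metavision_sdk_ml.CDProcessing.create_CDProcessingHisto(
        delta_t=10000,
        network_input_width=640,
        network_input_height=360,
        event_input_width=1280,
        event_input_height=720)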

get_frame_channels(self: metavision_sdk_ml.CDProcessing) → int

Gets the number of channels in the network input frame.

return

Number of channels in the network input frame

get_frame_height(self: metavision_sdk_ml.CDProcessing) → int

Gets the network’s input frame’s height.

return

Network input frame’s height

get_frame_shape(self: metavision_sdk_ml.CDProcessing) → List[int]

Gets the shape of the frame (3 dimensions, either CHW or HWC)

return

A list of the three dimension sizes

get_frame_size(self: metavision_sdk_ml.CDProcessing) → int

Gets the frame size.

return

The frame size in pixels (height * width * channels)

get_frame_width(self: metavision_sdk_ml.CDProcessing) → int

Gets the network’s input frame’s width.

return

Network input frame’s width

init_output_tensor(self: metavision_sdk_ml.CDProcessing) → numpy.ndarray[numpy.float32]

is_CHW(self: metavision_sdk_ml.CDProcessing) → bool

Checks the tensor’s dimension order.

return

true if the dimension order is (channel, height, width)
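
Continuing the sketch above, the accessors can be used to inspect the tensor geometry and to allocate the output tensor once up front. The assumption that init_output_tensor() returns a float32 tensor matching get_frame_shape() follows from its name and return type but is not stated explicitly in this section.

    # Inspect the geometry of the network input tensor produced by cd_proc.
    shape = cd_proc.get_frame_shape()        # three sizes, CHW or HWC order
    size = cd_proc.get_frame_size()          # height * width * channels
    assert size == shape[0] * shape[1] * shape[2]

    # Allocate the float32 output tensor once; it can be reused for every time slice.
    frame_tensor = cd_proc.init_output_tensor()
    if cd_proc.is_CHW():
        print("layout: (channels, height, width)", shape)
    else:
        print("layout: (height, width, channels)", shape)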

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_ml.CDProcessing, cur_frame_start_ts: int, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], frame_tensor_np: numpy.ndarray) -> None

Takes a chunk of events (numpy array of EventCD) and updates the frame_tensor (numpy array of float)

  2. process_events(self: metavision_sdk_ml.CDProcessing, cur_frame_start_ts: int, events_buf: metavision_sdk_base.EventCDBuffer, frame_tensor_np: numpy.ndarray) -> None

Takes a chunk of events (EventCDBuffer) and updates the frame_tensor (numpy array of float)
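
A sketch of one iteration of the processing loop, reusing cd_proc and frame_tensor from the sketches above and assuming events is a numpy structured array of EventCD covering the time slice starting at frame_start_ts (how the events are obtained is outside the scope of this section):

    # Accumulate one time slice of events into the network input tensor.
    # Whether the tensor must be cleared between slices is not specified in this
    # section; it is reset here as a conservative assumption.
    frame_tensor[...] = 0
    cd_proc.process_events(frame_start_ts, events, frame_tensor)
    # frame_tensor now holds the 3-dimensional network input for this slice.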

class metavision_sdk_ml.DataAssociation

Module that matches detections and builds tracklets.

static get_empty_output_buffer() → metavision_sdk_ml.EventTrackedBoxBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events()

process_events(self: metavision_sdk_ml.DataAssociation, ts: int, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], boxes_np: numpy.ndarray[Metavision::EventBbox], output_tracks_buf: metavision_sdk_ml.EventTrackedBoxBuffer) → None

Computes the data association and outputs updated set of tracked boxes

ts

(int) current timestamp to process

events_np

input chunk of events (numpy structured array of EventCD)

boxes_np

input detections (numpy structured array of EventBbox)

output_tracks_buf

output buffer of tracked boxes. It can be converted to a numpy structured array of EventTrackedBox using .numpy()
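
A sketch of the detection-to-tracking step, assuming data_assoc is an already constructed DataAssociation instance (its constructor is not documented in this section), events is a numpy structured array of EventCD, and detections is a numpy structured array of EventBbox produced by the detection stage:

    # Reusable output buffer for the tracked boxes.
    tracks_buf = metavision_sdk_ml.DataAssociation.get_empty_output_buffer()

    # Associate the current detections with existing tracklets up to timestamp ts.
    data_assoc.process_events(ts, events, detections, tracks_buf)

    # Convert the result to a numpy structured array of EventTrackedBox.
    tracked_boxes = tracks_buf.numpy()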

class metavision_sdk_ml.EventBboxBuffer

numpy(self: metavision_sdk_ml.EventBboxBuffer, copy: bool = False) → numpy.ndarray[Metavision::EventBbox]

copy

if True, allocates new memory and returns a copy of the events. If False, uses the same memory

class metavision_sdk_ml.EventTrackedBoxBuffer

numpy(self: metavision_sdk_ml.EventTrackedBoxBuffer, copy: bool = False) → numpy.ndarray[Metavision::EventTrackedBox]

copy

if True, allocates new memory and returns a copy of the events. If False, uses the same memory
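
The copy flag controls ownership of the returned array. The sketch below, reusing the tracks_buf buffer from the previous example, illustrates the two modes; the advice to take a copy when the data must outlive the buffer is an inference from the shared-memory behaviour described above.

    view = tracks_buf.numpy()               # copy=False: shares the buffer's memory
    snapshot = tracks_buf.numpy(copy=True)  # new allocation, independent of the buffer
    # Use copy=True when the data must remain valid after the buffer is reused.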

class metavision_sdk_ml.NonMaximumSuppressionWithRescaling

Rescales events from the network input format to the sensor's size and suppresses non-maximum overlapping boxes.

static get_empty_output_buffer() → metavision_sdk_ml.EventBboxBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events()

ignore_class_id(self: metavision_sdk_ml.NonMaximumSuppressionWithRescaling, class_id: int) → None

Configures the computation to ignore a given class identifier.

class_id

Identifier of the class to be ignored

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_ml.NonMaximumSuppressionWithRescaling, input_np: numpy.ndarray[Metavision::EventBbox], output_buf: metavision_sdk_ml.EventBboxBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_ml.NonMaximumSuppressionWithRescaling, input_buf: metavision_sdk_ml.EventBboxBuffer, output_buf: metavision_sdk_ml.EventBboxBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_iou_threshold(self: metavision_sdk_ml.NonMaximumSuppressionWithRescaling, threshold: float) → None

Sets Intersection Over Union (IOU) threshold.

threshold

Threshold on the IOU metric used to consider that two boxes are matching

note

Intersection Over Union (IOU) is the ratio of the intersection area over the union area
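
A sketch of a typical post-processing call, assuming nms is an already constructed NonMaximumSuppressionWithRescaling instance (its constructor is not documented in this section) and raw_boxes is a numpy structured array of EventBbox produced by the detector in network input coordinates; the IOU threshold of 0.5 and the ignored class identifier 0 are example values only.

    # Consider two boxes as matching when their IOU exceeds 0.5 (example threshold),
    # so overlapping duplicates are suppressed.
    nms.set_iou_threshold(0.5)

    # Drop detections with class identifier 0 (example value, not a documented constant).
    nms.ignore_class_id(0)

    # Reusable output buffer, then run the suppression and rescaling pass.
    boxes_buf = metavision_sdk_ml.NonMaximumSuppressionWithRescaling.get_empty_output_buffer()
    nms.process_events(raw_boxes, boxes_buf)

    final_boxes = boxes_buf.numpy()         # EventBbox array rescaled to the sensor size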