SDK ML Algorithms

class Metavision::CDProcessing

Processes CD events to compute the neural network input frame (a 3-dimensional tensor)

This is the base class. It handles the rescaling of the events if necessary. It also provides accessors to get the shape of the output tensor. Derived classes implement the computation. Calling operator() on this base class triggers the computation.

Public Functions

CDProcessing(timestamp delta_t, int network_input_width, int network_input_height, int event_input_width = 0, int event_input_height = 0, bool use_CHW = true)

Constructs a CDProcessing object to ease the neural network input frame computation.

Parameters
  • delta_t: Delta time used to accumulate events inside the frame

  • network_input_width: Neural network input frame’s width

  • network_input_height: Neural network input frame’s height

  • event_input_width: Sensor’s width

  • event_input_height: Sensor’s height

  • use_CHW: Boolean to define frame dimension order, True if the fields’ frame order is (Channel, Height, Width)

size_t get_frame_size() const

Gets the frame size.

Return

The frame size in pixels (height * width * channels)

size_t get_frame_width() const

Gets the network’s input frame’s width.

Return

Network input frame’s width

size_t get_frame_height() const

Gets the network’s input frame’s height.

Return

Network input frame’s height

size_t get_frame_channels() const

Gets the number of channels in the network input frame.

Return

Number of channels in the network input frame

bool is_CHW() const

Checks the tensor’s dimension order.

Return

true if the dimension order is (channel, height, width)

std::vector<size_t> get_frame_shape() const

Gets the shape of the frame (3 dimensions, either CHW or HWC).

Return

a vector of sizes

template<typename InputIt>
void operator()(const timestamp cur_frame_start_ts, InputIt begin, InputIt end, float *frame, int frame_size) const

Updates the frame depending on the input events.

Template Parameters
  • InputIt: type of input iterator (either a container iterator or raw pointer to EventCD)

Parameters
  • cur_frame_start_ts: starting timestamp of the current frame

  • begin: Begin iterator

  • end: End iterator

  • frame: Pointer to the frame (input/output)

  • frame_size: Input frame size

class Metavision::NonMaximumSuppressionWithRescaling

Rescales events from network input format to the sensor’s size and suppresses Non-Maximum overlapping boxes.

Public Functions

NonMaximumSuppressionWithRescaling()

Builds a non-configured NonMaximumSuppressionWithRescaling object.

NonMaximumSuppressionWithRescaling(std::size_t num_classes, int events_input_width, int events_input_height, int network_input_width, int network_input_height, float iou_threshold)

Constructs object that rescales detected boxes and suppresses Non-Maximum overlapping boxes.

Parameters
  • num_classes: Number of possible classes returned by the neural network

  • events_input_width: Sensor’s width

  • events_input_height: Sensor’s height

  • network_input_width: Neural network input frame’s width

  • network_input_height: Neural network input frame’s height

  • iou_threshold: Threshold on the IOU metric above which two boxes are considered to match

template<typename InputIt, typename OutputIt>
void process_events(const InputIt it_begin, const InputIt it_end, OutputIt inserter)

Rescales and filters boxes.

Template Parameters
  • InputIt: Read-Only input iterator type

  • OutputIt: Read-Write output iterator type

Parameters
  • it_begin: Iterator to the first box

  • it_end: Iterator to the past-the-end box

  • inserter: Output iterator or back inserter

template<typename InputIt, typename OutputIt>
void process(const InputIt begin, const InputIt end, OutputIt bbox_first)

Note

process(…) is deprecated since version 2.2.0 and will be removed in later releases. Please use process_events(…) instead.

void set_iou_threshold(float threshold)

Sets Intersection Over Union (IOU) threshold.

Note

Intersection Over Union (IOU) is the ratio of the intersection area to the union area

Parameters
  • threshold: Threshold on the IOU metric above which two boxes are considered to match

void ignore_class_id(std::size_t class_id)

Configures the computation to ignore a given class identifier.

Parameters
  • class_id: Identifier of the class to be ignored

Public Static Functions

void compute_nms_per_class(std::list<EventBbox> &bbox_list, float iou_threshold)

Suppresses non-maximum overlapping boxes over a list of EventBbox-es.

Note

The list is modified in-place. The result is sorted by confidence.

Parameters
  • [inout] bbox_list: List of EventBbox on which to apply the Non-maximum suppression

  • iou_threshold: Threshold above which two boxes are considered to overlap

class Metavision::ObjectDetectorTorchJit

Public Functions

ObjectDetectorTorchJit(const std::string &directory, int frame_width, int frame_height, int network_input_width = 0, int network_input_height = 0, bool use_cuda = false, int ignore_first_n_prediction_steps = 0, int gpu_id = 0)

Constructor for ObjectDetectorTorchJit.

Note

When network_input_width and network_input_height are different from frame_width and frame_height, the corresponding rescaling is performed on the output bounding boxes, such that the output detections are still returned in the original coordinate frame of the events

Parameters
  • directory: Name of the directory containing at least two files:

    • model.ptjit : PyTorch model exported using torch.jit

    • info_ssd_jit.json : JSON file containing information about the neural network (type of input features, dimensions, accumulation time, list of classes, default thresholds, etc.)

  • frame_width: Sensor’s width

  • frame_height: Sensor’s height

  • network_input_width: Neural network’s width which could be smaller than frame_width. In this case the network will work on a downscaled size

  • network_input_height: Neural network’s height which could be smaller than frame_height. In this case the network will work on a downscaled size

  • use_cuda: Boolean to indicate whether to use the GPU

  • ignore_first_n_prediction_steps: Number of discarded neural network predictions at the beginning of a sequence. Depending on initial conditions, recurrent models sometimes have a transitory regime in which they initially produce unreliable detections before they enter normal working regime.

  • gpu_id: GPU identification number allowing the selection of the GPU if several are available

void use_cpu()

Performs all computations on the CPU.

bool use_gpu_if_available(int gpu_id = 0)

Performs the computations on the GPU if there is one.

Return

Boolean to indicate if the provided gpu_id is available

Parameters
  • gpu_id: ID of the gpu on which the computations must be performed

template<typename OutputIt>
void process(Frame_t &input, OutputIt bbox_first, timestamp ts)

Computes the detection given the provided input tensor.

Parameters
  • input: Chunk of memory which corresponds to the input tensor

  • bbox_first: Output iterator to add the detection boxes

  • ts: Timestamp of current timestep. Output boxes will have this timestamp

int get_network_height() const

Returns the input frame height.

Return

Network input height in pixels

int get_network_width() const

Returns the input frame width.

Return

Network input width in pixels

int get_network_input_channels() const

Returns the number of channels in the input frame.

Return

Network input channel number

int get_network_input_size() const

Returns the network input size.

Return

Size of the input frame

Metavision::timestamp get_accumulation_time() const

Returns the time during which the events are accumulated to compute the NN input tensor.

Return

Delta time used to generate the input frame

CDProcessing &get_cd_processor()

Returns the object responsible for computing the content of the input tensor.

Return

CDProcessing to ease the input frame generation

const std::vector<std::string> &get_labels() const

Returns a vector of labels for the classes of the neural network.

Return

Vector of strings containing labels

void set_ts(Metavision::timestamp ts)

Initializes the internal timestamp of the object detector.

This is needed in order to use the start_ts parameter in the pipeline to start at a ts > 0.

Parameters
  • ts: time at which the first slice of time starts

void set_detection_threshold(float threshold)

Uses this detection threshold instead of the default value read from the JSON file.

This is the lower bound on the confidence score for a detection box to be accepted. It takes values in the range ]0;1[. A low value yields more detections; a high value yields fewer detections.

Parameters
  • threshold: Lower bound on the detector confidence score

void set_iou_threshold(float threshold)

Uses this IOU threshold for NMS instead of the default value read from the JSON file.

Non-Maximum Suppression discards detection boxes which are too similar to each other, keeping only the best one of such a group. This similarity criterion is based on the measure of Intersection-Over-Union between the considered boxes. This threshold is the upper bound on the IOU for two boxes to be considered distinct (and therefore not filtered out by the Non-Maximum Suppression). It takes values in the range ]0;1[. A low value yields fewer overlapping boxes; a high value allows more overlapping boxes.

Parameters
  • threshold: Upper bound on the IOU for two boxes to be considered distinct

void reset()

Resets the memory cells of the neural network.

Neural networks used as object detectors are usually RNNs (typically LSTMs). Use this function to reset the memory of the neural network when feeding it new inputs unrelated to the previous ones: call reset() before applying the same object detector to a new sequence.