SDK ML Metrics API

Utility to compute COCO metrics with a time tolerance.

class metavision_ml.metrics.coco_eval.CocoEvaluator(classes=('car', 'pedestrian'), height=240, width=304, time_tol=40000, eval_rate=-1, verbose=False)

Wrapper class for COCO metrics.

It is equivalent to the evaluate_detection function, but uses less memory when run over a large number of files: lists of boxes can be fed in several calls to partial_eval before being accumulated.

Parameters
  • classes (tuple) – all class names

  • height – frame height, used to determine whether a box is considered big, medium or small

  • width – frame width, used to determine whether a box is considered big, medium or small

  • time_tol (float) – half-range time tolerance, in us, used to match all_ts (box times ‘t’ are matched within +/- time_tol)

  • eval_rate (int) – if eval_rate > 0, evaluation is done every eval_rate us, with windows centered on [0, eval_rate, 2 * eval_rate, …]; otherwise, windows are centered on every timestamp with at least one box (ground truth or detection)

  • verbose (boolean) – if True, print the COCO API output.

Returns

all KPI results

Return type

coco_kpi (dict)

Examples

>>> coco_wrapper = CocoEvaluator()
>>> coco_wrapper.partial_eval([gt_bbox1], [dt_bbox1])
>>> coco_wrapper.partial_eval(gt_box_list, dt_box_list)
>>> result_dict = coco_wrapper.accumulate()
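
A more complete sketch of the workflow above, assuming boxes are structured NumPy arrays with the fields used by Metavision ML detection (the exact EventBbox dtype may differ; the layout below is an assumption, and the class_id values are illustrative):

import numpy as np
from metavision_ml.metrics.coco_eval import CocoEvaluator

# Assumed box layout; check the EventBbox dtype shipped with the SDK.
BBOX_DTYPE = np.dtype([('t', '<i8'), ('x', '<f4'), ('y', '<f4'),
                       ('w', '<f4'), ('h', '<f4'), ('class_id', '<u4'),
                       ('class_confidence', '<f4')])

# One ground-truth box and one overlapping detection at t = 10000 us.
gt_bbox1 = np.array([(10000, 50., 60., 30., 40., 0, 1.0)], dtype=BBOX_DTYPE)
dt_bbox1 = np.array([(10000, 52., 61., 29., 38., 0, 0.9)], dtype=BBOX_DTYPE)

coco_wrapper = CocoEvaluator(classes=('car', 'pedestrian'), height=240, width=304)
coco_wrapper.partial_eval([gt_bbox1], [dt_bbox1])
result_dict = coco_wrapper.accumulate()  # single dict of COCO KPIs
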
accumulate()

Accumulates all previously compared detections and ground truth into a single set of COCO KPIs.

Returns

dict mapping KPI names to float KPI values.

Return type

eval_dict (dict)

partial_eval(gt_boxes_list, dt_boxes_list)

Computes partial KPI results given a list of ground-truth bounding box vectors and the matching list of prediction vectors.

Note that timestamps ‘t’ must be increasing inside a given vector.
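
A sketch of the intended low-memory usage: one partial_eval call per recording, with accumulation at the end. load_boxes and recording_paths below are hypothetical, user-provided stand-ins, not part of the SDK:

from metavision_ml.metrics.coco_eval import CocoEvaluator

def load_boxes(path):
    # Hypothetical loader (not part of the SDK): returns (gt_boxes, dt_boxes)
    # for one recording, each a structured box array sorted by 't'.
    ...

evaluator = CocoEvaluator()
for path in recording_paths:  # assumed list of recording paths
    gt_boxes, dt_boxes = load_boxes(path)
    # 't' must be increasing inside each vector
    evaluator.partial_eval([gt_boxes], [dt_boxes])
kpis = evaluator.accumulate()
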

metavision_ml.metrics.coco_eval.evaluate_detection(gt_boxes_list, dt_boxes_list, classes=('car', 'pedestrian'), height=240, width=304, time_tol=40000, eval_rate=-1)

Evaluates detection KPIs on ground-truth and detection arrays. Be advised that timestamps should be strictly increasing. If eval_rate = -1, the KPIs are computed only at the timestamps where there is ground truth (if necessary, fill the ground truth with a background box in frames without ground truth).

Parameters
  • gt_boxes_list – merged list of all ground-truth boxes

  • dt_boxes_list – merged list of all detection boxes

  • classes – all class names

  • height – frame height

  • width – frame width

  • time_tol (float) – half-range time tolerance, in us, used to match all_ts (box times ‘t’ are matched within +/- time_tol)

  • eval_rate (int) – if eval_rate > 0, evaluation is done every eval_rate us, with windows centered on [0, eval_rate, 2 * eval_rate, …]; otherwise, windows are centered on every timestamp with at least one box (ground truth or detection)

Returns

all KPI results

Return type

coco_kpi (dict)
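
A minimal one-shot sketch, reusing the gt_bbox1 and dt_bbox1 structured arrays built in the CocoEvaluator example above (box field layout assumed there):

from metavision_ml.metrics.coco_eval import evaluate_detection

coco_kpi = evaluate_detection([gt_bbox1], [dt_bbox1],
                              classes=('car', 'pedestrian'),
                              height=240, width=304,
                              time_tol=40000, eval_rate=-1)
print(coco_kpi)
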

metavision_ml.metrics.coco_eval.match_times(all_ts, boxes_no_tol, boxes_tol, time_tol)

Matches ground-truth boxes and detection boxes to all evaluation timestamps using a specified tolerance, and returns lists of box vectors.

Parameters
  • all_ts – all timestamps of evaluation

  • boxes_no_tol (np.ndarray) – bounding boxes with ‘t’ time field (those ‘t’ must be a subset of all_ts)

  • boxes_tol (np.ndarray) – bounding boxes with ‘t’ time field (those ‘t’ are matched to all_ts using a 2 * time_tol interval)

  • time_tol (float) – half-range time tolerance, in us, used to match all_ts (box times ‘t’ are matched within +/- time_tol)

Returns

  • windowed_boxes_no_tol (list) – list of np.ndarray computed from boxes_no_tol

  • windowed_boxes_tol (list) – list of np.ndarray computed from boxes_tol
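
A sketch of the matching, reusing the arrays from the CocoEvaluator example above: the ground truth plays the role of boxes_no_tol and the detections of boxes_tol:

import numpy as np
from metavision_ml.metrics.coco_eval import match_times

# Evaluation timestamps taken from the ground truth ('t' of boxes_no_tol
# must be a subset of all_ts).
all_ts = np.unique(gt_bbox1['t'])
windowed_gt, windowed_dt = match_times(all_ts, gt_bbox1, dt_bbox1,
                                       time_tol=40000)
# windowed_gt[i] and windowed_dt[i] hold the boxes matched to all_ts[i]
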

metavision_ml.metrics.coco_eval.summarize(coco_eval, verbose=False)

Computes and displays summary metrics for evaluation results. Note that this function can only be applied with the default parameter setting.