SDK ML Detection API

This module contains the classes to encode/decode ground truth into ‘anchor boxes’.

All targets are encoded in parallel for better throughput.

The module can resize its grid internally (so batches can change sizes).

class metavision_ml.detection.anchors.AnchorLayer(box_size=32, anchor_list=[(0.3333333333333333, 1), (0.5, 1), (1, 1), (1, 1.5), (2, 1), (3, 1)])

For one level of the pyramid: manages one anchor grid in (x, y, w, h) format.

The anchors grid is (height, width, num_anchors_per_position, 4)

The grid is cached, but is regenerated if the feature map size changes.

Parameters
  • box_size (int) – base size for anchor box

  • anchor_list (List) – a list of (ratio, scale) tuples configuring the anchors

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(shape: torch.Tensor, stride: int)

Generates anchors

Parameters
  • shape (torch.Tensor) – shape of feature map

  • stride (int) – stride compared to network input

static generate_anchors(box_size, ratio_scale_list)

Generates the anchor sizes

Parameters
  • box_size (int) – base size for anchor boxes

  • ratio_scale_list (List) – a list of (ratio, scale) tuples configuring the anchors

Returns

Nx2 (width, height)

Return type

anchors (torch.Tensor)
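
For illustration, here is one common way (ratio, scale) pairs can be turned into (width, height) anchor sizes. This is a hedged sketch of a typical parameterization (area preserved, skewed by the aspect ratio), not necessarily the exact formula used by the library:

import torch

def generate_anchors_sketch(box_size, ratio_scale_list):
    # Hypothetical re-implementation: keep the area (box_size * scale)^2 constant
    # and skew width/height by the aspect ratio (here ratio = height / width).
    sizes = []
    for ratio, scale in ratio_scale_list:
        area = (box_size * scale) ** 2
        width = (area / ratio) ** 0.5
        height = width * ratio
        sizes.append((width, height))
    return torch.tensor(sizes)  # Nx2 (width, height)

print(generate_anchors_sketch(32, [(0.5, 1), (1, 1), (2, 1)]))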

make_grid(height: int, width: int, stride: int)

Makes a coordinate grid for anchor boxes

Parameters
  • height (int) – feature-map height

  • width (int) – feature-map width

  • stride (int) – stride compared to network input

Returns

(H,W,NanchorPerPixel,2) size (width & height per anchor per pixel)

Return type

grid (torch.Tensor)

class metavision_ml.detection.anchors.Anchors(num_levels=4, base_size=32, anchor_list='PSEE_ANCHORS', fg_iou_threshold=0.5, bg_iou_threshold=0.3, allow_low_quality_matches=True, variances=[0.1, 0.2], max_decode=False)

Pyramid of anchoring grids. Handles the encoding/decoding algorithms. Encoding uses padding in order to parallelize IOU & assignment computation. Decoding uses torchvision’s “batched_nms” to parallelize across images and classes. The option “max_decode” means only the best score is decoded; otherwise decoding is done per class. A minimal instantiation sketch is given below, after the parameter list.

Parameters
  • num_levels – number of pyramid levels

  • base_size – minimum box size

  • sizes – box sizes per level

  • anchor_list – list of anchors sizes per pyramid level

  • fg/bg_iou_threshold – thresholds to accept/reject a matching anchor

  • allow_low_quality_matches – assign every ground truth box even if no anchor matches it well

  • variances – box variances following the SSD formula

Initializes internal Module state, shared by both nn.Module and ScriptModule.
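
A minimal instantiation sketch using only the documented default arguments (actual training code may configure these differently):

from metavision_ml.detection.anchors import Anchors

# Hedged sketch: build the anchor pyramid with the documented defaults.
box_coder = Anchors(num_levels=4, base_size=32,
                    fg_iou_threshold=0.5, bg_iou_threshold=0.3,
                    variances=[0.1, 0.2])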

batched_decode(boxes, scores, score_thresh, nms_thresh, topk=100)

Decodes all prediction vectors across batch, classes and time.

Parameters
  • boxes (torch.Tensor) – (N,Nanchors,4)

  • scores (torch.Tensor) – (N,Nanchors,C) with C classes, excluding background.

  • score_thresh (float) – threshold on the confidence score

  • nms_thresh (float) – IOU threshold for NMS.

  • topk (int) – maximum number of anchors considered for NMS.

decode(features, x, loc_preds, scores, batch_size, score_thresh=0.5, nms_thresh=0.6, max_boxes_per_input=500)

Decodes prediction vectors into boxes

Parameters
  • features (list) – list of feature maps

  • x (torch.Tensor) – network’s input

  • loc_preds (torch.Tensor) – regression prediction vector (N,Nanchors,4)

  • scores (torch.Tensor) – score prediction vector (N,Nanchors,C) with C classes (background excluded)

  • score_thresh (float) – apply this threshold before nms

  • nms_thresh (float) – grouping threshold on IOU similarity between boxes

  • max_boxes_per_input (int) – maximum number of boxes per image. Too small might reduce recall, too high might entail extra computational cost in decoding, loss or evaluation.

Returns

list of list of decoded boxes

Return type

decoded boxes (List)

encode(features, x, targets)

Encodes input and features into target vectors expressed in anchor coordinate system.

Parameters
  • x (torch.Tensor) – input with original size

  • targets (List) – list of list of targets

Returns

encoded anchor regression targets; cls (torch.Tensor): encoded anchor classification targets

Return type

loc (torch.Tensor)

encode_anchors(anchors, anchors_xyxy, targets)

Encodes targets into target vectors expressed in anchor coordinate system.

Parameters
  • anchors (torch.Tensor) – anchors in cx,cy,w,h format

  • anchors_xyxy (torch.Tensor) – anchors in x1,y1,x2,y2 format

  • targets (List) – list of list of targets

Returns

encoded anchor regression targets; cls (torch.Tensor): encoded anchor classification targets

Return type

loc (torch.Tensor)

forward(xs: List[torch.Tensor], x: torch.Tensor)

Generates Anchors

Parameters
  • xs (List) – list of feature maps

  • x (torch.Tensor) – network’s input

Returns

(N,Nanchors,4) anchor boxes in (cx,cy,w,h) format

Return type

anchors (torch.Tensor)

has_changed(xs: List[torch.Tensor], x: torch.Tensor)

Detects whether the feature map sizes have changed.

Parameters
  • xs (List) – list of feature maps

  • x (torch.Tensor) – network’s input

set_low_quality_matches(ious, batch_best_target_per_prior_indices, batch_best_target_per_prior, sizes)

Makes sure that every GT is assigned to at least 1 anchor.

Parameters
  • ious (torch.Tensor) – (N,Nanchors,MaxGT) IOU cost matrix

  • batch_best_target_per_prior_indices (torch.Tensor) – (N,Nanchors)

  • sizes (int list) – number of valid (non padding) boxes for each bin

Torch Box API

metavision_ml.detection.box.assign_priors(gt_boxes, gt_labels, corner_form_priors, fg_iou_threshold, bg_iou_threshold, allow_low_quality_matches=True)

Assigns ground truth boxes as targets to priors (also called anchor boxes).

Parameters
  • gt_boxes (tensor) – ground truth boxes tensor of shape (num_targets, 4).

  • gt_labels (tensor) – int tensor of size (num_target) containing the class labels of targets.

  • corner_form_priors (tensor) – tensor of shape (num_priors, 4), contains the priors boxes in format (xmin, ymin, xmax, ymax).

  • fg_iou_threshold (float) – minimal iou with a prior box to be considered a positive match (to be assigned a ground truth box)

  • bg_iou_threshold (float) – below this iou threshold a prior box is considered to match the background (the prior box doesn’t match any ground truth box)

  • allow_low_quality_matches (boolean) – allow bad matches to be considered anyway.

Returns

of shape (num_priors, 4); coordinate values of the ground truth box assigned to each prior box, in the format (xmin, ymin, xmax, ymax).

labels (tensor): of shape (num_priors) containing class label of the ground truth box assigned to each prior.

Return type

boxes (tensor)
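
The matching follows the usual SSD-style assignment. Below is a hedged sketch of that logic written against a precomputed IOU matrix; the actual implementation (in particular how low-quality matches are forced) may differ:

import torch

def assign_priors_sketch(gt_boxes, gt_labels, ious, fg_iou_threshold=0.5, bg_iou_threshold=0.3):
    # ious: (num_priors, num_targets) IOU matrix between priors and ground truth boxes.
    best_target_iou, best_target_idx = ious.max(dim=1)     # best ground truth for each prior
    labels = gt_labels[best_target_idx].clone()
    labels[best_target_iou < bg_iou_threshold] = 0          # background
    in_between = (best_target_iou >= bg_iou_threshold) & (best_target_iou < fg_iou_threshold)
    labels[in_between] = -1                                  # ignored during training
    boxes = gt_boxes[best_target_idx]
    return boxes, labels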

metavision_ml.detection.box.batch_box_iou(box1, box2)

Computes the intersection over union of two sets of boxes.

The box order must be (xmin, ymin, xmax, ymax).

Parameters
  • box1 – (tensor) bounding boxes, sized [N,4].

  • box2 – (tensor) bounding boxes, sized [B,M,4].

Returns

(tensor) iou, sized [N,M].

Reference:

https://github.com/chainer/chainercv/blob/master/chainercv/utils/bbox/bbox_iou.py

metavision_ml.detection.box.bbox_to_deltas(boxes, default_boxes, variances=[0.1, 0.2], max_width=10000)

Converts boxes expressed in absolute coordinates to anchor-relative coordinates.

Parameters
  • boxes – xyxy boxes Nx4 tensor

  • default_boxes – cxcywh anchor boxes Nx4 tensor

  • variances – variances according to SSD paper.

  • max_width – additional clamping to avoid infinite gt boxes.

Returns

boxes expressed relatively to the center and size of anchor boxes.

Return type

deltas
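
Below is a hedged sketch of the standard SSD parameterization this helper implements; clamping and the max_width handling are omitted:

import torch

def bbox_to_deltas_sketch(boxes_xyxy, anchors_cxcywh, variances=(0.1, 0.2)):
    # Ground truth boxes are re-expressed relative to their matched anchors:
    # center offsets normalized by anchor size, sizes as a log ratio.
    wh = boxes_xyxy[:, 2:] - boxes_xyxy[:, :2]
    cxcy = boxes_xyxy[:, :2] + 0.5 * wh
    d_cxcy, d_wh = anchors_cxcywh[:, :2], anchors_cxcywh[:, 2:]
    delta_cxcy = (cxcy - d_cxcy) / (d_wh * variances[0])
    delta_wh = torch.log(wh / d_wh) / variances[1]
    return torch.cat([delta_cxcy, delta_wh], dim=1)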

metavision_ml.detection.box.box_clamp(boxes, xmin, ymin, xmax, ymax)

Clamps boxes.

Parameters
  • boxes (tensor) – bounding boxes of (xmin,ymin,xmax,ymax), sized [N,4].

  • xmin (number) – min value of x.

  • ymin (number) – min value of y.

  • xmax (number) – max value of x.

  • ymax (number) – max value of y.

Returns

(tensor) clamped boxes.

metavision_ml.detection.box.box_iou(box1, box2)

Computes the intersection over union of two sets of boxes.

The box order must be (xmin, ymin, xmax, ymax).

Parameters
  • box1 – (tensor) bounding boxes, sized [N,4].

  • box2 – (tensor) bounding boxes, sized [M,4].

Returns

(tensor) iou, sized [N,M].

Reference:

https://github.com/chainer/chainercv/blob/master/chainercv/utils/bbox/bbox_iou.py
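
For reference, a self-contained pairwise IOU in the same (xmin, ymin, xmax, ymax) convention. This is a hedged re-implementation for illustration, not the library code:

import torch

def box_iou_sketch(box1, box2):
    lt = torch.max(box1[:, None, :2], box2[None, :, :2])  # [N,M,2] intersection top-left
    rb = torch.min(box1[:, None, 2:], box2[None, :, 2:])  # [N,M,2] intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]                        # [N,M] intersection areas
    area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
    area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
    return inter / (area1[:, None] + area2[None, :] - inter)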

metavision_ml.detection.box.box_nms(bboxes, scores, threshold=0.5)

Non maximum suppression.

Parameters
  • bboxes – (tensor) bounding boxes, sized [N,4].

  • scores – (tensor) confidence scores, sized [N,].

  • threshold – (float) overlap threshold.

Returns

(tensor) selected indices.

Return type

keep

Reference:

https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/nms/py_cpu_nms.py
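
As noted for the Anchors class, decoding relies on torchvision’s NMS for the batched case. For comparison with box_nms, a minimal usage example of torchvision’s single-class NMS (illustrative values):

import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],    # overlaps heavily with the first box
                      [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])
keep = nms(boxes, scores, iou_threshold=0.5)
print(keep)  # indices of the kept boxes, highest scores first, e.g. tensor([0, 2])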

metavision_ml.detection.box.box_select(boxes, xmin, ymin, xmax, ymax)

Selects boxes in range (xmin,ymin,xmax,ymax).

Parameters
  • boxes (tensor) – bounding boxes of (xmin,ymin,xmax,ymax), sized [N,4].

  • xmin (number) – min value of x.

  • ymin (number) – min value of y.

  • xmax (number) – max value of x.

  • ymax (number) – max value of y.

Returns

(tensor) selected boxes, sized [M,4]. (tensor) selected mask, sized [N,].

metavision_ml.detection.box.deltas_to_bbox(loc_preds, default_boxes, variances=[0.1, 0.2])

Converts boxes expressed in anchor-relative coordinates to absolute coordinates.

Parameters
  • loc_preds – deltas boxes Nx4 tensor

  • default_boxes – cxcywh anchor boxes Nx4 tensor

  • variances – variances according to SSD paper.

Returns

boxes expressed in absolute coordinates.

Return type

box_preds
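
And the matching inverse transform, again as a hedged sketch (output assumed in xyxy format):

import torch

def deltas_to_bbox_sketch(deltas, anchors_cxcywh, variances=(0.1, 0.2)):
    d_cxcy, d_wh = anchors_cxcywh[:, :2], anchors_cxcywh[:, 2:]
    cxcy = deltas[:, :2] * variances[0] * d_wh + d_cxcy   # undo center normalization
    wh = torch.exp(deltas[:, 2:] * variances[1]) * d_wh   # undo log-size encoding
    return torch.cat([cxcy - 0.5 * wh, cxcy + 0.5 * wh], dim=1)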

metavision_ml.detection.box.pack_boxes_list(targets)

Packs targets together.

Because the numpy arrays have variable lengths, we pad each group.

Parameters

targets – list of np.ndarray in struct BBOX_dtype

Returns

packed targets in shape [x1,y1,x2,y2,label]; num_boxes: list of the number of boxes per frame
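
A hedged sketch of the padding idea; the BBOX_dtype field names used below are assumptions for illustration only:

import torch

def pack_boxes_list_sketch(targets):
    # targets: list of structured numpy arrays, one per frame, with variable lengths.
    num_boxes = [len(t) for t in targets]
    max_n = max(num_boxes) if num_boxes else 0
    packed = torch.zeros(len(targets), max_n, 5)           # zero padding for short frames
    for i, frame_boxes in enumerate(targets):
        for j, box in enumerate(frame_boxes):
            x1, y1 = float(box['x']), float(box['y'])      # hypothetical field names
            x2, y2 = x1 + float(box['w']), y1 + float(box['h'])
            packed[i, j] = torch.tensor([x1, y1, x2, y2, float(box['class_id'])])
    return packed, num_boxes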

metavision_ml.detection.box.xywh2xyxy(boxes)

Changes box order from (xcenter, ycenter, width, height) to (xmin,ymin,xmax,ymax).

Parameters

boxes (tensor) – bounding boxes, sized [N,4].

Returns

converted bounding boxes, sized [N,4].

Return type

boxes (tensor)

metavision_ml.detection.box.xyxy2xywh(boxes)

Changes box order from (xmin,ymin,xmax,ymax) to (xcenter,ycenter,width,height).

Parameters

boxes (tensor) – bounding boxes, sized [N,4].

Returns

converted bounding boxes, sized [N,4].

Return type

boxes (tensor)
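
Both conversions are simple arithmetic (pixel conventions aside). Written out on a single box:

import torch

boxes_xyxy = torch.tensor([[10., 20., 50., 80.]])
# center form: cx = (10 + 50) / 2 = 30, cy = (20 + 80) / 2 = 50, w = 40, h = 60
boxes_xywh = torch.cat([(boxes_xyxy[:, :2] + boxes_xyxy[:, 2:]) / 2,
                        boxes_xyxy[:, 2:] - boxes_xyxy[:, :2]], dim=1)
print(boxes_xywh)  # tensor([[30., 50., 40., 60.]])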

Detection Data Factory: a collection of DataLoaders. You can add your own here if sequential_dataset is not a good fit.

metavision_ml.detection.data_factory.get_classes_from_json(json_file_path)

Reads classes from a JSON file

Parameters

json_file_path – path to the json file

Returns

classes
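
A hedged sketch of how such a file could be read; the exact key layout of the JSON is an assumption:

import json

# Assumed layout: ids mapped to class names, e.g. {"0": "pedestrian", "1": "car"}.
with open("label_map_dictionary.json") as f:
    label_map = json.load(f)
classes = list(label_map.values())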

metavision_ml.detection.data_factory.get_classes_from_label_map_fnn(dataset_path)

Reads classes for NonSequentialDataset

Parameters

dataset_path – path to dataset containing ‘label_map_dictionary_fnn.json’

Returns

classes

metavision_ml.detection.data_factory.get_classes_from_label_map_rnn(dataset_path)

Reads classes for SequentialDataset

Parameters

dataset_path – path to dataset containing ‘label_map_dictionary.json’

Returns

classes

metavision_ml.detection.data_factory.psee_data(hparams: argparse.Namespace, mode: str, empty_label: str = 'empty') → metavision_ml.data.sequential_dataset.SequentialDataLoader

Generates a PSEE sequential dataset from a dataset path with the following structure:

[dataset_path]
|–> label_map_dictionary.json
|–> [train] folder
|–> [val] folder
|–> [test] folder (optional)

Parameters
  • hparams – params of pytorch lightning module

  • mode – section of data “train”, “val” or “test”

  • empty_label – labels used to mark frames without any relevant bbox

metavision_ml.detection.data_factory.setup_psee_classes(label_map_path, wanted_classes=[])

Sets up classes and lookup for loading.

Parameters
  • label_map_path – path to the JSON file containing the label dictionary

  • wanted_classes – if empty, return all available classes except the “empty” frame label.

metavision_ml.detection.data_factory.setup_psee_load_labels(label_map_path, num_tbins, wanted_classes=[], min_box_diag_network=30, interpolation=True)

Sets up Gen1/Gen4 label loading

Parameters
  • label_map_path – path to the JSON file containing the label dictionary

  • wanted_classes – if empty, return all available classes except the “empty” frame label.

  • num_tbins – number of time-bins per batch

  • min_box_diag_network – minimum box size to keep

This defines several neural networks. They take an image or a sequence as input and output a feature pyramid (or a sequence of pyramids).

class metavision_ml.detection.feature_extractors.Vanilla(cin=1, base=16, cout=256)

Baseline architecture getting 0.4 mAP on the HD Event-based Automotive Detection Dataset.

It consists of Squeeze-Excite Blocks to stride 16 and then 5 levels of Convolutional-RNNs.

Each level can then be fed to a special head for predicting bounding boxes for example.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class metavision_ml.detection.feature_extractors.Vanilla_VGG(cin=1, base=16, cout=256)

Baseline architecture getting 0.3 mAP on the HD Event-based Automotive Detection Dataset.

It consists of VGG blocks to stride 16 and then 5 levels of Convolutional-RNNs.

Each level can then be fed to a special head for predicting bounding boxes for example.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class metavision_ml.detection.feature_extractors.Vanilla_VGGRU(cin=1, base=16, cout=256)

Baseline architecture with an alternate RNN Cell

It consists of VGG blocks to stride 16 and then 5 levels of Convolutional-RNNs of the GRU type.

Each level can then be fed to a special head for predicting bounding boxes for example.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

This script instantiates a jittable single-shot detector. It is called by export.py. It allows exporting to the C++ Detection & Tracking application.

class metavision_ml.detection.jitting.BoxDecoder

Jittable Box Decoding

This reuses the anchor module, whose forward is jittable, but not all of its other functions such as encode or decode.

It decodes 2 prediction tensors (regression & box classification), of shapes N,Nanchors,4 and N,Nanchors,C (with Nanchors: number of anchors & C: number of classes), into a nested list of torch.Tensors [x1,y1,w,h,score,class_id].

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(anchors, loc_preds, cls_preds, varx: float, vary: float, score_thresh: float, batch_size: int)

For torch.jit, this forward does not perform NMS.

class metavision_ml.detection.jitting.SSD(net, anchor_list='PSEE_ANCHORS')

This module can be exported to C++. Given an input tensor, it outputs a nested list of filtered boxes.

Parameters
  • net – neural network model

  • anchor_list – anchor configuration

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, score_thresh: float = 0.4) → List[List[torch.Tensor]]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

metavision_ml.detection.jitting.export_lightning_model(lightning_model, out_directory, nms_thresh, score_thresh, precision=32)

Exports lightning model to Torch JIT & json parameter files read by the C++ D&T application.

Parameters
  • lightning_model – pytorch lightning class

  • out_directory – output directory

  • nms_thresh – IOU threshold used for NMS

  • score_thresh – confidence score threshold below which boxes are discarded

  • precision (int) – save the model in float 16 or 32 precision

metavision_ml.detection.jitting.export_ssd(ssd, params, out_directory, nms_thresh, score_thresh, precision=32)

Exports Jitted class SSD & json parameter files read by the C++ D&T application

Parameters
  • ssd – jitted class

  • params – hyper parameters

  • out_directory – output directory

  • nms_thresh – IOU threshold used for NMS

  • score_thresh – confidence score threshold below which boxes are discarded

  • precision (int) – save the model in float 16 or 32 precision

This script contains test cases for exporting an RNN detector.

metavision_ml.detection.jitting_test.testcase_forward_network_with_and_without_box_decoding(nn_filename, height=120, width=160, device='cpu')

Checks that forward() works on the torch.jit model, on the device specified by the “device” parameter, with and without torch.no_grad()

metavision_ml.detection.jitting_test.testcase_torch_jit_reset(nn_filename, height=120, width=160, device='cpu')
Tests the fact that function reset_all() works properly:
  • memory cell and activations are indeed set to zero

  • when providing the same input tensor to the network, the outputs we obtain from the first propagation following a reset should be identical to the outputs we obtain from the first propagation after loading the model

Pytorch Lightning Module for training the detector

class metavision_ml.detection.lightning_model.LightningDetectionModel(hparams: argparse.Namespace)

Pytorch Lightning model for neural network to predict boxes.

The detector built by build_ssd should be a Detector with “compute_loss” and “get_boxes” implemented.

Parameters

hparams (argparse.Namespace) – argparse from train.py application

accumulate_predictions(preds, targets, video_infos, frame_is_labeled)

Accumulates predictions to run COCO metrics on the full videos

configure_optimizers()

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.

Returns

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • Tuple of dictionaries as described above, with an optional "frequency" key.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

Metrics can be made available to monitor by simply logging it using self.log('metric_to_track', metric_val) in your LightningModule.

Note

The frequency value specified in a dict along with the optimizer key is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1:

  • In the former case, all optimizers will operate on the given batch in each optimization step.

  • In the latter, only one optimizer will operate on the given batch at every step.

This is different from the frequency value specified in the lr_scheduler_config mentioned above.

def configure_optimizers(self):
    optimizer_one = torch.optim.SGD(self.model.parameters(), lr=0.01)
    optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01)
    return [
        {"optimizer": optimizer_one, "frequency": 5},
        {"optimizer": optimizer_two, "frequency": 10},
    ]

In this example, the first optimizer will be used for the first 5 steps, the second optimizer for the next 10 steps and that cycle will continue. If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above dict, the scheduler will only be updated when its optimizer is being used.

Examples:

# most cases. no learning rate scheduler
def configure_optimizers(self):
    return Adam(self.parameters(), lr=1e-3)

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    return gen_opt, dis_opt

# example with learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    dis_sch = CosineAnnealing(dis_opt, T_max=10)
    return [gen_opt, dis_opt], [dis_sch]

# example with step-based learning rate schedulers
# each optimizer has its own scheduler
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    gen_sch = {
        'scheduler': ExponentialLR(gen_opt, 0.99),
        'interval': 'step'  # called after each training step
    }
    dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
    return [gen_opt, dis_opt], [gen_sch, dis_sch]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )

Note

Some things to know:

  • Lightning calls .backward() and .step() on each optimizer as needed.

  • If learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers.

  • If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, gradients will be calculated only for the parameters of current optimizer at each training step.

  • If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.

demo_video(epoch=0, num_batches=100, show_video=False)

This runs our detector on several videos of the testing dataset

forward(x)

Same as torch.nn.Module.forward().

Parameters
  • *args – Whatever you decide to pass into the forward method.

  • **kwargs – Keyword arguments are also possible.

Returns

Your model’s output

inference_epoch_end(outputs, mode='val')

Runs Metrics

Parameters
  • outputs – accumulated outputs

  • mode – ‘val’ or ‘test’

inference_step(batch, batch_nb)

One step of validation

load_pretrained(checkpoint_path)

Loads a pretrained detector (of this class) and transfers the weights to this module for fine-tuning.

In addition, it may remap the old classification weights if some overlap exists between the old and new lists of classes.

Parameters

checkpoint_path (str) – path to checkpoint of pretrained detector.

test_dataloader()

Implement one or multiple PyTorch DataLoaders for testing.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

do not assign state in prepare_data

  • test()

  • prepare_data()

  • setup()

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Returns

A torch.utils.data.DataLoader or a sequence of them specifying testing samples.

Example:

def test_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def test_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a test dataset and a test_step(), you don’t need to implement this method.

Note

In the case where you return multiple test dataloaders, the test_step() will have an argument dataloader_idx which matches the order here.

test_epoch_end(outputs)

Called at the end of a test epoch with the output of all test steps.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters

outputs – List of outputs you defined in test_step_end(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader

Returns

None

Note

If you didn’t define a test_step(), this won’t be called.

Examples

With a single dataloader:

def test_epoch_end(self, outputs):
    # do something with the outputs of all test batches
    all_test_preds = outputs.predictions

    some_result = calc_all_results(all_test_preds)
    self.log(some_result)

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each test step for that dataloader.

def test_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for test_step_out in dataloader_outputs:
            # do something
            final_value += test_step_out

    self.log("final_metric", final_value)

test_step(batch, batch_nb)

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple test dataloaders used).

Returns

Any of.

  • Any object or value

  • None - Testing will skip to the next batch

# if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

train_dataloader()

Implement one or more PyTorch DataLoaders for training.

Returns

A collection of torch.utils.data.DataLoader specifying training samples. In the case of multiple dataloaders, please see this section.

The dataloader you return will not be reloaded unless you set reload_dataloaders_every_n_epochs in the Trainer to a positive integer.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

do not assign state in prepare_data

  • fit()

  • prepare_data()

  • setup()

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Example:

# single dataloader
def train_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=True
    )
    return loader

# multiple dataloaders, return as list
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a list of tensors: [batch_mnist, batch_cifar]
    return [mnist_loader, cifar_loader]

# multiple dataloader, return as dict
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a dict of tensors: {'mnist': batch_mnist, 'cifar': batch_cifar}
    return {'mnist': mnist_loader, 'cifar': cifar_loader}

training_epoch_end(outputs)

Called at the end of the training epoch with the outputs of all training steps. Use this in case you need to do something with all the outputs returned by training_step().

# the pseudocode for these calls
train_outs = []
for train_batch in train_data:
    out = training_step(train_batch)
    train_outs.append(out)
training_epoch_end(train_outs)
Parameters

outputs – List of outputs you defined in training_step(). If there are multiple optimizers or when using truncated_bptt_steps > 0, the lists have the dimensions (n_batches, tbptt_steps, n_optimizers). Dimensions of length 1 are squeezed.

Returns

None

Note

If this method is not overridden, this won’t be called.

def training_epoch_end(self, training_step_outputs):
    # do something with all training_step outputs
    for out in training_step_outputs:
        ...

training_step(batch, batch_nb)

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • optimizer_idx – When using multiple optimizers, this argument will also be present.

  • hiddens – Passed in if truncated_bptt_steps > 0.

Returns

Any of.

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'

  • None - Training will skip to the next batch. This is only for automatic optimization.

    This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    out, hiddens = self.lstm(data, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}

Note

The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

val_dataloader()

Implement one or multiple PyTorch DataLoaders for validation.

The dataloader you return will not be reloaded unless you set reload_dataloaders_every_n_epochs in the Trainer to a positive integer.

It’s recommended that all data downloads and preparation happen in prepare_data().

  • fit()

  • validate()

  • prepare_data()

  • setup()

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Returns

A torch.utils.data.DataLoader or a sequence of them specifying validation samples.

Examples:

def val_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def val_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.

Note

In the case where you return multiple validation dataloaders, the validation_step() will have an argument dataloader_idx which matches the order here.

validation_epoch_end(outputs)

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters

outputs – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns

None

Note

If you didn’t define a validation_step(), this won’t be called.

Examples

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    self.log("final_metric", final_value)

validation_step(batch, batch_nb)

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple val dataloaders used)

Returns

  • Any object or value

  • None - Validation will skip to the next batch

# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

metavision_ml.detection.lightning_model.bboxes_to_box_vectors(bbox)

Converts bbox arrays to a uniform [x1,y1,x2,y2,class_id,track_id] vector format.

Detection Losses

class metavision_ml.detection.losses.DetectionLoss(cls_loss_func='softmax_focal_loss')

Loss for Detection following SSD.

This class returns 2 losses: one for anchor classification and one for anchor refinement.

Parameters

cls_loss_func (str) – classification loss type

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(loc_preds, loc_targets, cls_preds, cls_targets)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

metavision_ml.detection.losses.reduce(loss, mode='none')

Reduces the loss according to the given mode.

Parameters
  • loss – multidim loss

  • mode – either “mean”, “sum” or “none”

Returns

reduced loss

metavision_ml.detection.losses.smooth_l1_loss(pred, target, beta=0.11, reduction='sum')

Smooth L1 loss

Parameters
  • pred – positive anchors predictions [N, 4]

  • target – positive anchors targets [N, 4]

  • beta – threshold between L2 and L1 behavior
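
For reference, the standard beta-thresholded smooth L1, as a sketch (the library's masking and reduction handling may differ):

import torch

def smooth_l1_sketch(pred, target, beta=0.11):
    diff = (pred - target).abs()
    # quadratic below beta (L2-like), linear above (L1-like)
    return torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)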

metavision_ml.detection.losses.softmax_focal_loss(pred, target, reduction='none')

Softmax focal loss

Parameters
  • pred – [N, A, C+1]

  • target – [N, A] (-1: ignore, 0: background, [1,C]: classes)

  • reduction – ‘sum’, ‘mean’, ‘none’

Returns

reduced loss
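
A hedged sketch of a softmax focal loss with the ignore convention documented above; gamma is an assumed hyperparameter value:

import torch
import torch.nn.functional as F

def softmax_focal_loss_sketch(pred, target, gamma=2.0):
    # pred: [N, A, C+1] logits, target: [N, A] with -1 = ignore, 0 = background.
    valid = target >= 0
    logp = F.log_softmax(pred[valid], dim=-1)                       # [num_valid, C+1]
    logp_t = logp.gather(1, target[valid].long().unsqueeze(1)).squeeze(1)
    return -((1.0 - logp_t.exp()) ** gamma) * logp_t                # per-anchor loss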

Box Regression and Classification

class metavision_ml.detection.rpn.BoxHead(in_channels, num_anchors, num_logits, n_layers=3)

Shared prediction head for boxes. Applies 2 small stride-1 mini-convnets to predict class and box deltas. Reshapes the predictions and concatenates them to output 2 vectors, “loc” and “cls”.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(xs: List[torch.Tensor])List[torch.Tensor]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

metavision_ml.detection.rpn.softmax_init(l, num_anchors, num_logits)

Softmax initialization:

We derive the initialization from a target background probability of 0.99, following “Focal Loss for Dense Object Detection” (Lin et al.).

Parameters
  • l – linear layer

  • num_anchors – number of anchors of prediction

  • num_logits – number of classes + 1
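
A hedged sketch of the derivation: with all other biases at zero, choosing the background bias b so that the softmax assigns probability 0.99 to background gives b = log(0.99 * C / 0.01), with C = num_logits - 1. The per-anchor logit layout (background first) is an assumption:

import math
import torch
import torch.nn as nn

def softmax_init_sketch(l, num_anchors, num_logits, prior=0.99):
    # Zero weights and biases, then raise only the background bias so the
    # initial softmax output is roughly `prior` for background everywhere.
    nn.init.constant_(l.weight, 0.0)
    nn.init.constant_(l.bias, 0.0)
    bias_bg = math.log(prior * (num_logits - 1) / (1.0 - prior))
    with torch.no_grad():
        l.bias.view(num_anchors, num_logits)[:, 0] = bias_bg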

Detector Interface

class metavision_ml.detection.single_stage_detector.Detector

This is an interface for neural networks learning to predict boxes. The trainer expects “compute_loss” and “get_boxes” to be implemented.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

class metavision_ml.detection.single_stage_detector.SingleStageDetector(feature_extractor, in_channels, num_classes, feature_base, feature_channels_out, anchor_list, nlayers=0, max_boxes_per_input=500)

This is an interface for “single stage” methods (e.g. RetinaNet, SSD, etc.). A minimal instantiation sketch is given below, after the parameter list.

Parameters
  • feature_extractor (string) – name of the feature extractor architecture

  • in_channels (int) – number of channels for the input layer

  • num_classes (int) – number of output classes for the classifier head

  • feature_base (int) – factor to grow the feature extractor width

  • feature_channels_out (int) – number of output channels for the feature extractor

  • anchor_list (couple list) – list of couple (aspect ratio, scale) to be used for each extracted feature

  • max_boxes_per_input (int) – max number of boxes to be considered before thresholding or NMS.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
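
A hedged instantiation sketch; the extractor name and the anchor configuration below are illustrative assumptions, not recommended settings:

from metavision_ml.detection.single_stage_detector import SingleStageDetector

# Assumed: "Vanilla" names the feature extractor documented above, and anchor_list
# is a list of (aspect ratio, scale) couples as described in the parameters.
detector = SingleStageDetector(
    feature_extractor="Vanilla",
    in_channels=1,
    num_classes=2,
    feature_base=16,
    feature_channels_out=256,
    anchor_list=[(0.5, 1), (1, 1), (2, 1)],
)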

forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

map_cls_weights(module_src, old_classes, new_classes)

Maps old classes to new classes if some overlap exists between old classes and new ones.

Parameters
  • module_src – old model

  • old_classes – old list of classes

  • new_classes – new list of classes