SDK ML Detection API
This module contains the classes to encode/decode ground truth into "anchor boxes".
All targets are encoded in parallel for better throughput.
The module can resize its grid internally (so batches can change size).
- class metavision_ml.detection.anchors.AnchorLayer(box_size=32, anchor_list=[(0.3333333333333333, 1), (0.5, 1), (1, 1), (1, 1.5), (2, 1), (3, 1)])
Manages one anchor grid (x, y, w, h) for one level of the pyramid.
The anchor grid has shape (height, width, num_anchors_per_position, 4).
The grid is cached, but is regenerated if the feature map size changes.
- Parameters
box_size (int) – base size for anchor box
anchor_list (List) – a list of (ratio, scale) tuples configuring the anchors
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(shape: torch.Tensor, stride: int)
Generates anchors
- Parameters
shape (torch.Tensor) – shape of feature map
stride (int) – stride compared to network input
- static generate_anchors(box_size, ratio_scale_list)
Generates the anchor sizes
- Parameters
box_size (int) – base size for anchor boxes
ratio_scale_list (List) – a list of (ratio, scale) tuples configuring the anchors
- Returns
Nx2 (width, height)
- Return type
anchors (torch.Tensor)
- make_grid(height: int, width: int, stride: int)
Makes a coordinate grid for anchor boxes
- Parameters
height (int) – feature map height
width (int) – feature map width
stride (int) – stride of the feature map relative to the network input
- Returns
(H, W, NanchorPerPixel, 2) sizes (width & height) per anchor per pixel
- Return type
grid (torch.Tensor)
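For illustration, a minimal sketch of calling the static generate_anchors method documented above; the exact width/height values depend on how the library combines ratio and scale, so only the documented output shape is assumed here.

    from metavision_ml.detection.anchors import AnchorLayer

    # one pyramid level with base box size 32 and three (ratio, scale) pairs
    sizes = AnchorLayer.generate_anchors(32, [(0.5, 1), (1, 1), (2, 1)])
    print(sizes.shape)  # expected: (3, 2), one (width, height) row per pair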
- class metavision_ml.detection.anchors.Anchors(num_levels=4, base_size=32, anchor_list='PSEE_ANCHORS', fg_iou_threshold=0.5, bg_iou_threshold=0.3, allow_low_quality_matches=True, variances=[0.1, 0.2], max_decode=False)
Pyramid of anchoring grids. Handles the encoding/decoding algorithms. Encoding uses padding in order to parallelize IoU & assignment computation. Decoding uses torchvision's "batched_nms" to parallelize across images and classes. The option "max_decode" means we only decode the best score; otherwise we decode per class.
- Parameters
num_levels – number of pyramid levels
base_size – minimum box size
sizes – box sizes per level
anchor_list – list of anchors sizes per pyramid level
fg/bg_iou_threshold – thresholds to accept/reject a matching anchor
allow_low_quality_matches – assign every ground truth box to an anchor even if no anchor really matches
variances – box variances following the SSD formula
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- batched_decode(boxes, scores, score_thresh, nms_thresh, topk=100)
Decodes all prediction vectors across batch, classes and time.
- Parameters
boxes (torch.Tensor) – (N,Nanchors,4)
scores (torch.Tensor) – (N,Nanchors,C) with C classes, excluding background.
score_thresh (float) – score threshold applied before NMS
nms_thresh (float) – IoU threshold used for NMS
topk (int) – maximum number of anchors considered for NMS.
- decode(features, x, loc_preds, scores, batch_size, score_thresh=0.5, nms_thresh=0.6, max_boxes_per_input=500)
Decodes prediction vectors into boxes
- Parameters
features (list) – list of feature maps
x (torch.Tensor) – network’s input
loc_preds (torch.Tensor) – regression prediction vector (N,Nanchors,4)
scores (torch.Tensor) – score prediction vector (N,Nanchors,C) with C classes (background is excluded)
score_thresh (float) – apply this threshold before nms
nms_thresh (float) – grouping threshold on IOU similarity between boxes
max_boxes_per_input (int) – maximum number of boxes per image. Too small might reduce recall, too high might entail extra computational cost in decoding, loss or evaluation.
- Returns
list of list of decoded boxes
- Return type
decoded boxes (List)
- encode(features, x, targets)
Encodes input and features into target vectors expressed in anchor coordinate system.
- Parameters
features (list) – list of feature maps
x (torch.Tensor) – network's input, at its original size
targets (List) – list of list of targets
- Returns
loc (torch.Tensor): encoded anchor regression targets
cls (torch.Tensor): encoded anchor classification targets
- encode_anchors(anchors, anchors_xyxy, targets)
Encodes targets into target vectors expressed in anchor coordinate system.
- Parameters
anchors (torch.Tensor) – anchors in cx,cy,w,h format
anchors_xyxy (torch.Tensor) – anchors in x1,y1,x2,y2 format
targets (List) – list of list of targets
- Returns
loc (torch.Tensor): encoded anchor regression targets
cls (torch.Tensor): encoded anchor classification targets
- forward(xs: List[torch.Tensor], x: torch.Tensor)
Generates Anchors
- Parameters
xs (List) – list of feature maps
x (torch.Tensor) – network’s input
- Returns
(N,Nanchors,4) anchor boxes in (cx,cy,w,h) format
- Return type
anchors (torch.Tensor)
- has_changed(xs: List[torch.Tensor], x: torch.Tensor)
Detects if the feature map sizes have changed.
- Parameters
xs (List) – list of feature maps
x (torch.Tensor) – network’s input
- set_low_quality_matches(ious, batch_best_target_per_prior_indices, batch_best_target_per_prior, sizes)
Makes sure that every GT is assigned to at least 1 anchor.
- Parameters
ious (torch.Tensor) – (N,Nanchors,MaxGT) IOU cost matrix
batch_best_target_per_prior_indices (torch.Tensor) – (N,Nanchors)
sizes (int list) – number of valid (non-padding) boxes for each bin
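A hedged sketch of how the encode/decode pair is typically used around a detector. Here features, x, targets, loc_preds, scores and batch_size are placeholders standing for the feature pyramid, the network input, the ground-truth boxes and the box-head outputs, so this illustrates the call pattern rather than being runnable on its own.

    from metavision_ml.detection.anchors import Anchors

    def encode_decode_example(features, x, targets, loc_preds, scores, batch_size):
        # placeholder arguments: feature pyramid, network input, labels, box-head outputs
        box_coder = Anchors(num_levels=4, base_size=32)

        # training: turn ground truth into per-anchor regression & classification targets
        loc_targets, cls_targets = box_coder.encode(features, x, targets)

        # inference: turn prediction vectors back into per-image lists of boxes
        detections = box_coder.decode(features, x, loc_preds, scores, batch_size,
                                      score_thresh=0.5, nms_thresh=0.6)
        return loc_targets, cls_targets, detections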
torch box API
- metavision_ml.detection.box.assign_priors(gt_boxes, gt_labels, corner_form_priors, fg_iou_threshold, bg_iou_threshold, allow_low_quality_matches=True)
Assigns ground truth boxes as targets to priors (also called anchor boxes).
- Parameters
gt_boxes (tensor) – ground truth boxes tensor of shape (num_targets, 4).
gt_labels (tensor) – int tensor of size (num_target) containing the class labels of targets.
corner_form_priors (tensor) – tensor of shape (num_priors, 4), contains the priors boxes in format (xmin, ymin, xmax, ymax).
fg_iou_threshold (float) – minimal iou with a prior box to be considered a positive match (to be assigned a ground truth box)
bg_iou_threshold (float) – below this iou threshold a prior box is considered to match the background (the prior box doesn’t match any ground truth box)
allow_low_quality_matches (boolean) – allow bad matches to be considered anyway.
- Returns
boxes (tensor): of shape (num_priors, 4), coordinate values of the ground truth box assigned to each prior box, in the format (xmin, ymin, xmax, ymax)
labels (tensor): of shape (num_priors), containing the class label of the ground truth box assigned to each prior.
- metavision_ml.detection.box.batch_box_iou(box1, box2)
Computes the intersection over union of two sets of boxes.
The box order must be (xmin, ymin, xmax, ymax).
- Parameters
box1 – (tensor) bounding boxes, sized [N,4].
box2 – (tensor) bounding boxes, sized [B,M,4].
- Returns
(tensor) iou, sized [N,M].
- metavision_ml.detection.box.bbox_to_deltas(boxes, default_boxes, variances=[0.1, 0.2], max_width=10000)
Converts boxes expressed in absolute coordinates to anchor-relative coordinates.
- Parameters
boxes – xyxy boxes Nx4 tensor
default_boxes – cxcywh anchor boxes Nx4 tensor
variances – variances according to SSD paper.
max_width – additional clamping to avoid infinite gt boxes.
- Returns
boxes expressed relatively to the center and size of anchor boxes.
- Return type
deltas
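As a reference for the math behind bbox_to_deltas, here is a minimal sketch of the standard SSD-style encoding with variances [0.1, 0.2]. It assumes both boxes and anchors are given in (cx, cy, w, h) format, whereas the library function takes xyxy boxes and applies extra clamping via max_width, so treat it as an illustration of the formula, not a drop-in replacement.

    import torch

    def encode_deltas(boxes_cxcywh, anchors_cxcywh, variances=(0.1, 0.2)):
        # center offsets, normalized by anchor size and the first variance
        txy = (boxes_cxcywh[:, :2] - anchors_cxcywh[:, :2]) / (anchors_cxcywh[:, 2:] * variances[0])
        # log-ratio of the sizes, scaled by the second variance
        twh = torch.log(boxes_cxcywh[:, 2:] / anchors_cxcywh[:, 2:]) / variances[1]
        return torch.cat([txy, twh], dim=1)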
- metavision_ml.detection.box.box_clamp(boxes, xmin, ymin, xmax, ymax)
Clamps boxes.
- Parameters
boxes (tensor) – bounding boxes of (xmin,ymin,xmax,ymax), sized [N,4].
xmin (number) – min value of x.
ymin (number) – min value of y.
xmax (number) – max value of x.
ymax (number) – max value of y.
- Returns
(tensor) clamped boxes.
- metavision_ml.detection.box.box_iou(box1, box2)
Computes the intersection over union of two sets of boxes.
The box order must be (xmin, ymin, xmax, ymax).
- Parameters
box1 – (tensor) bounding boxes, sized [N,4].
box2 – (tensor) bounding boxes, sized [M,4].
- Returns
(tensor) iou, sized [N,M].
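A self-contained sketch of the pairwise IoU described above (boxes in xmin, ymin, xmax, ymax order); torchvision.ops.box_iou computes the same quantity.

    import torch

    def iou(box1, box2):
        # box1: [N, 4], box2: [M, 4] -> IoU matrix [N, M]
        lt = torch.max(box1[:, None, :2], box2[None, :, :2])  # intersection top-left
        rb = torch.min(box1[:, None, 2:], box2[None, :, 2:])  # intersection bottom-right
        wh = (rb - lt).clamp(min=0)
        inter = wh[..., 0] * wh[..., 1]
        area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
        area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
        return inter / (area1[:, None] + area2[None, :] - inter)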
- metavision_ml.detection.box.box_nms(bboxes, scores, threshold=0.5)
Non-maximum suppression.
- Parameters
bboxes – (tensor) bounding boxes, sized [N,4].
scores – (tensor) confidence scores, sized [N,].
threshold – (float) overlap threshold.
- Returns
(tensor) selected indices.
- Return type
keep
- metavision_ml.detection.box.box_select(boxes, xmin, ymin, xmax, ymax)
Selects boxes in range (xmin,ymin,xmax,ymax).
- Parameters
boxes (tensor) – bounding boxes of (xmin,ymin,xmax,ymax), sized [N,4].
xmin (number) – min value of x.
ymin (number) – min value of y.
xmax (number) – max value of x.
ymax (number) – max value of y.
- Returns
(tensor) selected boxes, sized [M,4].
(tensor) selected mask, sized [N,].
- metavision_ml.detection.box.deltas_to_bbox(loc_preds, default_boxes, variances=[0.1, 0.2])
Converts boxes expressed in anchor-relative coordinates to absolute coordinates.
- Parameters
loc_preds – deltas boxes Nx4 tensor
default_boxes – cxcywh anchor boxes Nx4 tensor
variances – variances according to SSD paper.
- Returns
boxes expressed in absolute coordinates.
- Return type
box_preds
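The inverse of the encoding sketch above, again in SSD style; the output format here is chosen as (xmin, ymin, xmax, ymax), which may differ from the exact format returned by deltas_to_bbox.

    import torch

    def decode_deltas(loc_preds, anchors_cxcywh, variances=(0.1, 0.2)):
        # recover the center from the normalized offsets
        cxcy = loc_preds[:, :2] * variances[0] * anchors_cxcywh[:, 2:] + anchors_cxcywh[:, :2]
        # recover the size from the log-ratio
        wh = torch.exp(loc_preds[:, 2:] * variances[1]) * anchors_cxcywh[:, 2:]
        return torch.cat([cxcy - wh / 2, cxcy + wh / 2], dim=1)  # (xmin, ymin, xmax, ymax)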
- metavision_ml.detection.box.pack_boxes_list(targets)
Packs targets together.
Because the numpy arrays have variable length, each group is padded.
- Parameters
targets – list of np.ndarray in struct BBOX_dtype
- Returns
packed targets in shape [x1,y1,x2,y2,label]
num_boxes: list of number of boxes per frame
- metavision_ml.detection.box.xywh2xyxy(boxes)
Changes box order from (xcenter, ycenter, width, height) to (xmin,ymin,xmax,ymax).
- Parameters
boxes (tensor) – bounding boxes, sized [N,4].
- Returns
converted bounding boxes, sized [N,4].
- Return type
boxes (tensor)
- metavision_ml.detection.box.xyxy2xywh(boxes)
Changes box order from (xmin,ymin,xmax,ymax) to (xcenter,ycenter,width,height).
- Parameters
boxes (tensor) – bounding boxes, sized [N,4].
- Returns
converted bounding boxes, sized [N,4].
- Return type
boxes (tensor)
Detection Data Factory
Collection of DataLoaders. You can add your own here if sequential_dataset is not a good fit.
- metavision_ml.detection.data_factory.get_classes_from_json(json_file_path)
Reads classes from a JSON file
- Parameters
json_file_path – path to the json file
- Returns
classes
- metavision_ml.detection.data_factory.get_classes_from_label_map_fnn(dataset_path)
Read classes for NonSequentialDataset
- Parameters
dataset_path – path to dataset containing ‘label_map_dictionary_fnn.json’
- Returns
classes
- metavision_ml.detection.data_factory.get_classes_from_label_map_rnn(dataset_path)
Read classes for SequentialDataset
- Parameters
dataset_path – path to dataset containing ‘label_map_dictionary.json’
- Returns
classes
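For illustration, a hedged sketch of what reading such a label-map file amounts to; the key/value layout of 'label_map_dictionary.json' is an assumption here, so rely on the functions above rather than this snippet for real datasets.

    import json

    def read_classes(json_file_path):
        # hypothetical layout: {"0": "background", "1": "pedestrian", ...}
        with open(json_file_path, "r") as f:
            label_map = json.load(f)
        return list(label_map.values())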
- metavision_ml.detection.data_factory.psee_data(hparams: argparse.Namespace, mode: str, empty_label: str = 'empty') metavision_ml.data.sequential_dataset.SequentialDataLoader
Generates a PSEE sequential dataset from a dataset path with the following structure: [dataset_path]
- Parameters
hparams – params of pytorch lightning module
mode – section of data “train”, “val” or “test”
empty_label – labels used to mark frames without any relevant bbox
- metavision_ml.detection.data_factory.setup_psee_classes(label_map_path, wanted_classes=[])
Sets up classes and the lookup used for loading.
- Parameters
label_map_path – path to the json containing the label dictionary
wanted_classes – if empty, return all available ones except the “empty” frame label.
- metavision_ml.detection.data_factory.setup_psee_load_labels(label_map_path, num_tbins, wanted_classes=[], min_box_diag_network=30, interpolation=True)
Sets up Gen1/Gen4 label loading
- Parameters
label_map_path – path to the json containing the label dictionary
wanted_classes – if empty, return all available ones except the “empty” frame label.
num_tbins – number of time-bins per batch
min_box_diag_network – minimum box size to keep
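A hedged usage sketch of the two setup helpers above; the file name, the chosen classes and the return values (a class list plus a lookup, and a label-loading callable) are assumptions drawn from the descriptions, not verified behaviour.

    from metavision_ml.detection.data_factory import setup_psee_classes, setup_psee_load_labels

    label_map_path = "dataset/label_map_dictionary.json"  # hypothetical path
    classes, lookup = setup_psee_classes(label_map_path, wanted_classes=["pedestrian", "car"])
    load_labels = setup_psee_load_labels(label_map_path, num_tbins=12,
                                         wanted_classes=["pedestrian", "car"],
                                         min_box_diag_network=30)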
This defines several neural networks. They take an image/sequence as input and output a pyramid/sequence of pyramids.
- class metavision_ml.detection.feature_extractors.Vanilla(cin=1, base=16, cout=256)
Baseline architecture getting 0.4 mAP on the HD Event-based Automotive Detection Dataset.
It consists of Squeeze-Excite Blocks to stride 16 and then 5 levels of Convolutional-RNNs.
Each level can then be fed to a special head for predicting bounding boxes for example.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class metavision_ml.detection.feature_extractors.Vanilla_VGG(cin=1, base=16, cout=256)
Baseline architecture getting 0.3 mAP on the HD Event-based Automotive Detection Dataset.
It consists of VGG blocks to stride 16 and then 5 levels of Convolutional-RNNs.
Each level can then be fed to a special head for predicting bounding boxes for example.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class metavision_ml.detection.feature_extractors.Vanilla_VGGRU(cin=1, base=16, cout=256)
Baseline architecture with an alternate RNN Cell
It consists of VGG blocks to stride 16 and then 5 levels of Convolutional-RNNs of the GRU type.
Each level can then be fed to a special head for predicting bounding boxes for example.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
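A heavily hedged instantiation sketch of the feature extractors above. The constructor arguments follow the signatures shown, but the (T, N, C, H, W) sequential input layout is an assumption borrowed from the rest of metavision_ml, and the exact number of pyramid levels returned should be checked against the class itself.

    import torch
    from metavision_ml.detection.feature_extractors import Vanilla

    net = Vanilla(cin=1, base=16, cout=256)
    # assumed layout: (time bins, batch, channels, height, width)
    x = torch.zeros(2, 1, 1, 256, 320)
    pyramid = net(x)  # expected: a list of feature maps, one per pyramid level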
This script instantiates a jittable single-shot detector. It is called by export.py. It allows exporting to the C++ Detection & Tracking application.
- class metavision_ml.detection.jitting.BoxDecoder
Jittable Box Decoding
This reuses the anchor module, which is jittable for its forward but not for all its other functions such as encode or decode.
It decodes 2 prediction tensors (regression & box classification) of shape (N,Nanchors,4) and (N,Nanchors,C) (with Nanchors: number of anchors & C: number of classes) into a nested list of torch.Tensors [x1,y1,w,h,score,class_id].
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchors, loc_preds, cls_preds, varx: float, vary: float, score_thresh: float, batch_size: int)
For TorchScript, this forward does not perform NMS.
- class metavision_ml.detection.jitting.SSD(net, anchor_list='PSEE_ANCHORS')
This module can be exported to C++. Given an input tensor, it outputs a nested list of filtered boxes.
- Parameters
net – neural network model
anchor_list – anchor configuration
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, score_thresh: float = 0.4) List[List[torch.Tensor]]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- metavision_ml.detection.jitting.export_lightning_model(lightning_model, out_directory, nms_thresh, score_thresh, precision=32)
Exports lightning model to Torch JIT & json parameter files read by the C++ D&T application.
- Parameters
lightning_model – pytorch lightning class
out_directory – output directory
nms_thresh – IoU threshold used for NMS
score_thresh – score threshold applied before NMS
precision (int) – save the model in float 16 or 32 precision
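A minimal usage sketch of the export call above; lightning_model stands for a trained LightningDetectionModel, and the output directory and thresholds are placeholder values.

    from metavision_ml.detection.jitting import export_lightning_model

    # `lightning_model` is a trained LightningDetectionModel (placeholder)
    export_lightning_model(lightning_model, "./exported_model",
                           nms_thresh=0.45, score_thresh=0.4, precision=16)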
- metavision_ml.detection.jitting.export_ssd(ssd, params, out_directory, nms_thresh, score_thresh, precision=32)
Exports the jitted SSD class & json parameter files read by the C++ D&T application
- Parameters
ssd – jitted class
params – hyper parameters
out_directory – output directory
nms_thresh – IoU threshold used for NMS
score_thresh – score threshold applied before NMS
precision (int) – save the model in float 16 or 32 precision
This script contains test cases for exporting an RNN detector.
- metavision_ml.detection.jitting_test.testcase_forward_network_with_and_without_box_decoding(nn_filename, height=120, width=160, device='cpu')
Checks forward() is working on torch.jit model on the device specified by input parameter “device”, with and without torch.no_grad()
- metavision_ml.detection.jitting_test.testcase_torch_jit_reset(nn_filename, height=120, width=160, device='cpu')
- Tests that the function reset_all() works properly:
memory cell and activations are indeed set to zero
when providing the same input tensor to the network, the outputs we obtain from the first propagation following a reset should be identical to the outputs we obtain from the first propagation after loading the model
Pytorch Lightning Module for training the detector
- class metavision_ml.detection.lightning_model.LightningDetectionModel(hparams: argparse.Namespace)
Pytorch Lightning model for neural network to predict boxes.
The detector built by build_ssd should be a Detector with “compute_loss” and “get_boxes” implemented.
- Parameters
hparams (argparse.Namespace) – argparse from train.py application
- accumulate_predictions(preds, targets, video_infos, frame_is_labeled)
Accumulates predictions to run COCO metrics on the full videos
- configure_optimizers()
Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.
- Returns
Any of these 6 options.
Single optimizer.
List or Tuple of optimizers.
Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).
Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.
None - Fit will run without any optimizer.
The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

    lr_scheduler_config = {
        # REQUIRED: The scheduler instance
        "scheduler": lr_scheduler,
        # The unit of the scheduler's step size, could also be 'step'.
        # 'epoch' updates the scheduler on epoch end whereas 'step'
        # updates it after an optimizer update.
        "interval": "epoch",
        # How many epochs/steps should pass between calls to
        # `scheduler.step()`. 1 corresponds to updating the learning
        # rate after every epoch/step.
        "frequency": 1,
        # Metric to monitor for schedulers like `ReduceLROnPlateau`
        "monitor": "val_loss",
        # If set to `True`, will enforce that the value specified 'monitor'
        # is available when the scheduler is updated, thus stopping
        # training if not found. If set to `False`, it will only produce a warning
        "strict": True,
        # If using the `LearningRateMonitor` callback to monitor the
        # learning rate progress, this keyword can be used to specify
        # a custom logged name
        "name": None,
    }
When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.
Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.
Note
Some things to know:
Lightning calls .backward() and .step() automatically in case of automatic optimization.
If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default "epoch") in the scheduler configuration, Lightning will call the scheduler's .step() method automatically in case of automatic optimization.
If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.
If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.
If you use multiple optimizers, you will have to switch to 'manual optimization' mode and step them yourself.
If you need to control how often the optimizer steps, override the optimizer_step() hook.
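As a concrete illustration of the dictionary form described above, a minimal configure_optimizers sketch with one optimizer and one scheduler; the optimizer type, learning rate and scheduler are placeholder choices, not the ones used by this model.

    import torch

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-4)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "epoch"},
        }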
- demo_video(epoch=0, num_batches=100, show_video=False)
This runs our detector on several videos of the testing dataset
- forward(x)
Same as torch.nn.Module.forward().
- Parameters
*args – Whatever you decide to pass into the forward method.
**kwargs – Keyword arguments are also possible.
- Returns
Your model’s output
- inference_epoch_end(outputs, mode='val')
Runs Metrics
- Parameters
outputs – accumulated outputs
mode – ‘val’ or ‘test’
- inference_step(batch, batch_nb)
One step of validation
- load_pretrained(checkpoint_path)
Loads a pretrained detector (of this class) and transfers the weights to this module for fine-tuning.
In addition, it may remap the old classification weights if some overlap exists between the old and new lists of classes.
- Parameters
checkpoint_path (str) – path to checkpoint of pretrained detector.
- on_test_epoch_end()
Called in the test loop at the very end of the epoch.
- on_test_epoch_start()
Called in the test loop at the very beginning of the epoch.
- on_train_epoch_end()
Called in the training loop at the very end of the epoch.
To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the LightningModule and access them in this hook:

    class MyLightningModule(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.training_step_outputs = []

        def training_step(self):
            loss = ...
            self.training_step_outputs.append(loss)
            return loss

        def on_train_epoch_end(self):
            # do something with all training_step outputs, for example:
            epoch_mean = torch.stack(self.training_step_outputs).mean()
            self.log("training_epoch_mean", epoch_mean)
            # free up the memory
            self.training_step_outputs.clear()
- on_validation_epoch_end()
Called in the validation loop at the very end of the epoch.
- on_validation_epoch_start()
Called in the validation loop at the very beginning of the epoch.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see this section.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
do not assign state in prepare_data
Related hooks: test(), prepare_data(), setup().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don't need a test dataset and a test_step(), you don't need to implement this method.
- test_step(batch, batch_nb)
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
- Parameters
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'.
None - Skip to the next batch.

    # if you have one test dataloader:
    def test_step(self, batch, batch_idx): ...

    # if you have multiple test dataloaders:
    def test_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
    # CASE 1: A single test dataset
    def test_step(self, batch, batch_idx):
        x, y = batch

        # implement your own
        out = self(x)
        loss = self.loss(out, y)

        # log 6 example images
        # or generated text... or whatever
        sample_imgs = x[:6]
        grid = torchvision.utils.make_grid(sample_imgs)
        self.logger.experiment.add_image('example_images', grid, 0)

        # calculate acc
        labels_hat = torch.argmax(out, dim=1)
        test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

        # log the outputs!
        self.log_dict({'test_loss': loss, 'test_acc': test_acc})
If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

    # CASE 2: multiple test dataloaders
    def test_step(self, batch, batch_idx, dataloader_idx=0):
        # dataloader_idx tells you which dataset this is.
        ...
Note
If you don’t need to test you don’t need to implement this method.
Note
When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see this section.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
do not assign state in prepare_data
Related hooks: fit(), prepare_data(), setup().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- training_step(batch, batch_nb)
Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.
- Parameters
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns
Tensor - The loss tensor
dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.
None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
    def training_step(self, batch, batch_idx):
        x, y, z = batch
        out = self.encoder(x)
        loss = self.loss(out, x)
        return loss
To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False


    # Multiple optimizers (e.g.: GANs)
    def training_step(self, batch, batch_idx):
        opt1, opt2 = self.optimizers()

        # do training_step with encoder
        ...
        opt1.step()

        # do training_step with decoder
        ...
        opt2.step()
Note
When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see this section.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It's recommended that all data downloads and preparation happen in prepare_data().
Related hooks: fit(), validate(), prepare_data(), setup().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don't need a validation dataset and a validation_step(), you don't need to implement this method.
- validation_step(batch, batch_nb)
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
- Parameters
batch – The output of your data iterable, normally a DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)
- Returns
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'.
None - Skip to the next batch.

    # if you have one val dataloader:
    def validation_step(self, batch, batch_idx): ...

    # if you have multiple val dataloaders:
    def validation_step(self, batch, batch_idx, dataloader_idx=0): ...
Examples:
    # CASE 1: A single validation dataset
    def validation_step(self, batch, batch_idx):
        x, y = batch

        # implement your own
        out = self(x)
        loss = self.loss(out, y)

        # log 6 example images
        # or generated text... or whatever
        sample_imgs = x[:6]
        grid = torchvision.utils.make_grid(sample_imgs)
        self.logger.experiment.add_image('example_images', grid, 0)

        # calculate acc
        labels_hat = torch.argmax(out, dim=1)
        val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

        # log the outputs!
        self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

    # CASE 2: multiple validation dataloaders
    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        # dataloader_idx tells you which dataset this is.
        ...
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
- metavision_ml.detection.lightning_model.bboxes_to_box_vectors(bbox)
Converts bboxes to a uniform x1,y1,x2,y2,class_id,track_id vector format.
Detection Losses
- class metavision_ml.detection.losses.DetectionLoss(cls_loss_func='softmax_focal_loss')
Loss for Detection following SSD.
This class returns 2 losses: one for anchor classification and one for anchor refinement.
- Parameters
cls_loss_func (str) – classification loss type
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(loc_preds, loc_targets, cls_preds, cls_targets)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- metavision_ml.detection.losses.reduce(loss, mode='none')
Reduces the loss according to the given mode.
- Parameters
loss – multidim loss
mode – either “mean”, “sum” or “none”
- Returns
reduced loss
- metavision_ml.detection.losses.smooth_l1_loss(pred, target, beta=0.11, reduction='sum')
Smooth L1 loss
- Parameters
pred – positive anchors predictions [N, 4]
target – positive anchors targets [N, 4]
beta – limit between L2 and L1 behavior
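A minimal sketch of the smooth L1 formula (quadratic below beta, linear above), with the default sum reduction; the library version may differ in how it handles the reduction argument.

    import torch

    def smooth_l1(pred, target, beta=0.11):
        diff = (pred - target).abs()
        loss = torch.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
        return loss.sum()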
- metavision_ml.detection.losses.softmax_focal_loss(pred, target, reduction='none')
Softmax focal loss
- Parameters
pred – [N, A, C+1]
target – [N, A] (-1: ignore, 0: background, [1,C]: classes)
reduction – ‘sum’, ‘mean’, ‘none’
- Returns
reduced loss
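A hedged sketch of a softmax focal loss with the target convention above (-1: ignore, 0: background, 1..C: classes); the focusing parameter gamma and the exact weighting/normalization are assumptions, so the library implementation may differ.

    import torch
    import torch.nn.functional as F

    def softmax_focal(pred, target, gamma=2.0):
        # pred: [N, A, C+1] logits, target: [N, A] integer labels
        valid = (target >= 0).float()
        logp = F.log_softmax(pred, dim=-1)
        logp_t = logp.gather(-1, target.clamp(min=0).unsqueeze(-1)).squeeze(-1)
        loss = -((1.0 - logp_t.exp()) ** gamma) * logp_t
        return (loss * valid).sum()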
Box Regression and Classification
- class metavision_ml.detection.rpn.BoxHead(in_channels, num_anchors, num_logits, n_layers=3)
Shared prediction head for boxes. Applies 2 small stride-1 mini-convnets to predict class and box delta, then reshapes the predictions and concatenates them to output 2 vectors, "loc" and "cls".
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(xs: List[torch.Tensor]) List[torch.Tensor]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- metavision_ml.detection.rpn.softmax_init(l, num_anchors, num_logits)
Softmax initialization:
We derive the initialization from a target background probability of 0.99, following Focal Loss for Dense Object Detection (Lin et al.).
- Parameters
l – linear layer
num_anchors – number of anchors of prediction
num_logits – number of classes + 1
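A hedged sketch of the idea behind this initialization: with zero weights, the softmax probability of the background class is exp(b0) / (exp(b0) + C), so choosing b0 = log(0.99 * C / 0.01) makes background start at roughly 0.99. The layer layout (num_anchors blocks of num_logits logits, background at index 0) is an assumption; the library's exact scheme may differ.

    import math
    import torch.nn as nn

    def init_background_prior(layer: nn.Linear, num_anchors: int, num_logits: int, prior=0.99):
        nn.init.zeros_(layer.weight)
        nn.init.zeros_(layer.bias)
        # assumed layout: bias grouped as (num_anchors, num_logits), background logit first
        bias = layer.bias.data.view(num_anchors, num_logits)
        bias[:, 0] = math.log(prior * (num_logits - 1) / (1 - prior))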
Detector Interface
- class metavision_ml.detection.single_stage_detector.Detector
This is an interface for neural networks learning to predict boxes. The trainer expects "compute_loss" and "get_boxes" to be implemented.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- class metavision_ml.detection.single_stage_detector.SingleStageDetector(feature_extractor, in_channels, num_classes, feature_base, feature_channels_out, anchor_list, nlayers=0, max_boxes_per_input=500)
This is an interface for "single stage" methods (e.g. RetinaNet, SSD, etc.)
- Parameters
feature_extractor (string) – name of the feature extractor architecture
in_channels (int) – number of channels for the input layer
num_classes (int) – number of output classes for the classifier head
feature_base (int) – factor to grow the feature extractor width
feature_channels_out (int) – number of output channels for the feature extractor
anchor_list (couple list) – list of (aspect ratio, scale) couples to be used for each extracted feature
max_boxes_per_input (int) – max number of boxes to be considered before thresholding or NMS.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- map_cls_weights(module_src, old_classes, new_classes)
Maps old classes to new classes if some overlap exists between old classes and new ones.
- Parameters
module_src – old model
old_classes – old list of classes
new_classes – new list of classes