SDK CV3D Python bindings API

class metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, threshold: int = 0) -> None

Algorithm used to detect 2D edgelets in a time surface.

Constructor.

threshold

Detection tolerance threshold

see

is_fast_edge for more details

static get_empty_output_buffer() -> metavision_sdk_cv3d.EventEdgelet2dBuffer

Returns an empty buffer of edgelet events of the correct type, which can later be used as the out_edgelets output buffer when calling process().

process(*args, **kwargs)

Overloaded function.

  1. process(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> None

  2. process(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, in_events: metavision_sdk_base.EventCDBuffer, out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> None

class metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, params: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm.Parameters = Edgelet2dTrackingAlgorithm.Parameters()) -> None

Algorithm used to track 2D edgelets in a time surface.

Points are sampled along the edgelet’s direction (a.k.a. support points) and matches are looked for on the two sides of the edgelet along its normal. A match is found when a timestamp more recent than a target timestamp is found in the time surface. A new line is then fitted from those matches, and both a new direction and a new normal are computed.
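The matching and refitting steps above can be sketched in plain numpy. This is an illustrative outline only (the function names are hypothetical, not part of the SDK API): matches far from the median of all support-point matches are rejected (cf. the median_outlier_threshold parameter), and the new direction and normal come from a least-squares line fit of the surviving matches.

```python
import numpy as np

def reject_outliers(matches, median_outlier_threshold):
    """Drop matches farther than the threshold from the median match position.

    matches: (N, 2) array of matched pixel positions for one edgelet.
    """
    matches = np.asarray(matches, dtype=float)
    median = np.median(matches, axis=0)                 # median match position
    dists = np.linalg.norm(matches - median, axis=1)    # distance to the median
    return matches[dists <= median_outlier_threshold]

def fit_direction(points):
    """Least-squares line fit: returns the unit direction and normal of the line."""
    centered = points - points.mean(axis=0)
    # The principal axis of the centered points gives the edgelet's direction
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])    # 90-degree rotation
    return direction, normal
```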

warning

Because of the aperture problem, the tracking can drift very quickly. As a result, this algorithm should only be used when additional methods constrain the tracking, or to track edgelets that are orthogonal to the camera’s motion.

Constructor.

params

Parameters used by the tracking algorithm

class Parameters(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm.Parameters) -> None

Parameters used by the tracking algorithm.

property median_outlier_threshold

Distance to the median position of the support points’ matches above which a match is considered an outlier.

property n_support_points

Number of points sampled along the edgelet’s direction.

property search_radius

Radius within which matches are searched on each side of the edgelet.

property support_points_distance

Distance in pixels between the sampled points.

property threshold

Time tolerance used in the tracking.

static get_empty_edgelet_buffer() -> metavision_sdk_cv3d.EventEdgelet2dBuffer

Returns an empty buffer of edgelet events of the correct type, which can later be used as the out_edgelets output buffer when calling process().

process(*args, **kwargs)

Overloaded function.

  1. process(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, target: int, in_edgelets: numpy.ndarray[Metavision::EventEdgelet2d], out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> list

  2. process(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, target: int, in_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer, out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> list

class metavision_sdk_cv3d.Model3d(self: metavision_sdk_cv3d.Model3d) -> None

Structure defining a 3D model.

class EdgeBuffer

Buffer of 3D model’s edges.

numpy(self: metavision_sdk_cv3d.Model3d.EdgeBuffer) -> numpy.ndarray[Metavision::Model3d::Edge]

Converts to a numpy array

class Face

Structure defining a 3D model’s face.

edges_indexes_numpy(self: metavision_sdk_cv3d.Model3d.Face) -> numpy.ndarray[numpy.uint64]

Indices of the model’s edges that form this face (NumPy array).

property normal

Face’s normal

class FaceBuffer

Buffer of 3D model’s faces.

class VertexBuffer

Buffer of 3D model’s vertices.

property edges

All the edges forming the 3D model’s faces.

property faces

All the faces forming the 3D model.

property vertices

All the vertices forming the 3D model.

class metavision_sdk_cv3d.Model3dDetectionAlgorithm(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, cam_geometry: metavision_sdk_cv.CameraGeometry, model: metavision_sdk_cv3d.Model3d, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, params: metavision_sdk_cv3d.Model3dDetectionAlgorithm.Parameters = Model3dDetectionAlgorithm.Parameters()) -> None

Algorithm that detects a known 3D model by detecting its edges in an events stream.

Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are searched in the time surface by looking for timestamps on slopes generated by moving edges with the same orientations as the 3D model’s edges. An edge is considered matched when a line can be fitted from its matches. When enough edges are matched, the 3D model’s pose is estimated by minimizing the orthogonal distance between the matches and their corresponding reprojected edges.
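The line-fitting criterion described above can be sketched as follows. This is a hedged illustration, not the SDK implementation: an edge counts as matched when enough support-point matches exist (cf. n_fitting_pts_) and their residual variance orthogonal to the best-fit line is small (cf. variance_threshold_). The function name is hypothetical.

```python
import numpy as np

def edge_is_matched(matches, variance_threshold, n_fitting_pts):
    """Decide whether an edge's support-point matches form a line.

    matches: (N, 2) array of matched pixel positions for one edge.
    """
    matches = np.asarray(matches, dtype=float)
    if len(matches) < n_fitting_pts:
        return False                       # not enough matches to fit a line
    centered = matches - matches.mean(axis=0)
    # The residual variance orthogonal to the best-fit line is the smallest
    # singular value squared, normalized by the number of points
    s = np.linalg.svd(centered, compute_uv=False)
    orth_variance = s[-1] ** 2 / len(matches)
    return orth_variance < variance_threshold
```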

Constructor.

cam_geometry

Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)

model

3D model to detect

time_surface

Time surface instance in which the events stream is accumulated

params

Algorithm’s parameters

class Parameters(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm.Parameters) -> None

Parameters used by the 3D model detection algorithm.

property fitted_edges_ratio_

Matched edges to visible edges ratio above which a pose estimation is attempted.

property flow_radius_

Radius used to estimate the normal flow which gives the edge’s orientation.

property n_fitting_pts_

Minimum required number of matches for line fitting.

property search_radius_

Radius within which matches are searched for each support point.

property support_point_step_

Distance, in pixels in the distorted image, between two support points.

property variance_threshold_

Variance of the support points around the fitted line below which an edge is considered matched.

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> tuple

  2. process_events(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, in_events: metavision_sdk_base.EventCDBuffer, out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> tuple

set_init_pose(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> None

Sets the camera’s pose from which the algorithm will try to detect the 3D model.

T_c_w

Camera’s initialization pose

class metavision_sdk_cv3d.Model3dTrackingAlgorithm(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, cam_geometry: metavision_sdk_cv.CameraGeometry, model: metavision_sdk_cv3d.Model3d, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, params: metavision_sdk_cv3d.Model3dTrackingAlgorithm.Parameters = Model3dTrackingAlgorithm.Parameters()) -> None

Algorithm that estimates the 6 DOF pose of a 3D model by tracking its edges in an events stream.

Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are searched in the time surface within a fixed radius and a given accumulation time. A weight is then attributed to every support point according to the timestamp of the event it matched with. Finally, the pose is estimated using weighted least squares to minimize the orthogonal distance between the matches and their corresponding reprojected edges.

The accumulation time used for matching can vary depending on how the algorithm is called. The algorithm computes the accumulation time by maintaining a sliding buffer of the timestamps of the last N pose computations. As a result, the accumulation time is fixed when the algorithm is called every N µs, and varies when it is called every N events. The latter is usually preferable because the accumulation time then adapts to the motion of the camera: in case of fast motion, the tracking restricts the matching to very recent events, making it more robust to noise, whereas in case of slow motion, it allows matching older events.
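The sliding-buffer logic above can be sketched in a few lines. This is an illustrative outline only (the class name is hypothetical, not part of the SDK): keep the timestamps of the last N pose computations and use their span as the accumulation window, falling back to a default while the buffer is still filling (cf. default_acc_time_us_ and n_last_poses_).

```python
from collections import deque

class AccumulationTimeEstimator:
    """Adaptive accumulation window from the last N pose timestamps."""

    def __init__(self, n_last_poses, default_acc_time_us):
        self.timestamps = deque(maxlen=n_last_poses)
        self.default_acc_time_us = default_acc_time_us

    def update(self, pose_ts_us):
        # Record the timestamp of a newly computed pose
        self.timestamps.append(pose_ts_us)

    def accumulation_time_us(self):
        # Until N poses have been estimated, fall back to the default window
        if len(self.timestamps) < self.timestamps.maxlen:
            return self.default_acc_time_us
        # Span of the sliding buffer: short under fast motion (when the
        # algorithm is called every N events), longer under slow motion
        return self.timestamps[-1] - self.timestamps[0]
```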

Constructor.

cam_geometry

Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)

model

3D model to track

time_surface

Time surface instance in which the events stream is accumulated

params

Algorithm’s parameters

class Parameters(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm.Parameters) -> None

Parameters used by the 3D model tracking algorithm.

property default_acc_time_us_

Default accumulation time used when the tracking is starting (i.e. the N last poses have not been estimated yet).

property most_recent_weight_

Weight attributed to the most recent matches.

property n_last_poses_

Number of past poses to consider to compute the accumulation time.

property nb_directional_axes_

Number of pre-computed axes to consider to quantize the normal of an edge.

property oldest_weight_

Weight attributed to the oldest matches.

property search_radius_

Radius within which matches are searched for each support point.

property support_point_step_

Distance, in pixels in the distorted image, between two support points.

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> bool

  2. process_events(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, in_events: metavision_sdk_base.EventCDBuffer, out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> bool

set_previous_camera_pose(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, ts: int, T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> None

Initializes the tracking by setting the camera’s prior pose.

ts

Timestamp at which the pose was estimated

T_c_w

Camera’s prior pose

class metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm(self: metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm, size_master: tuple, size_slave: tuple, plevel_max: int = 3, patch_radius: int = 3, min_contrast_to_second_best: float = 1.2, max_ssd: float = 3.4028235e+38, enable_low_confidence_matches: bool = False) -> None

Class to compute stereo disparities using a pyramidal block matching algorithm.

compute_disparities(self: metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm, rectified_img_master: numpy.ndarray, rectified_img_slave: numpy.ndarray, disparity_map: numpy.ndarray, rectified_mask_master: object = None) -> None

Computes the disparity map between two provided rectified images.

rectified_img_master

The rectified master image

rectified_img_slave

The rectified slave image

disparity_map

The output disparity map

rectified_mask_master

Optional mask of pixels to be matched from the rectified master image

get_disparity_range(self: metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm) -> tuple

Gets the disparity range.

set_depth_range(self: metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm, stereo_rectified_geometry: metavision_sdk_cv.RectifiedStereoGeometry, min_depth: float, max_depth: float) -> None

Sets the disparity range based on the stereo geometry and the specified depth range.
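The depth-to-disparity mapping behind such a method can be sketched as follows. This is a hedged illustration under standard rectified-stereo assumptions, not the SDK implementation: for focal length f (pixels) and baseline b (metres), disparity = f · b / depth, so the depth range [min_depth, max_depth] maps to the disparity range [f·b/max_depth, f·b/min_depth]. The function name and the focal/baseline values in the test are illustrative.

```python
import math

def depth_range_to_disparity_range(focal_px, baseline_m, min_depth, max_depth):
    """Convert a metric depth range to an integer disparity range.

    Disparity is inversely proportional to depth: the nearest depth gives
    the largest disparity and the farthest depth the smallest.
    """
    fb = focal_px * baseline_m
    max_disp = math.ceil(fb / min_depth)    # nearest point -> largest disparity
    min_disp = math.floor(fb / max_depth)   # farthest point -> smallest disparity
    return min_disp, max_disp
```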

set_disparity_range(self: metavision_sdk_cv3d.PyramidalStereoBlockMatchingAlgorithm, stereo_rectified_geometry: metavision_sdk_cv.RectifiedStereoGeometry, min_disp: int, max_disp: int) -> None

Sets the disparity range.

class metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm(self: metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm, patch_radius: int = 3, min_contrast_to_second_best: float = 1.2, max_ssd: float = 3.4028235e+38, enable_low_confidence_matches: bool = False) -> None

Class to compute stereo disparities using a single-level block matching algorithm.

compute_disparities(self: metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm, rectified_img_master: numpy.ndarray, rectified_img_slave: numpy.ndarray, disparity_map: numpy.ndarray, rectified_mask_master: object = None) -> None

Computes the disparity map between two provided rectified images.

rectified_img_master

The rectified master image

rectified_img_slave

The rectified slave image

disparity_map

The output disparity map

rectified_mask_master

Optional mask of pixels to be matched from the rectified master image

use_disparity_guesses

Whether to use the input disparity map as initial guesses for the disparities to be computed

note

Invalid disparities are set to / recognized as std::numeric_limits<float>::quiet_NaN().
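Given this NaN convention, a caller should mask invalid pixels before using the disparity map. A minimal sketch (the function name is illustrative, not part of the SDK):

```python
import numpy as np

def valid_disparity_mask(disparity_map):
    """Boolean mask of pixels whose disparity was successfully computed.

    Invalid disparities follow the quiet-NaN convention noted above.
    """
    return ~np.isnan(disparity_map)

# Example disparity map with two invalid (NaN) pixels
disparity_map = np.array([[12.5, np.nan],
                          [np.nan, 8.0]], dtype=np.float32)
valid = valid_disparity_mask(disparity_map)
```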

get_disparity_range(self: metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm) -> tuple

Gets the disparity range.

min_disp

Minimum disparity

max_disp

Maximum disparity

set_depth_range(self: metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm, stereo_rectified_geometry: metavision_sdk_cv.RectifiedStereoGeometry, min_depth: float, max_depth: float) -> None

Sets the disparity range based on the stereo geometry and the specified depth range.

set_disparity_range(self: metavision_sdk_cv3d.SimpleStereoBlockMatchingAlgorithm, stereo_rectified_geometry: metavision_sdk_cv.RectifiedStereoGeometry, min_disp: int, max_disp: int) -> None

Sets the disparity range.

metavision_sdk_cv3d.draw_edges(cam_geometry: metavision_sdk_cv.CameraGeometry, T_c_w: metavision_sdk_cv3d.EigenMatrix4f, model: metavision_sdk_cv3d.Model3d, edges: set, image: numpy.ndarray, color: list[int]) -> None

Draws the selected edges of a 3D model into the output frame.

metavision_sdk_cv3d.load_model_3d_from_json(path: str) -> object

Loads a 3D model from a JSON file.

metavision_sdk_cv3d.select_visible_edges(T_c_w: metavision_sdk_cv3d.EigenMatrix4f, model: metavision_sdk_cv3d.Model3d) -> set

Selects the visible edges of a 3D model given a camera’s pose.
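One common visibility criterion behind such a selection can be sketched with back-face culling. This is a hedged illustration, not the SDK implementation: a face is front-facing when its outward normal points towards the camera, and an edge is then visible when it belongs to at least one front-facing face. The function name and the geometry in the example are illustrative.

```python
import numpy as np

def front_facing(face_center_w, face_normal_w, cam_center_w):
    """Back-face culling test in world coordinates.

    The face is front-facing when its outward normal makes an angle larger
    than 90 degrees with the viewing ray from the camera to the face.
    """
    view_dir = face_center_w - cam_center_w         # camera -> face ray
    return np.dot(face_normal_w, view_dir) < 0.0    # normal faces the camera

# Example: a face at the origin with normal -Z, seen from a camera at z = -2
cam = np.array([0.0, 0.0, -2.0])
visible = front_facing(np.zeros(3), np.array([0.0, 0.0, -1.0]), cam)
```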
