SDK CV3D Python bindings API
- class metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, threshold: int = 0) -> None
Algorithm used to detect 2D edgelets in a time surface.
Constructor.
- threshold
Detection tolerance threshold
- see
is_fast_edge for more details
- static get_empty_output_buffer() -> metavision_sdk_cv3d.EventEdgelet2dBuffer
This function returns an empty buffer of edgelet events of the correct type, which can later be passed as out_edgelets when calling process()
- process(*args, **kwargs)
Overloaded function.
process(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> None
process(self: metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, in_events: metavision_sdk_base.EventCDBuffer, out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> None
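A minimal usage sketch of the detection API above. This is not runnable as-is: `width`, `height`, and `events` are placeholders for your sensor geometry and a chunk of CD events obtained elsewhere (e.g. from an events iterator), and the `MostRecentTimestampBuffer` constructor arguments are assumed to follow a (rows, cols, channels) convention.

```python
import metavision_sdk_core
import metavision_sdk_cv3d

width, height = 640, 480  # placeholder sensor geometry

# Time surface storing the most recent timestamp per pixel.
time_surface = metavision_sdk_core.MostRecentTimestampBuffer(height, width, 1)

detector = metavision_sdk_cv3d.Edgelet2dDetectionAlgorithm(threshold=0)
out_edgelets = detector.get_empty_output_buffer()

# `events` is a chunk of CD events (numpy array or EventCDBuffer)
# that has also been accumulated into `time_surface`.
detector.process(time_surface, events, out_edgelets)
```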
- class metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, params: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm.Parameters = Parameters()) -> None
Algorithm used to track 2D edgelets in a time surface.
Points are sampled along the edgelet’s direction (a.k.a. support points), and matches are searched for on the two sides of the edgelet along its normal. A match is found when a timestamp more recent than a target timestamp is found in the time surface. A new line is then fitted from those matches, from which both a new direction and a new normal are computed.
- warning
Because of the aperture problem, the tracking can drift very quickly. As a result, this algorithm should only be used if there are methods that constrain the tracking or to track edgelets that are orthogonal to the camera’s motion.
Constructor.
- params
Parameters used by the tracking algorithm
- class Parameters(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm.Parameters) -> None
Parameters used by the tracking algorithm.
- property median_outlier_threshold
Distance to the median position of the support points’ matches above which a match is considered an outlier.
- property n_support_points
Number of points sampled along the edgelet’s direction.
- property search_radius
Radius within which matches are searched on each side of the edgelet.
- property support_points_distance
Distance in pixels between the sampled points.
- property threshold
Time tolerance used in the tracking.
- static get_empty_edgelet_buffer() -> metavision_sdk_cv3d.EventEdgelet2dBuffer
This function returns an empty buffer of edgelet events of the correct type, which can later be passed as out_edgelets when calling process()
- process(*args, **kwargs)
Overloaded function.
process(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, target: int, in_edgelets: numpy.ndarray[Metavision::EventEdgelet2d], out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> list
process(self: metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, target: int, in_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer, out_edgelets: metavision_sdk_cv3d.EventEdgelet2dBuffer) -> list
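A sketch of the tracking step. `time_surface`, `in_edgelets` (e.g. from a prior detection step), and the target timestamp `target_ts` are assumed to exist; the parameter values below are illustrative placeholders, not recommended defaults.

```python
import metavision_sdk_cv3d

params = metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm.Parameters()
params.n_support_points = 5       # points sampled along each edgelet
params.search_radius = 3          # matching radius on each edgelet side

tracker = metavision_sdk_cv3d.Edgelet2dTrackingAlgorithm(params)
out_edgelets = tracker.get_empty_edgelet_buffer()

# Returns a list with one tracking status per input edgelet;
# successfully tracked edgelets are written to `out_edgelets`.
statuses = tracker.process(time_surface, target_ts, in_edgelets, out_edgelets)
```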
- class metavision_sdk_cv3d.Model3d(self: metavision_sdk_cv3d.Model3d) -> None
Structure defining a 3D model.
- class EdgeBuffer
Buffer of 3D model’s edges.
- numpy(self: metavision_sdk_cv3d.Model3d.EdgeBuffer) -> numpy.ndarray[Metavision::Model3d::Edge]
Converts the buffer to a NumPy array.
- class Face
Structure defining a 3D model’s face.
- edges_indexes_numpy(self: metavision_sdk_cv3d.Model3d.Face) -> numpy.ndarray[numpy.uint64]
Indices of the model’s edges that form this face (NumPy array).
- property normal
Face’s normal
- class FaceBuffer
Buffer of 3D model’s faces.
- class VertexBuffer
Buffer of 3D model’s vertices.
- property edges
All the edges forming the 3D model’s faces.
- property faces
All the faces forming the 3D model.
- property vertices
All the vertices forming the 3D model.
- class metavision_sdk_cv3d.Model3dDetectionAlgorithm(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, cam_geometry: metavision_sdk_cv.CameraGeometry, model: metavision_sdk_cv3d.Model3d, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, params: metavision_sdk_cv3d.Model3dDetectionAlgorithm.Parameters = Parameters()) -> None
Algorithm that detects a known 3D model by detecting its edges in an events stream.
Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are looked for in the time surface by looking for timestamps on slopes that have been generated by moving edges having the same orientations as the 3D model’s ones. An edge is considered matched when a line can be fitted from its matches. When enough edges are matched, the 3D model’s pose is estimated by minimizing the orthogonal distance between the matches and their corresponding reprojected edge.
Constructor.
- cam_geometry
Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)
- model
3D model to detect
- time_surface
Time surface instance in which the events stream is accumulated
- params
Algorithm’s parameters
- class Parameters(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm.Parameters) -> None
Parameters used by the 3D model detection algorithm.
- property fitted_edges_ratio_
Matched edges to visible edges ratio above which a pose estimation is attempted.
- property flow_radius_
Radius used to estimate the normal flow which gives the edge’s orientation.
- property n_fitting_pts_
Minimum required number of matches for line fitting.
- property search_radius_
Radius around each support point within which matches are searched.
- property support_point_step_
Distance, in pixels in the distorted image, between two support points.
- property variance_threshold_
Variance of the support points around the fitted line below which an edge is considered matched.
- process_events(*args, **kwargs)
Overloaded function.
process_events(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> tuple
process_events(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, in_events: metavision_sdk_base.EventCDBuffer, out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> tuple
- set_init_pose(self: metavision_sdk_cv3d.Model3dDetectionAlgorithm, T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> None
Sets the camera’s pose from which the algorithm will try to detect the 3D model.
- T_c_w
Camera’s initialization pose
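A sketch of a detection loop built from the API above. `cam_geometry`, `model`, `events_iterator`, `T_init`, `width`, and `height` are assumed to exist; the exact contents of the tuple returned by process_events() are not spelled out in the signature, so treating its first element as a success flag is an assumption here.

```python
import metavision_sdk_core
import metavision_sdk_cv3d

# `cam_geometry` (metavision_sdk_cv.CameraGeometry), `model`
# (e.g. from load_model_3d_from_json) and `events_iterator`
# are assumed to exist; `width`/`height` match the sensor.
time_surface = metavision_sdk_core.MostRecentTimestampBuffer(height, width, 1)
params = metavision_sdk_cv3d.Model3dDetectionAlgorithm.Parameters()
detector = metavision_sdk_cv3d.Model3dDetectionAlgorithm(
    cam_geometry, model, time_surface, params)

T_c_w = metavision_sdk_cv3d.EigenMatrix4f()
detector.set_init_pose(T_init)  # rough pose the search starts from

for events in events_iterator:
    # ...accumulate `events` into `time_surface` here...
    result = detector.process_events(events, T_c_w)
    if result[0]:  # assumed: first element flags a successful detection
        break      # T_c_w now holds the detected camera pose
```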
- class metavision_sdk_cv3d.Model3dTrackingAlgorithm(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, cam_geometry: metavision_sdk_cv.CameraGeometry, model: metavision_sdk_cv3d.Model3d, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, params: metavision_sdk_cv3d.Model3dTrackingAlgorithm.Parameters = Parameters()) -> None
Algorithm that estimates the 6 DOF pose of a 3D model by tracking its edges in an events stream.
Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are looked for in the time surface within a fixed radius and a given accumulation time. A weight is then attributed to every support point according to the timestamp of the event they matched with. Finally, the pose is estimated using a weighted least squares to minimize the orthogonal distance between the matches and their corresponding reprojected edge.
The accumulation time used for matching can vary depending on how the algorithm is called. The algorithm computes the accumulation time from a sliding buffer of the last N pose estimation timestamps. As a result, the accumulation time is fixed when the algorithm is called at a fixed time interval, and varies when it is called every N events. The latter is more interesting because the accumulation time then adapts to the motion of the camera: during fast motion, the tracking restricts matching to very recent events, making it more robust to noise, whereas during slow motion it allows matching older events.
Constructor.
- cam_geometry
Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)
- model
3D model to track
- time_surface
Time surface instance in which the events stream is accumulated
- params
Algorithm’s parameters
- class Parameters(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm.Parameters) -> None
Parameters used by the 3D model tracking algorithm.
- property default_acc_time_us_
Default accumulation time used when the tracking is starting (i.e. the N last poses have not been estimated yet).
- property most_recent_weight_
Weight attributed to the most recent matches.
- property n_last_poses_
Number of past poses to consider to compute the accumulation time.
- property nb_directional_axes_
Number of pre-computed axes used to quantize an edge’s normal.
- property oldest_weight_
Weight attributed to the oldest matches.
- property search_radius_
Radius around each support point within which matches are searched.
- property support_point_step_
Distance, in pixels in the distorted image, between two support points.
- process_events(*args, **kwargs)
Overloaded function.
process_events(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, in_events: numpy.ndarray[metavision_sdk_base._EventCD_decode], out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> bool
process_events(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, in_events: metavision_sdk_base.EventCDBuffer, out_T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> bool
- set_previous_camera_pose(self: metavision_sdk_cv3d.Model3dTrackingAlgorithm, ts: int, T_c_w: metavision_sdk_cv3d.EigenMatrix4f) -> None
Initializes the tracking by setting a camera’s prior pose.
- ts
Timestamp at which the pose was estimated
- T_c_w
Camera’s prior pose
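A sketch of a tracking loop built from the API above. `cam_geometry`, `model`, `time_surface`, and `events_iterator` are assumed to exist, and `ts`/`T_c_w` are assumed to come from a successful detection step.

```python
import metavision_sdk_cv3d

params = metavision_sdk_cv3d.Model3dTrackingAlgorithm.Parameters()
tracker = metavision_sdk_cv3d.Model3dTrackingAlgorithm(
    cam_geometry, model, time_surface, params)
tracker.set_previous_camera_pose(ts, T_c_w)  # seed with the prior pose

for events in events_iterator:
    # ...accumulate `events` into `time_surface` here...
    if tracker.process_events(events, T_c_w):
        pass  # True: T_c_w now holds the updated camera pose
```

Calling the tracker with fixed event-count batches (rather than fixed time slices) lets the accumulation time adapt to the camera's motion, as described above.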
- metavision_sdk_cv3d.draw_edges(cam_geometry: metavision_sdk_cv.CameraGeometry, T_c_w: metavision_sdk_cv3d.EigenMatrix4f, model: metavision_sdk_cv3d.Model3d, edges: set, image: numpy.ndarray, color: List[int]) -> None
Draws the selected edges of a 3D model into the output frame.
- metavision_sdk_cv3d.load_model_3d_from_json(path: str) -> object
Loads a 3D model from a JSON file.
- metavision_sdk_cv3d.select_visible_edges(T_c_w: metavision_sdk_cv3d.EigenMatrix4f, model: metavision_sdk_cv3d.Model3d) -> set
Selects the visible edges of a 3D model given a camera’s pose.
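The three helpers above combine into a simple visualization step. In this sketch, "model.json" is a hypothetical path, and `cam_geometry`, `T_c_w`, `width`, and `height` are assumed to exist (e.g. from a detection or tracking step).

```python
import numpy as np
import metavision_sdk_cv3d

# Load the 3D model and select the edges visible from the current pose.
model = metavision_sdk_cv3d.load_model_3d_from_json("model.json")
visible = metavision_sdk_cv3d.select_visible_edges(T_c_w, model)

# Draw the visible edges in green on a blank BGR frame.
frame = np.zeros((height, width, 3), dtype=np.uint8)
metavision_sdk_cv3d.draw_edges(cam_geometry, T_c_w, model, visible,
                               frame, [0, 255, 0])
```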