SDK CV3D Algorithms

class Metavision::Edgelet2dDetectionAlgorithm

Algorithm used to detect 2D edgelets in a time surface.

Public Functions

Edgelet2dDetectionAlgorithm(timestamp threshold = 0)

Constructor.

See

is_fast_edge for more details

Parameters
  • threshold: Detection tolerance threshold

~Edgelet2dDetectionAlgorithm() = default

Destructor.

template<typename InputIt, typename OutputIt>
OutputIt process(const MostRecentTimestampBuffer &ts, InputIt begin, InputIt end, OutputIt d_begin)

Tries to detect 2D edgelets in the time surface at locations given by the input events.

Return

Iterator pointing past the last edgelet added to the output 2D edgelets buffer

Template Parameters
  • InputIt: Read-Only input event iterator type

  • OutputIt: Read-Write output EventEdgelet2d event iterator type

Parameters
  • ts: Time surface in which 2D edgelets are looked for

  • begin: First iterator to the buffer of events whose locations will be looked at to detect edgelets

  • end: Last iterator to the buffer of events whose locations will be looked at to detect edgelets

  • d_begin: Output iterator of 2D edgelets buffer

class Metavision::Edgelet2dTrackingAlgorithm

Algorithm used to track 2D edgelets in a time surface.

Points are sampled along the edgelet’s direction (a.k.a. support points) and matches are looked for on the two sides of the edgelet along its normal. A match is found when a timestamp more recent than a target timestamp is found in the time surface. A new line is then fitted from those matches, and both a new direction and a new normal are computed.

Warning

Because of the aperture problem, the tracking can drift very quickly. As a result, this algorithm should only be used if there are methods that constrain the tracking or to track edgelets that are orthogonal to the camera’s motion.

Public Functions

Edgelet2dTrackingAlgorithm(const Parameters &params = Parameters())

Constructor.

Parameters
  • params: Parameters used by the tracking algorithm

~Edgelet2dTrackingAlgorithm() = default

Destructor.

template<typename Edgelet2dIt, typename StatusIt, typename OutputEdgelet2dIt>
OutputEdgelet2dIt process(const MostRecentTimestampBuffer &time_surface, timestamp target, Edgelet2dIt edgelet_begin, Edgelet2dIt edgelet_end, OutputEdgelet2dIt d_begin, StatusIt status_begin)

Tracks the input 2D edgelets in the input time surface.

For each input 2D edgelet, a status is updated to indicate whether the corresponding edgelet has been tracked or not. When an edgelet is successfully tracked, an updated 2D edgelet (i.e. corresponding to the matched edgelet in the time surface) is output. This means that the user needs to either pass a std::back_insert_iterator for these two buffers, or pre-allocate them with the same size as the input 2D edgelets buffer.

Return

The last iterator to the matched 2D edgelets

Warning

The output matched 2D edgelets buffer needs to be resized to only contain the matched edgelets, or a std::back_insert_iterator needs to be passed instead.

Edgelet2dTrackingAlgorithm algo;
MostRecentTimestampBuffer time_surface(height, width, n_channels);
std::vector<EventEdgelet2d> detected_edgelets;
// Detect edgelets
...
std::vector<EventEdgelet2d> tracked_edgelets(detected_edgelets.size());
std::vector<bool> statuses(detected_edgelets.size());

auto tracked_edgelet_end = algo.process(time_surface, target_ts, detected_edgelets.cbegin(),
                                        detected_edgelets.cend(), tracked_edgelets.begin(), statuses.begin());
tracked_edgelets.resize(std::distance(tracked_edgelets.begin(), tracked_edgelet_end));
or
Edgelet2dTrackingAlgorithm algo;
MostRecentTimestampBuffer time_surface(height, width, n_channels);
std::vector<EventEdgelet2d> detected_edgelets;
// Detect edgelets
...
std::vector<EventEdgelet2d> tracked_edgelets;
std::vector<bool> statuses;

auto tracked_edgelet_end = algo.process(time_surface, target_ts, detected_edgelets.cbegin(),
                                        detected_edgelets.cend(),
                                        std::back_inserter(tracked_edgelets),
                                        std::back_inserter(statuses));

Template Parameters
  • Edgelet2dIt: Iterator type of the input 2D edgelets

  • StatusIt: Iterator type of the input statuses

  • OutputEdgelet2dIt: Iterator type of the output 2D edgelets

Parameters
  • time_surface: The time surface in which 2D edgelets are looked for

  • target: Target timestamp used for matching. A timestamp ts in the time surface will match a support point if ts > (target - threshold_)

  • edgelet_begin: First iterator to the buffer of 2D edgelets that will be looked for

  • edgelet_end: Last iterator to the buffer of 2D edgelets that will be looked for

  • d_begin: First iterator to the matched 2D edgelets buffer

  • status_begin: First iterator to the tracking statuses buffer

const Parameters &get_parameters() const

Returns the algorithm’s parameters.

struct Parameters

Parameters used by the tracking algorithm.

Public Members

unsigned int search_radius_ = 3

Radius in which matches are looked for on each of the edgelet’s sides.

unsigned int n_support_points_ = 3

Number of points sampled along the edgelet’s direction.

float support_points_distance_ = 2

Distance in pixels between the sampled points.

timestamp threshold_ = 3000

Time tolerance used in the tracking.

unsigned int median_outlier_threshold_ = 1

Distance to the median position of the support points’ matches above which a match is considered an outlier.

class Metavision::Model3dDetectionAlgorithm

Algorithm that detects a known 3D model by detecting its edges in an events stream.

Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are looked for in the time surface by searching for timestamps on slopes generated by moving edges whose orientations match those of the 3D model’s edges. An edge is considered matched when a line can be fitted from its matches. When enough edges are matched, the 3D model’s pose is estimated by minimizing the orthogonal distance between the matches and their corresponding reprojected edges.

Public Functions

Model3dDetectionAlgorithm(const CameraGeometry32f &cam_geometry, const Model3d &model, MostRecentTimestampBuffer &time_surface, const Parameters &params = Parameters())

Constructor.

Parameters
  • cam_geometry: Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)

  • model: 3D model to detect

  • time_surface: Time surface instance in which the events stream is accumulated

  • params: Algorithm’s parameters

void set_init_pose(const Eigen::Matrix4f &T_c_w)

Sets the camera’s pose from which the algorithm will try to detect the 3D model.

Parameters
  • T_c_w: Camera’s initialization pose

template<typename InputIt>
bool process_events(InputIt it_begin, InputIt it_end, Eigen::Matrix4f &T_c_w, std::set<size_t> *visible_edges = nullptr, std::set<size_t> *detected_edges = nullptr)

Tries to detect the 3D model from the input events buffer.

Return

True if the detection has succeeded, false otherwise

Note

The estimated pose might not be very accurate, but it is accurate enough to initiate a tracking phase

Template Parameters
  • InputIt: Read-Only input event iterator type.

Parameters
  • [in] it_begin: Iterator to the first input event

  • [in] it_end: Iterator to the past-the-end event

  • [out] T_c_w: Camera’s pose if the detection has succeeded

  • [out] visible_edges: If filled, contains the model’s edges visible from the initialization pose

  • [out] detected_edges: If filled, contains the model’s successfully detected edges

struct Parameters

Parameters used by the 3D model detection algorithm.

Public Members

std::uint32_t support_point_step_ = 10

Distance, in pixels in the distorted image, between two support points

std::uint32_t search_radius_ = 10

Radius in which matches are searched for each support point.

std::uint32_t flow_radius_ = 3

Radius used to estimate the normal flow which gives the edge’s orientation

std::uint32_t n_fitting_pts_ = 3

Minimum required number of matches for line fitting.

float variance_threshold_ = 2e-5f

Variance of the support points around the fitted line below which an edge is considered matched

float fitted_edges_ratio_ = 0.5f

Matched edges to visible edges ratio above which a pose estimation is attempted

class Metavision::Model3dTrackingAlgorithm

Algorithm that estimates the 6 DOF pose of a 3D model by tracking its edges in an events stream.

Support points are sampled along the 3D model’s visible edges and tracked in a time surface in which the events stream has been accumulated. Matches are looked for in the time surface within a fixed radius and a given accumulation time. A weight is then attributed to every support point according to the timestamp of the event they matched with. Finally, the pose is estimated using a weighted least squares to minimize the orthogonal distance between the matches and their corresponding reprojected edge.

The accumulation time used for matching can vary depending on how the algorithm is called: it is computed by maintaining a sliding buffer of the last N pose computation timestamps. As a result, the accumulation time is fixed when the algorithm is called every N microseconds, and variable when it is called every N events. The latter is more interesting because, in that case, the accumulation adapts to the motion of the camera. In case of fast motion, the tracking restricts the matching to very recent events, making it more robust to noise, whereas in case of slow motion, the tracking allows matching older events.

Public Functions

Model3dTrackingAlgorithm(const CameraGeometry32f &cam_geometry, const Model3d &model, MostRecentTimestampBuffer &time_surface, const Parameters &params = Parameters())

Constructor.

Parameters
  • cam_geometry: Camera geometry instance allowing mapping coordinates from camera to image (and vice versa)

  • model: 3D model to track

  • time_surface: Time surface instance in which the events stream is accumulated

  • params: Algorithm’s parameters

void set_previous_camera_pose(timestamp ts, const Eigen::Matrix4f &T_c_w)

Initializes the tracking by setting a camera’s prior pose.

Parameters
  • ts: Timestamp at which the pose was estimated

  • T_c_w: Camera’s prior pose

template<typename InputIt>
bool process_events(InputIt it_begin, InputIt it_end, Eigen::Matrix4f &T_c_w)

Tries to track the 3D model from the input events buffer.

Return

True if the tracking has succeeded, false otherwise

Template Parameters
  • InputIt: Read-Only input event iterator type.

Parameters
  • [in] it_begin: Iterator to the first input event

  • [in] it_end: Iterator to the past-the-end event

  • [out] T_c_w: Camera’s pose if the tracking has succeeded

struct Parameters

Parameters used by the 3D model tracking algorithm.

Public Members

std::uint32_t search_radius_ = 3

Radius in which matches are searched for each support point.

std::uint32_t support_point_step_ = 10

Distance, in pixels in the distorted image, between two support points

std::uint32_t n_last_poses_ = 5

Number of past poses to consider to compute the accumulation time.

timestamp default_acc_time_us_ = 3000

Default accumulation time used when the tracking is starting (i.e. the N last poses have not been estimated yet)

float oldest_weight_ = 0.1f

Weight attributed to the oldest matches.

float most_recent_weight_ = 1.f

Weight attributed to the most recent matches.