SDK CV Python bindings API

class metavision_sdk_cv.ModulatedLightDetectorAlgorithm(self: metavision_sdk_cv.ModulatedLightDetectorAlgorithm, width: int, height: int, num_bits: int, base_period_us: int, tolerance: float) None

Constructs a ModulatedLightDetectorAlgorithm object with the specified parameters.

params

The parameters for configuring the ModulatedLightDetectorAlgorithm

std::invalid_argument

if the resolution is not valid or if the size of a word is greater than 32

static get_empty_output_buffer() metavision_sdk_cv.EventSourceIdBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.ModulatedLightDetectorAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventSourceIdBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.ModulatedLightDetectorAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventSourceIdBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
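
A minimal usage sketch of the pattern shared by all process_events methods in this module, using ModulatedLightDetectorAlgorithm. The constructor argument values and the EventCD dtype layout shown here are illustrative assumptions, not recommended defaults:

    import numpy as np
    import metavision_sdk_cv as mv_cv

    # Illustrative parameters: resolution, bits per word, base period (us), tolerance
    detector = mv_cv.ModulatedLightDetectorAlgorithm(640, 480, 8, 100, 0.1)

    # Output buffer of the matching type (EventSourceIdBuffer)
    out_buf = mv_cv.ModulatedLightDetectorAlgorithm.get_empty_output_buffer()

    # Input CD events: structured array with fields ('x', 'y', 'p', 't'), in this
    # order; in practice they come from an event source such as
    # metavision_core.event_io.EventsIterator (the dtype below is an assumption)
    cd_events = np.zeros(8, dtype=[('x', '<u2'), ('y', '<u2'),
                                   ('p', '<i2'), ('t', '<i8')])

    detector.process_events(cd_events, out_buf)
    source_id_events = out_buf.numpy()  # view over the buffer's memory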

class metavision_sdk_cv.ActiveMarkerTrackerAlgorithm(self: metavision_sdk_cv.ActiveMarkerTrackerAlgorithm, update_radius: bool, distance_pct: float, inactivity_period_us: int, monitoring_frequency_hz: float, radius: float, min_radius: float, min_event_weight: float, max_event_weight: float, weight_slope: float, sources_to_track: std::set<unsigned int, std::less<unsigned int>, std::allocator<unsigned int> >) None

Constructor.

params

Parameters of the algorithm

sources_to_track

List of unique IDs of the LEDs that constitute the active marker to be tracked

static get_empty_output_buffer() metavision_sdk_cv.EventActiveTrackBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.ActiveMarkerTrackerAlgorithm, input_np: numpy.ndarray[Metavision::EventSourceId], output_buf: metavision_sdk_cv.EventActiveTrackBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array of EventSourceId events)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.ActiveMarkerTrackerAlgorithm, input_buf: metavision_sdk_cv.EventSourceIdBuffer, output_buf: metavision_sdk_cv.EventActiveTrackBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
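
The detector and tracker are designed to be chained: the EventSourceIdBuffer filled by ModulatedLightDetectorAlgorithm can be passed directly as input to ActiveMarkerTrackerAlgorithm. A sketch with illustrative parameter values (the LED IDs in sources_to_track are hypothetical; detector and cd_events as in the previous sketch):

    import metavision_sdk_cv as mv_cv

    tracker = mv_cv.ActiveMarkerTrackerAlgorithm(
        update_radius=True, distance_pct=0.5, inactivity_period_us=10000,
        monitoring_frequency_hz=100.0, radius=5.0, min_radius=2.0,
        min_event_weight=0.1, max_event_weight=1.0, weight_slope=0.5,
        sources_to_track={1, 2, 3, 4})  # hypothetical LED IDs

    source_buf = mv_cv.ModulatedLightDetectorAlgorithm.get_empty_output_buffer()
    track_buf = mv_cv.ActiveMarkerTrackerAlgorithm.get_empty_output_buffer()

    detector.process_events(cd_events, source_buf)  # CD -> source-id events
    tracker.process_events(source_buf, track_buf)   # source-id -> track events
    tracks = track_buf.numpy()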

class metavision_sdk_cv.ActivityNoiseFilterAlgorithm(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, width: int, height: int, threshold: int) None

Filter that accepts events if a similar event has happened during a certain time window in the past, in the neighborhood of its coordinates.

Builds a new ActivityNoiseFilterAlgorithm object.

width

Maximum X coordinate of the events in the stream

height

Maximum Y coordinate of the events in the stream

threshold

Length of the time window for activity filtering (in us)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
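
Both calling conventions in sketch form; the in-place variant reuses a single buffer, subject to the caveat above (parameter values are illustrative, cd_events as in the earlier examples):

    import metavision_sdk_cv as mv_cv

    # 20 ms activity window (illustrative)
    noise_filter = mv_cv.ActivityNoiseFilterAlgorithm(640, 480, 20000)

    # Two-buffer form: numpy array in, buffer out
    out_buf = mv_cv.ActivityNoiseFilterAlgorithm.get_empty_output_buffer()
    noise_filter.process_events(cd_events, out_buf)

    # In-place form: the buffer content is overwritten with the filtered events
    noise_filter.process_events_(out_buf)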

class metavision_sdk_cv.AntiFlickerAlgorithm(self: metavision_sdk_cv.AntiFlickerAlgorithm, width: int, height: int, filter_length: int = 7, min_freq: float = 50.0, max_freq: float = 70.0, diff_thresh_us: int = 1500) None

Algorithm used to remove flickering events given a frequency interval.

Parameters
  • width (int) – Sensor’s width

  • height (int) – Sensor’s height

  • filter_length (int) – Number of measures of the same period before outputting an event

  • min_freq (float) – Minimum frequency of the flickering interval

  • max_freq (float) – Maximum frequency of the flickering interval

  • diff_thresh_us (unsigned int) – Maximum difference (us) allowed between two consecutive periods to be considered the same.

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.AntiFlickerAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.AntiFlickerAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.AntiFlickerAlgorithm, diff_thresh: float) None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.AntiFlickerAlgorithm, filter_length: int) bool

Sets filter’s length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_freq(self: metavision_sdk_cv.AntiFlickerAlgorithm, max_freq: float) bool

Sets maximum frequency of the flickering interval.

note

The value given has to be strictly greater than the minimum frequency

max_freq

Maximum frequency of the flickering interval

return

false if value could not be set (invalid value)

set_min_freq(self: metavision_sdk_cv.AntiFlickerAlgorithm, min_freq: float) bool

Sets minimum frequency of the flickering interval.

note

The value given has to be strictly less than the maximum frequency

min_freq

Minimum frequency of the flickering interval

return

false if value could not be set (invalid value)
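
A sketch showing construction and runtime reconfiguration; the boolean return values of the setters report whether the new value was accepted (values illustrative, cd_events as in the earlier examples):

    import metavision_sdk_cv as mv_cv

    anti_flicker = mv_cv.AntiFlickerAlgorithm(640, 480, filter_length=7,
                                              min_freq=50.0, max_freq=70.0,
                                              diff_thresh_us=1500)

    # Setters validate their argument and return False if it is rejected
    assert anti_flicker.set_min_freq(45.0)   # must stay below max_freq
    assert anti_flicker.set_max_freq(75.0)   # must stay above min_freq
    anti_flicker.set_difference_threshold(1200.0)

    out_buf = mv_cv.AntiFlickerAlgorithm.get_empty_output_buffer()
    anti_flicker.process_events(cd_events, out_buf)
    non_flickering = out_buf.numpy()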

class metavision_sdk_cv.CameraGeometry(*args, **kwargs)

A camera geometry is a mathematical model that maps points from the world to the image plane and vice versa

Overloaded function.

  1. __init__(self: metavision_sdk_cv.CameraGeometry, width: int, height: int, K: numpy.ndarray[numpy.float32], D: numpy.ndarray[numpy.float32]) -> None

  2. __init__(self: metavision_sdk_cv.CameraGeometry, width: int, height: int, P: numpy.ndarray[numpy.float32], IP: numpy.ndarray[numpy.float32], cx: float, cy: float, A: numpy.ndarray[numpy.float32], zoom_factor: float = 1) -> None

camera_to_img(self: metavision_sdk_cv.CameraGeometry, pt_c: Buffer, pt_dist_img: Buffer) None

Maps a point from the camera’s coordinates system into the distorted image plane.

pt_c

The 3D point in the camera’s coordinates system

pt_dist_img

The mapped point in the distorted image plane

camera_to_undist_img(self: metavision_sdk_cv.CameraGeometry, pt_c: Buffer, pt_undist_img: Buffer) None

Maps a point from the camera’s coordinates system into the undistorted image plane.

pt_c

The 3D point in the camera’s coordinates system

pt_undist_img

The mapped point in the undistorted image plane

get_distance_to_image_plane(self: metavision_sdk_cv.CameraGeometry) float

Gets the distance between the camera’s optical center and the undistorted image plane.

get_distortion_maps(self: metavision_sdk_cv.CameraGeometry, mapx: Buffer, mapy: Buffer) None
get_homography_and_distortion_maps(self: metavision_sdk_cv.CameraGeometry, H: Buffer, mapx: Buffer, mapy: Buffer) None
get_image_size(self: metavision_sdk_cv.CameraGeometry) tuple

Gets the sensor’s size, returns a tuple: (width, height)

get_img_to_undist_norm_jacobian(self: metavision_sdk_cv.CameraGeometry, pt_dist_img: Buffer, pt_undist_norm: Buffer, J: Buffer) None

Computes the undistortion function’s Jacobian (row-major matrix)

pt_dist_img

The point in the distorted image plane at which the jacobian is computed

pt_undist_norm

The point in the undistorted normalized image plane

J

The computed jacobian

get_undist_norm_to_img_jacobian(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: Buffer, pt_dist_img: Buffer, J: Buffer) None

Computes the distortion function’s Jacobian (row-major matrix)

pt_undist_norm

The point in the undistorted normalized image plane at which the jacobian is computed

pt_dist_img

The point in the distorted image plane

J

The computed jacobian

get_undist_norm_to_undist_img_transform(self: metavision_sdk_cv.CameraGeometry, m: Buffer) None

Gets the transform that maps a point from the undistorted normalized image plane (i.e. Z = 1) into the undistorted image plane (row-major matrix)

m

The transform

get_undistortion_maps(self: metavision_sdk_cv.CameraGeometry, mapx: Buffer, mapy: Buffer) None
img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, pt_dist_img: Buffer, pt_undist_norm: Buffer) None

Maps a point from the distorted image plane into the undistorted normalized image plane.

pt_dist_img

The point in the distorted image plane

pt_undist_norm

The mapped point in the undistorted normalized image plane

undist_img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, pt_undist_img: Buffer, pt_undist_norm: Buffer) None

Maps a point from the undistorted image plane into the undistorted normalized image plane.

pt_undist_img

The point in the undistorted image plane

pt_undist_norm

The mapped point in the undistorted normalized image plane

undist_norm_to_dist_norm(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: Buffer, pt_dist_norm: Buffer) None

Maps a point from the undistorted normalized image plane into the distorted normalized image plane.

pt_undist_norm

The mapped point in the undistorted normalized image plane

pt_dist_norm

The mapped point in the distorted normalized image plane

undist_norm_to_img(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: Buffer, pt_dist_img: Buffer) None

Maps a point from the undistorted normalized image plane into the distorted image plane.

pt_undist_norm

The point in the undistorted normalized image plane

pt_dist_img

The mapped point in the distorted image plane

undist_norm_to_undist_img(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: Buffer, pt_undist_img: Buffer) None

Maps a point from the undistorted normalized image plane into the undistorted image plane.

pt_undist_norm

The point in the undistorted normalized image plane

pt_undist_img

The mapped point in the undistorted image plane

vector_img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, ctr_dist_img: Buffer, vec_dist_img: Buffer, ctr_undist_norm: Buffer, vec_undist_norm: Buffer) None

Maps a vector from the distorted image plane into the undistorted normalized image plane.

ctr_dist_img

The vector’s starting point in the distorted image plane

vec_dist_img

The vector in the distorted image plane (the vector must be normalized)

ctr_undist_norm

The vector’s starting point in the undistorted normalized image plane

vec_undist_norm

The vector in the undistorted normalized image plane

note

The output vector is normalized

vector_undist_norm_to_img(self: metavision_sdk_cv.CameraGeometry, ctr_undist_norm: Buffer, vec_undist_norm: Buffer, ctr_dist_img: Buffer, vec_dist_img: Buffer) None

Maps a vector from the undistorted normalized image plane into the distorted image plane.

ctr_undist_norm

The vector’s starting point in the undistorted normalized image plane

vec_undist_norm

The vector in the undistorted normalized image plane (the vector must be normalized)

ctr_dist_img

The vector’s starting point in the distorted image plane

vec_dist_img

The mapped vector in the distorted image plane

note

The output vector is normalized

metavision_sdk_cv.load_camera_geometry(json_path: os.PathLike) metavision_sdk_cv.CameraGeometry

Loads a camera geometry from a JSON file.
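
The point-mapping methods take preallocated buffers (e.g. float32 numpy arrays) and write the result in place. A round-trip sketch, assuming a pinhole model built from a 3x3 intrinsics matrix K and a distortion vector D (the coefficient values and layout shown are assumptions):

    import numpy as np
    import metavision_sdk_cv as mv_cv

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]], dtype=np.float32)
    D = np.zeros(5, dtype=np.float32)  # distortion coefficients (assumed layout)
    geom = mv_cv.CameraGeometry(640, 480, K, D)

    # Distorted image point -> undistorted normalized point (written in place)
    pt_dist_img = np.array([100.0, 200.0], dtype=np.float32)
    pt_undist_norm = np.zeros(2, dtype=np.float32)
    geom.img_to_undist_norm(pt_dist_img, pt_undist_norm)

    # ... and back to the distorted image plane
    pt_back = np.zeros(2, dtype=np.float32)
    geom.undist_norm_to_img(pt_undist_norm, pt_back)  # pt_back ~ pt_dist_img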

class metavision_sdk_cv.RectifiedCameraGeometry

Class representing the rectified geometry for a given camera.

K(self: metavision_sdk_cv.RectifiedCameraGeometry) numpy.ndarray[numpy.float32[3, 3]]

Returns the 3x3 projection matrix mapping from rectified camera coordinates to rectified undistorted image coordinates.

R(self: metavision_sdk_cv.RectifiedCameraGeometry) numpy.ndarray[numpy.float32[3, 3]]

Returns the 3x3 rotation matrix mapping from regular undistorted normalized coordinates to rectified normalized coordinates.

camera_to_rect_camera(self: metavision_sdk_cv.RectifiedCameraGeometry, pt3_camera: Buffer, pt3_rect: Buffer) None

Maps a 3d point from camera coordinates to rectified camera coordinates.

get_mask(self: metavision_sdk_cv.RectifiedCameraGeometry, border_size: int = 0) numpy.ndarray[numpy.uint8]

Returns the mask representing valid pixels in the rectified image, with an optional disabled border.

border_size

The size of the border to disable in the returned rectified image mask.

return

The rectified image mask

height(self: metavision_sdk_cv.RectifiedCameraGeometry) int

Returns the height of the rectified image.

img_to_rect_undist_img(*args, **kwargs)

Overloaded function.

  1. img_to_rect_undist_img(self: metavision_sdk_cv.RectifiedCameraGeometry, map_img: numpy.ndarray, map_rect_undist_img: numpy.ndarray) -> None

Remaps a regular image to a rectified image.

map_img

The regular image to remap

map_rect_undist_img

The remapped rectified image

  2. img_to_rect_undist_img(self: metavision_sdk_cv.RectifiedCameraGeometry, pt2_img: Buffer, pt2_rect_undist_img: Buffer) -> None

Maps a 2d point from regular image coordinates to rectified image coordinates.

rect_camera_to_camera(self: metavision_sdk_cv.RectifiedCameraGeometry, pt3_rect: Buffer, pt3_camera: Buffer) None

Maps a 3d point from rectified camera coordinates to camera coordinates.

rect_camera_to_rect_undist_img(self: metavision_sdk_cv.RectifiedCameraGeometry, pt3_rect: Buffer, pt2_rect_undist_img: Buffer) None

Projects a 3d point from camera coordinates into the corresponding 2d point in rectified image coordinates.

rect_undist_img_to_img(self: metavision_sdk_cv.RectifiedCameraGeometry, pt2_img: Buffer, pt3_rectmaster: Buffer) None

Maps a 2d point from rectified image coordinates to regular image coordinates.

rect_undist_img_to_rect_undist_norm(self: metavision_sdk_cv.RectifiedCameraGeometry, pt2_rect_undist_img: Buffer, pt2_rect_undist_norm: Buffer) None

Maps a 2d point from rectified image coordinates to rectified normalized coordinates.

width(self: metavision_sdk_cv.RectifiedCameraGeometry) int

Returns the width of the rectified image.

class metavision_sdk_cv.RectifiedStereoGeometry

Class representing the rectified stereo geometry for a given camera pair.

disparity_map_to_camera_point_cloud(self: metavision_sdk_cv.RectifiedStereoGeometry, disp_map_master: numpy.ndarray[numpy.float32]) list

Converts a disparity map in master rectified image coordinates to a 3d point cloud in master regular camera coordinates.

note

Invalid disparities, recognized as std::numeric_limits<FloatType>::quiet_NaN(), are ignored.

disparity_map_to_depth_map(self: metavision_sdk_cv.RectifiedStereoGeometry, disp_map_master: numpy.ndarray[numpy.float32], depth_map_master: numpy.ndarray[numpy.float32]) None

Converts a disparity map in master rectified image coordinates to a depth map in master rectified camera coordinates.

note

Invalid disparities, recognized as std::numeric_limits<FloatType>::quiet_NaN(), are converted to a depth of -1.f.

disparity_sign(self: metavision_sdk_cv.RectifiedStereoGeometry) float

Returns the sign of valid disparity values.

disparity_to_depth(self: metavision_sdk_cv.RectifiedStereoGeometry, disp_master: float) float

Converts a disparity value in master rectified image coordinates to a depth value in master rectified camera coordinates.

note

Invalid disparities, recognized as std::numeric_limits<FloatType>::quiet_NaN(), are converted to a depth of -1.f.

disparity_to_xyz(self: metavision_sdk_cv.RectifiedStereoGeometry, disp_master: float, pt2_rect_undist_img: Buffer, pt3_rect_master: Buffer) None

Converts a disparity value and 2d point in master rectified image coordinates to a 3d point in master rectified camera coordinates.

note

Invalid disparities, recognized as std::numeric_limits<FloatType>::quiet_NaN(), are converted to a point with depth -1.f.

is_rectified_horizontally(self: metavision_sdk_cv.RectifiedStereoGeometry) bool

Indicates whether the rectification is done horizontally.

master(self: metavision_sdk_cv.RectifiedStereoGeometry) metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified geometry of the master camera.

rect_camera_master_to_rect_camera_slave(self: metavision_sdk_cv.RectifiedStereoGeometry, pt3_rectmaster: Buffer, pt3_rectslave: Buffer) None

Maps a 3d point from master rectified camera coordinates to slave rectified camera coordinates.

rect_camera_slave_to_rect_camera_master(self: metavision_sdk_cv.RectifiedStereoGeometry, pt3_rectslave: Buffer, pt3_rectmaster: Buffer) None

Maps a 3d point from slave rectified camera coordinates to master rectified camera coordinates.

slave(self: metavision_sdk_cv.RectifiedStereoGeometry) metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified geometry of the slave camera.

class metavision_sdk_cv.StereoCameraGeometry

Class representing the stereo camera geometry.

Note

The stereo camera geometry implementation only supports pinhole camera models

FloatType

The floating point type used for the computations

R_m_s(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 3]]

Returns the 3x3 rotation matrix mapping 3d points from the slave camera coordinates to the master camera.

R_s_m(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 3]]

Returns the 3x3 rotation matrix mapping 3d points from the master camera coordinates to the slave camera.

T_m_s(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[4, 4]]

Returns the 4x4 transformation matrix mapping 3d points from the slave camera coordinates to the master camera coordinates.

T_s_m(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[4, 4]]

Returns the 4x4 transformation matrix mapping 3d points from the master camera coordinates to the slave camera coordinates.

camera_master_to_camera_slave(self: metavision_sdk_cv.StereoCameraGeometry, pt3_master: Buffer, pt3_slave: Buffer) None

Maps a 3d point from the master camera coordinates to the slave camera coordinates.

camera_slave_to_camera_master(self: metavision_sdk_cv.StereoCameraGeometry, pt3_slave: Buffer, pt3_master: Buffer) None

Maps a 3d point from the slave camera coordinates to the master camera coordinates.

clone(self: metavision_sdk_cv.StereoCameraGeometry) metavision_sdk_cv.StereoCameraGeometry

Clones the stereo camera geometry.

pos_m_s(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 1]]

Returns the 3d position of the optical center of the master camera expressed in the slave camera coordinates.

pos_s_m(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 1]]

Returns the 3d position of the optical center of the slave camera expressed in the master camera coordinates.

proj_master(self: metavision_sdk_cv.StereoCameraGeometry) metavision_sdk_cv.CameraGeometry

Returns the pinhole camera geometry of the master camera.

proj_slave(self: metavision_sdk_cv.StereoCameraGeometry) metavision_sdk_cv.CameraGeometry

Returns the pinhole camera geometry of the slave camera.

rect(*args, **kwargs)

Overloaded function.

  1. rect(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedStereoGeometry

Returns the stereo rectified geometry.

note

This function initializes the rectified geometry if not previously done.

  2. rect(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedStereoGeometry

Returns the stereo rectified geometry.

std::runtime_error

if the rectified geometry has not been initialized

rect_master(*args, **kwargs)

Overloaded function.

  1. rect_master(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified stereo geometry of the master camera.

note

This function initializes the rectified geometry if not previously done.

  2. rect_master(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified stereo geometry of the master camera.

std::runtime_error

if the rectified geometry has not been initialized

rect_slave(*args, **kwargs)

Overloaded function.

  1. rect_slave(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified stereo geometry of the slave camera.

note

This function initializes the rectified geometry if not previously done.

  2. rect_slave(self: metavision_sdk_cv.StereoCameraGeometry) -> metavision_sdk_cv.RectifiedCameraGeometry

Returns the rectified stereo geometry of the slave camera.

std::runtime_error

if the rectified geometry has not been initialized

t_m_s(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 1]]

Returns the 3x1 translation vector mapping 3d points from the slave camera coordinates to the master camera.

t_s_m(self: metavision_sdk_cv.StereoCameraGeometry) numpy.ndarray[numpy.float32[3, 1]]

Returns the 3x1 translation vector mapping 3d points from the master camera coordinates to the slave camera.

metavision_sdk_cv.load_stereo_camera_geometry(json_path: os.PathLike, initialize_rectified_geometry: bool = False) metavision_sdk_cv.StereoCameraGeometry

Loads a stereo camera geometry from a JSON file.
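
A sketch loading a calibration file (path hypothetical) and querying the rectified geometry for depth conversion:

    import metavision_sdk_cv as mv_cv

    stereo = mv_cv.load_stereo_camera_geometry("stereo_calib.json",  # hypothetical path
                                               initialize_rectified_geometry=True)

    rect = stereo.rect()    # rectified stereo geometry
    master = rect.master()  # rectified geometry of the master camera
    print(master.width(), master.height(), rect.is_rectified_horizontally())

    # Disparity (in master rectified image coords) -> depth (master rectified
    # camera coords); invalid disparities (NaN) map to -1
    depth = rect.disparity_to_depth(12.5)
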
class metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm, width: int, height: int, maximum_flow_magnitude: float, flow_magnitude_scale: float, visualization_method: Metavision::DenseFlowFrameGeneratorAlgorithm::VisualizationMethod, accumulation_policy: Metavision::DenseFlowFrameGeneratorAlgorithm::AccumulationPolicy, resolution_subsampling: int = -1) None

Algorithm used to generate visualization images of dense optical flow streams.

class AccumulationPolicy(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm.AccumulationPolicy, value: int) None

Policy for accumulating multiple flow events at a given pixel

Members:

Average

PeakMagnitude

Last

property name
process_events(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm, flow_buf: metavision_sdk_cv.EventOpticalFlowBuffer) None

Processes a buffer of flow events.

flow_buf

Input buffer of flow events. It can be converted to a numpy structured array using .numpy()

note

Successive calls to process_events will accumulate data at each pixel until generate or reset is called.

class VisualizationMethod(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm.VisualizationMethod, value: int) None

Method to visualize dense flow fields

Members:

DenseColorMap

Arrows

property name
generate(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm, frame: numpy.ndarray) None

Generates a flow visualization frame.

frame

Frame that will contain the flow visualization

note

In DenseColorMap mode, the frame will be reset to zero prior to being filled with the flow visualization. In Arrows mode, the flow visualization will be overlaid on top of the input frame.

std::invalid_argument

if the frame doesn’t have the expected type and geometry

generate_legend_image(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm, legend_frame: numpy.ndarray) None

Generates a legend image for the flow visualization.

legend_frame

Frame that will contain the flow visualization legend

std::invalid_argument

if the frame doesn’t have the expected type

process_events(self: metavision_sdk_cv.DenseFlowFrameGeneratorAlgorithm, flow_np: numpy.ndarray[Metavision::EventOpticalFlow]) None

Processes a buffer of flow events.

flow_np

Input chunk of flow events (numpy structured array of EventOpticalFlow)

note

Successive calls to process_events will accumulate data at each pixel until generate or reset is called.
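
A sketch generating a visualization frame from flow events produced by any of the flow algorithms documented below (e.g. PlaneFittingFlowAlgorithm); the frame shape and dtype are assumed to be 8-bit, 3-channel at sensor resolution:

    import numpy as np
    import metavision_sdk_cv as mv_cv

    gen = mv_cv.DenseFlowFrameGeneratorAlgorithm(
        640, 480,
        maximum_flow_magnitude=100.0, flow_magnitude_scale=1.0,
        visualization_method=mv_cv.DenseFlowFrameGeneratorAlgorithm.VisualizationMethod.DenseColorMap,
        accumulation_policy=mv_cv.DenseFlowFrameGeneratorAlgorithm.AccumulationPolicy.Average)

    # flow_buf: an EventOpticalFlowBuffer filled by a flow algorithm
    gen.process_events(flow_buf.numpy())

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # assumed frame format
    gen.generate(frame)  # fills frame with the accumulated flow visualization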

class metavision_sdk_cv.Event2dFrequencyBuffer(self: metavision_sdk_cv.Event2dFrequencyBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_cv.Event2dFrequencyBuffer, copy: bool = False) numpy.ndarray[Metavision::Event2dFrequency<float>]
copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_cv.Event2dFrequencyBuffer, size: int) None

resizes the buffer to the specified size

size

the new size of the buffer

class metavision_sdk_cv.Event2dFrequencyClusterBuffer(self: metavision_sdk_cv.Event2dFrequencyClusterBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_cv.Event2dFrequencyClusterBuffer, copy: bool = False) numpy.ndarray[Metavision::Event2dFrequencyCluster<float>]
copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_cv.Event2dFrequencyClusterBuffer, size: int) None

resizes the buffer to the specified size

size

the new size of the buffer

class metavision_sdk_cv.Event2dPeriodBuffer(self: metavision_sdk_cv.Event2dPeriodBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_cv.Event2dPeriodBuffer, copy: bool = False) numpy.ndarray[Metavision::Event2dPeriod<float>]
copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_cv.Event2dPeriodBuffer, size: int) None

resizes the buffer to the specified size

size

the new size of the buffer

class metavision_sdk_cv.EventOpticalFlowBuffer(self: metavision_sdk_cv.EventOpticalFlowBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_cv.EventOpticalFlowBuffer, copy: bool = False) numpy.ndarray[Metavision::EventOpticalFlow]
copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_cv.EventOpticalFlowBuffer, size: int) None

resizes the buffer to the specified size

size

the new size of the buffer
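
The copy flag controls whether the returned array shares the buffer’s memory; a short sketch (the buffer would normally be filled by one of the flow algorithms documented below):

    import metavision_sdk_cv as mv_cv

    flow_buf = mv_cv.EventOpticalFlowBuffer(0)
    # ... flow_buf filled by a flow algorithm ...

    view = flow_buf.numpy()               # shares memory with the buffer
    snapshot = flow_buf.numpy(copy=True)  # independent copy, safe to keep

    # Reusing flow_buf in a later process_events call may change the contents
    # seen through `view`; `snapshot` is unaffected.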

class metavision_sdk_cv.FrequencyAlgorithm(self: metavision_sdk_cv.FrequencyAlgorithm, width: int, height: int, filter_length: int = 7, min_freq: float = 10.0, max_freq: float = 150.0, diff_thresh_us: int = 1500, output_all_burst_events: bool = False) None

Algorithm used to estimate the flickering frequency (Hz) of the pixels of the sensor.

Parameters
  • width (int) – Sensor’s width

  • height (int) – Sensor’s height

  • filter_length (int) – Number of measures of the same period before outputting an event

  • min_freq (float) – Minimum frequency to output

  • max_freq (float) – Maximum frequency to output

  • diff_thresh_us (unsigned int) – Maximum difference (us) allowed between two consecutive periods to be considered the same.

  • output_all_burst_events (bool) – Whether all the events of a burst must be output or not

static get_empty_output_buffer() metavision_sdk_cv.Event2dFrequencyBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.FrequencyAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.Event2dFrequencyBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.FrequencyAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.Event2dFrequencyBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.FrequencyAlgorithm, diff_thresh: float) None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.FrequencyAlgorithm, filter_length: int) bool

Sets the filter length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_freq(self: metavision_sdk_cv.FrequencyAlgorithm, max_freq: float) bool

Sets maximum frequency to output.

note

The value given has to be greater than the minimum frequency

max_freq

Maximum frequency to output

return

false if value could not be set (invalid value)

set_min_freq(self: metavision_sdk_cv.FrequencyAlgorithm, min_freq: float) bool

Sets minimum frequency to output.

note

The value given has to be smaller than the maximum frequency

min_freq

Minimum frequency to output

return

false if value could not be set (invalid value)

class metavision_sdk_cv.FrequencyClusteringAlgorithm(self: metavision_sdk_cv.FrequencyClusteringAlgorithm, width: int, height: int, min_cluster_size: int = 1, max_frequency_diff: float = 5.0, max_time_diff: int = 1000, filter_alpha: float = 0.10000000149011612) None

Frequency clustering algorithm. Processes input frequency events and groups them into clusters.

An event belongs to a cluster if it is connected (8-connectivity) to the cluster, its timestamp is within a certain threshold of the last update of the cluster and its frequency is within a certain threshold of the last updated frequency.

The final position of each cluster is a filtered version of the position of the events that get associated to it.

Parameters
  • width (int) – Sensor’s width

  • height (int) – Sensor’s height

  • min_cluster_size (int) – Minimum size of a cluster to be output (in pixels)

  • max_frequency_diff (float) – Maximum frequency difference for an input event to be associated to an existing cluster

  • max_time_diff (int) – Maximum time difference to link an event to an existing cluster

  • filter_alpha (float) – Filter weight for updating the cluster position with a new event

static get_empty_output_buffer() metavision_sdk_cv.Event2dFrequencyClusterBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.FrequencyClusteringAlgorithm, input_np: numpy.ndarray[Metavision::Event2dFrequency<float>], output_buf: metavision_sdk_cv.Event2dFrequencyClusterBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array of Event2dFrequency events)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.FrequencyClusteringAlgorithm, input_buf: metavision_sdk_cv.Event2dFrequencyBuffer, output_buf: metavision_sdk_cv.Event2dFrequencyClusterBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
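
Frequency estimation and clustering are designed to be chained; a sketch with illustrative parameters (cd_events as in the earlier examples):

    import metavision_sdk_cv as mv_cv

    freq_algo = mv_cv.FrequencyAlgorithm(640, 480, min_freq=80.0, max_freq=120.0)
    clust_algo = mv_cv.FrequencyClusteringAlgorithm(640, 480, min_cluster_size=5,
                                                    max_time_diff=10000)

    freq_buf = mv_cv.FrequencyAlgorithm.get_empty_output_buffer()
    clust_buf = mv_cv.FrequencyClusteringAlgorithm.get_empty_output_buffer()

    freq_algo.process_events(cd_events, freq_buf)   # CD -> frequency events
    clust_algo.process_events(freq_buf, clust_buf)  # frequency -> cluster events
    clusters = clust_buf.numpy()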

class metavision_sdk_cv.PlaneFittingFlowAlgorithm(self: metavision_sdk_cv.PlaneFittingFlowAlgorithm, width: int, height: int, radius: int = 3, normalized_flow_magnitude: float = 100, min_spatial_consistency_ratio: float = -1, max_spatial_consistency_ratio: float = -1, fitting_error_tolerance: int = -1, neighbor_sample_fitting_fraction: float = 0.30000001192092896) None

This class is an optimized implementation of the dense optical flow approach proposed in Benosman R., Clercq C., Lagorce X., Ieng S. H., & Bartolozzi C. (2013). Event-based visual flow. IEEE transactions on neural networks and learning systems, 25(2), 407-417.

note

This dense optical flow approach estimates the flow along the edge’s normal, by fitting a plane locally in the time-surface. The plane fitting helps regularize the estimation, but estimated flow results are still relatively sensitive to noise. The algorithm is run for each input event, generating a dense stream of flow events, but making it relatively costly on high event-rate scenes.

see

TripletMatchingFlowAlgorithm for a more efficient but more noise-sensitive dense optical flow approach.

see

SparseOpticalFlowAlgorithm for a flow algorithm based on sparse feature tracking, estimating the full scene motion. It is staged, hence more efficient on high event-rate scenes, but also more complex to tune and dependent on the presence of trackable features in the scene.

static get_empty_output_buffer() metavision_sdk_cv.EventOpticalFlowBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.PlaneFittingFlowAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.PlaneFittingFlowAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

class metavision_sdk_cv.PlaneFittingFlowEstimator(self: metavision_sdk_cv.PlaneFittingFlowEstimator, radius: int = 3, enable_flow_normalization: bool = False, min_spatial_consistency_ratio: float = -1.0, max_spatial_consistency_ratio: float = -1.0, fitting_error_tolerance: int = -1, neighbor_sample_fitting_fraction: float = 0.30000001192092896) None

Class computing the flow’s component in the normal direction of an edge moving in a time surface.

The flow is computed by selecting recent timestamp values in a time surface around a given location, fitting a plane to these timestamps using linear least-squares, and inferring the flow from the plane’s estimated parameters.

This class can reject visual flow estimates based on two quality indicators. The first is the plane fitting error on the timestamps of the time surface, which is checked to lie within a configured tolerance. The second, denoted spatial consistency, measures the consistency between the radius of the considered neighborhood and the distance covered by the edge during the time period observed in the local time surface: since the visual flow estimates the speed of the local edge, we can calculate the distance covered by the edge between the timestamp of the oldest event used for plane fitting and the center timestamp. The ratio between this covered distance and the neighborhood radius serves as a quality indicator for the estimated visual flow, and estimates whose spatial consistency ratio lies outside a configured range can be rejected.

Constructor.

radius

Radius used to select timestamps in a time surface around a given location

enable_flow_normalization

Flag to indicate if the estimated flow should be normalized

min_spatial_consistency_ratio

Lower bound of the acceptable range for the spatial consistency ratio quality indicator. Pass a negative value to disable this test.

max_spatial_consistency_ratio

Upper bound of the acceptable range for the spatial consistency ratio quality indicator. Pass a negative value to disable this test.

fitting_error_tolerance

Tolerance used to accept visual flow estimates with low enough fitting error. Pass a negative value to disable this test.

neighbor_sample_fitting_fraction

Fraction used to determine how many timestamps from the timesurface neighborhood are used to fit the plane.

get_flow(self: metavision_sdk_cv.PlaneFittingFlowEstimator, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, x: int, y: int, c: int = 0, time_limit: int = -1) tuple

Tries to estimate the visual flow at the given location

time_surface

Input time surface

x

Abscissa at which the flow is to be estimated

y

Ordinate at which the flow is to be estimated

c

Polarity at which timestamps are to be sampled. If the value is -1, the polarity is automatically determined by looking at the most recent timestamp at the given location

time_limit

Optional parameter that contains the oldest timestamp used during the flow estimation if the estimation has succeeded

return

tuple (True, vx, vy) if the estimation has succeeded, (False, None, None) otherwise. vx and vy are expressed in pixels/s
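
A sketch of a single flow query; the MostRecentTimestampBuffer constructor signature (rows, cols, channels) is an assumption, and the time surface would normally be populated from the event stream:

    import metavision_sdk_core as mv_core
    import metavision_sdk_cv as mv_cv

    estimator = mv_cv.PlaneFittingFlowEstimator(radius=3)

    # Time surface of most recent timestamps (constructor args assumed)
    time_surface = mv_core.MostRecentTimestampBuffer(480, 640, 1)
    # ... update time_surface from incoming events ...

    ok, vx, vy = estimator.get_flow(time_surface, 320, 240)
    if ok:
        print("flow at (320, 240): (%f, %f) px/s" % (vx, vy))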

class metavision_sdk_cv.PeriodAlgorithm(self: metavision_sdk_cv.PeriodAlgorithm, width: int, height: int, filter_length: int = 7, min_period: float = 6500, max_period: float = 100000.0, diff_thresh_us: int = 1500, output_all_burst_events: bool = False) None

Algorithm used to estimate the flickering period of the pixels of the sensor.

Parameters
  • width (int) – Sensor’s width

  • height (int) – Sensor’s height

  • filter_length (int) – Number of measures of the same period before outputting an event

  • min_period (float) – Minimum period (us) to output

  • max_period (float) – Maximum period (us) to output

  • diff_thresh_us (int) – Maximum difference (us) allowed between two consecutive periods to be considered the same

  • output_all_burst_events (bool) – Whether all the events of a burst must be output or not

static get_empty_output_buffer() metavision_sdk_cv.Event2dPeriodBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.PeriodAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.Event2dPeriodBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.PeriodAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.Event2dPeriodBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.PeriodAlgorithm, diff_thresh: float) None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.PeriodAlgorithm, filter_length: int) bool

Sets the filter length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_period(self: metavision_sdk_cv.PeriodAlgorithm, max_period: float) bool

Sets maximum period to output.

note

The value max_period has to be larger than the minimum period

max_period

Maximum period (us) to output

return

false if value could not be set (invalid value)

set_min_period(self: metavision_sdk_cv.PeriodAlgorithm, min_period: float) bool

Sets minimum period to output.

note

The value min_period has to be smaller than the maximum period

min_period

Minimum period (us) to output

return

false if value could not be set (invalid value)

class metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm(self: metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm) None

add_flow_for_frame_update(*args, **kwargs)

Overloaded function.

  1. add_flow_for_frame_update(self: metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm, flow_np: numpy.ndarray[Metavision::EventOpticalFlow]) -> None

Stores one motion arrow per centroid (several optical flow events may have the same centroid) in the motion arrow map to be displayed later using the update_frame_with_flow method.

  2. add_flow_for_frame_update(self: metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm, flow_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

Stores one motion arrow per centroid (several optical flow events may have the same centroid) in the motion arrow map to be displayed later using the update_frame_with_flow method.

clear_ids(self: metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm) None
update_frame_with_flow(self: metavision_sdk_cv.SparseFlowFrameGeneratorAlgorithm, display_mat: numpy.ndarray) None

Updates the input frame with the centroids’ motion stored in the history.

Clears the history afterwards

class metavision_sdk_cv.SparseOpticalFlowAlgorithm(*args, **kwargs)

Overloaded function.

  1. __init__(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, width: int, height: int, config: metavision_sdk_cv.SparseOpticalFlowConfigPreset = <SparseOpticalFlowConfigPreset.FastObjects: 1>) -> None

  2. __init__(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, width: int, height: int, distance_gain: float = 0.05000000074505806, damping: float = 0.7070000171661377, omega_cutoff: float = 7.0, min_cluster_size: int = 7, max_link_time: int = 30000, match_polarity: bool = True, use_simple_match: bool = True, full_square: bool = True, last_event_only: bool = False, size_threshold: int = 100000000) -> None

static get_empty_output_buffer() metavision_sdk_cv.EventOpticalFlowBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
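
Sparse flow events pair naturally with SparseFlowFrameGeneratorAlgorithm above; a sketch (frame format assumed, cd_events as in the earlier examples):

    import numpy as np
    import metavision_sdk_cv as mv_cv

    flow_algo = mv_cv.SparseOpticalFlowAlgorithm(
        640, 480, mv_cv.SparseOpticalFlowConfigPreset.FastObjects)
    flow_buf = mv_cv.SparseOpticalFlowAlgorithm.get_empty_output_buffer()
    frame_gen = mv_cv.SparseFlowFrameGeneratorAlgorithm()

    flow_algo.process_events(cd_events, flow_buf)
    frame_gen.add_flow_for_frame_update(flow_buf)

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # assumed frame format
    frame_gen.update_frame_with_flow(frame)  # draws arrows, then clears history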

class metavision_sdk_cv.SpatioTemporalContrastAlgorithm(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, width: int, height: int, threshold: int, cut_trail: bool = True) None

The SpatioTemporalContrast filter is a noise filter that uses the exponential response of a pixel to a change of light to filter out wrong detections and trails.

For an event to be forwarded, it needs to be preceded by another one in a given time window; this ensures that the spatio-temporal contrast detection is strong enough. It is also possible to then cut all the following events up to a change of polarity in the stream for that particular pixel (strong trail removal). Note that this will remove signal if two successive edges of the same polarity are detected (which should not happen that frequently).

note

The timestamp may be stored in different types: 64, 32 or 16 bits. The behavior may vary from one size to another since the number of significant bits changes. Before using a version with fewer than 32 bits, check that the behavior is still valid for your usage.

Builds a new SpatioTemporalContrast object.

width

Maximum X coordinate of the events in the stream

height

Maximum Y coordinate of the events in the stream

threshold

Length of the time window for filtering (in us)

cut_trail

If true, after an event goes through, it removes all events until change of polarity

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

class metavision_sdk_cv.TimeGradientFlowAlgorithm(self: metavision_sdk_cv.TimeGradientFlowAlgorithm, width: int, height: int, radius: int, min_flow_mag: float, max_flow_mag: float, bit_cut: int) None

This class is a local and dense implementation of Optical Flow from events.

It computes the optical flow along the edge’s normal by analyzing the recent timestamps of only the left, right, top and bottom neighbors K pixels away (i.e. not the whole neighborhood). The estimated flow results are therefore still quite sensitive to noise. The algorithm is run for each input event, generating a dense stream of flow events, but making it relatively costly on high event-rate scenes. The bit size of the timestamp representation can be reduced to accelerate the processing.

note

This approach is dense in the sense that it processes events at the sensor resolution and produces OpticalFlowEvents potentially over the whole sensor matrix.

timestamp_type

Type of the timestamp used to compute the optical flow. Typically Metavision::timestamp. A lighter type (like std::uint32_t) can be used to lower processing time when critical.

static get_empty_output_buffer() metavision_sdk_cv.EventOpticalFlowBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.TimeGradientFlowAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.TimeGradientFlowAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

class metavision_sdk_cv.TrailFilterAlgorithm(self: metavision_sdk_cv.TrailFilterAlgorithm, width: int, height: int, threshold: int) None

Filter that accepts an event either if the last event at the same coordinates was of different polarity, or if it happened at least a given amount of time after the last event.

Builds a new TrailFilterAlgorithm object.

width

Maximum X coordinate of the events in the stream

height

Maximum Y coordinate of the events in the stream

threshold

Length of the time window for activity filtering (in us)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.TrailFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.TrailFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.TrailFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
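
The noise filters in this module compose into pipelines by passing buffers from one stage to the next; a sketch chaining SpatioTemporalContrastAlgorithm and TrailFilterAlgorithm (thresholds illustrative, cd_events as in the earlier examples):

    import metavision_sdk_cv as mv_cv

    stc = mv_cv.SpatioTemporalContrastAlgorithm(640, 480, 10000, cut_trail=True)
    trail = mv_cv.TrailFilterAlgorithm(640, 480, 100000)

    stc_buf = mv_cv.SpatioTemporalContrastAlgorithm.get_empty_output_buffer()
    trail_buf = mv_cv.TrailFilterAlgorithm.get_empty_output_buffer()

    stc.process_events(cd_events, stc_buf)    # CD events -> STC-filtered events
    trail.process_events(stc_buf, trail_buf)  # chain buffers directly
    filtered = trail_buf.numpy()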

class metavision_sdk_cv.TripletMatchingFlowAlgorithm(*args, **kwargs)

This class implements the dense optical flow approach proposed in Shiba S., Aoki Y., & Gallego G. (2022). "Fast Event-Based Optical Flow Estimation by Triplet Matching". IEEE Signal Processing Letters, 29, 2712-2716.

note

This dense optical flow approach estimates the flow along the edge’s normal, by locally searching for aligned events triplets. The flow is estimated by averaging all aligned event triplets found, which helps regularize the estimates, but results are still relatively sensitive to noise. The algorithm is run for each input event, generating a dense stream of flow events, but making it relatively costly on high event-rate scenes.

see

PlaneFittingFlowAlgorithm for a slightly more accurate but more expensive dense optical flow approach.

see

SparseOpticalFlowAlgorithm for a flow algorithm based on sparse feature tracking, estimating the full scene motion. It is staged, hence more efficient on high event-rate scenes, but also more complex to tune and dependent on the presence of trackable features in the scene.

Overloaded function.

  1. __init__(self: metavision_sdk_cv.TripletMatchingFlowAlgorithm, width: int, height: int, radius: float, dt_min: int, dt_max: int) -> None

  2. __init__(self: metavision_sdk_cv.TripletMatchingFlowAlgorithm, width: int, height: int, radius: float, min_flow_mag: float, max_flow_mag: float) -> None

static get_empty_output_buffer() metavision_sdk_cv.EventOpticalFlowBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.TripletMatchingFlowAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.TripletMatchingFlowAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
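
A sketch showing both constructor overloads (keyword arguments disambiguate them; values illustrative) and the usual processing pattern, with cd_events as in the earlier examples:

    import metavision_sdk_cv as mv_cv

    # Overload 1: search window bounded by time differences (us)
    algo = mv_cv.TripletMatchingFlowAlgorithm(640, 480, radius=3.0,
                                              dt_min=1000, dt_max=100000)
    # Overload 2: equivalently bounded by flow magnitudes (px/s)
    # algo = mv_cv.TripletMatchingFlowAlgorithm(640, 480, radius=3.0,
    #                                           min_flow_mag=10.0,
    #                                           max_flow_mag=1000.0)

    flow_buf = mv_cv.TripletMatchingFlowAlgorithm.get_empty_output_buffer()
    algo.process_events(cd_events, flow_buf)
    flows = flow_buf.numpy()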