SDK CV Python bindings API

class metavision_sdk_cv.ActivityNoiseFilterAlgorithm

Filter that accepts events if a similar event has happened during a certain time window in the past, in the neighborhood of its coordinates.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.ActivityNoiseFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
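
Example: a minimal usage sketch of the buffer-based pattern above. The constructor arguments (sensor width, height and the filtering time window in microseconds) are assumptions for illustration, and EventCD is assumed to be the structured dtype exposed by metavision_sdk_base:

    import numpy as np
    from metavision_sdk_base import EventCD
    from metavision_sdk_cv import ActivityNoiseFilterAlgorithm

    # Assumed constructor: sensor geometry plus a time window in us.
    activity_filter = ActivityNoiseFilterAlgorithm(640, 480, 20000)

    # Reusable output buffer of the correct event type.
    events_buf = ActivityNoiseFilterAlgorithm.get_empty_output_buffer()

    # Illustrative input chunk: a numpy structured array of CD events
    # (in practice it would come from an event iterator or another buffer's .numpy()).
    events = np.zeros(3, dtype=EventCD)
    events["x"] = [10, 10, 300]
    events["y"] = [20, 20, 40]
    events["p"] = [1, 1, 0]
    events["t"] = [100, 150, 200]

    activity_filter.process_events(events, events_buf)
    filtered = events_buf.numpy()  # structured array view of the kept events
    print(len(filtered), "events kept")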

class metavision_sdk_cv.AntiFlickerAlgorithm

Algorithm used to remove flickering events given a frequency interval.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.AntiFlickerAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.AntiFlickerAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.AntiFlickerAlgorithm, diff_thresh: float) -> None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.AntiFlickerAlgorithm, filter_length: int) -> bool

Sets filter’s length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_freq(self: metavision_sdk_cv.AntiFlickerAlgorithm, max_freq: float) -> bool

Sets maximum frequency of the flickering interval.

note

The value given has to be strictly greater than the minimum frequency

max_freq

Maximum frequency of the flickering interval

return

false if value could not be set (invalid value)

set_min_freq(self: metavision_sdk_cv.AntiFlickerAlgorithm, min_freq: float) -> bool

Sets minimum frequency of the flickering interval.

note

The value given has to be strictly less than the maximum frequency

min_freq

Minimum frequency of the flickering interval

return

false if value could not be set (invalid value)
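
Example: a sketch of tuning the flicker band with the setters above and then filtering a chunk of events. The constructor arguments are assumptions, and events_in stands for any CD event chunk (numpy structured array or EventCDBuffer) obtained elsewhere:

    from metavision_sdk_cv import AntiFlickerAlgorithm

    # Assumed constructor: sensor width and height.
    anti_flicker = AntiFlickerAlgorithm(640, 480)

    # Remove events flickering roughly between 90 Hz and 130 Hz.
    anti_flicker.set_min_freq(90.0)
    anti_flicker.set_max_freq(130.0)
    anti_flicker.set_filter_length(7)              # median filter over 7 measured periods
    anti_flicker.set_difference_threshold(1500.0)  # allowed period difference (unit assumed to be us)

    out_buf = AntiFlickerAlgorithm.get_empty_output_buffer()
    anti_flicker.process_events(events_in, out_buf)
    flicker_free = out_buf.numpy()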

class metavision_sdk_cv.CameraGeometry

A camera geometry is a mathematical model that allows mapping points from the world to the image plane and vice versa

camera_to_img(self: metavision_sdk_cv.CameraGeometry, pt_c: buffer, pt_dist_img: buffer) -> None

Maps a point from the camera’s coordinates system into the distorted image plane.

pt_c

The 3D point in the camera’s coordinates system

pt_dist_img

The mapped point in the distorted image plane

camera_to_undist_img(self: metavision_sdk_cv.CameraGeometry, pt_c: buffer, pt_undist_img: buffer) -> None

Maps a point from the camera’s coordinates system into the undistorted image plane.

pt_c

The 3D point in the camera’s coordinates system

pt_undist_img

The mapped point in the undistorted image plane

get_distance_to_image_plane(self: metavision_sdk_cv.CameraGeometry) -> float

Gets the distance between the camera’s optical center and the undistorted image plane.

get_distortion_maps(self: metavision_sdk_cv.CameraGeometry, mapx: buffer, mapy: buffer) -> None
get_homography_and_distortion_maps(self: metavision_sdk_cv.CameraGeometry, H: buffer, mapx: buffer, mapy: buffer) -> None
get_image_size(self: metavision_sdk_cv.CameraGeometry) -> tuple

Gets the sensor’s size, returns a tuple: (width, height)

get_img_to_undist_norm_jacobian(self: metavision_sdk_cv.CameraGeometry, pt_dist_img: buffer, pt_undist_norm: buffer, J: buffer) -> None

Computes the undistortion function's jacobian (row-major matrix)

pt_dist_img

The point in the distorted image plane at which the jacobian is computed

pt_undist_norm

The point in the undistorted normalized image plane

J

The computed jacobian

get_undist_norm_to_img_jacobian(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: buffer, pt_dist_img: buffer, J: buffer) -> None

Computes the distortion function's jacobian (row-major matrix)

pt_undist_norm

The point in the undistorted normalized image plane at which the jacobian is computed

pt_dist_img

The point in the distorted image plane

J

The computed jacobian

get_undist_norm_to_undist_img_transform(self: metavision_sdk_cv.CameraGeometry, m: buffer) -> None

Gets the transform that maps a point from the undistorted normalized image plane (i.e. Z = 1) into the undistorted image plane (row-major matrix)

m

The transform

get_undistortion_maps(self: metavision_sdk_cv.CameraGeometry, mapx: buffer, mapy: buffer) -> None
img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, pt_dist_img: buffer, pt_undist_norm: buffer) -> None

Maps a point from the distorted image plane into the undistorted normalized image plane.

pt_dist_img

The point in the distorted image plane

pt_undist_norm

The mapped point in the undistorted normalized image plane

undist_img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, pt_undist_img: buffer, pt_undist_norm: buffer) -> None

Maps a point from the undistorted image plane into the undistorted normalized image plane.

pt_undist_img

The point in the undistorted image plane

pt_undist_norm

The mapped point in the undistorted normalized image plane

undist_norm_to_dist_norm(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: buffer, pt_dist_norm: buffer) -> None

Maps a point from the undistorted normalized image plane into the distorted normalized image plane.

pt_undist_norm

The point in the undistorted normalized image plane

pt_dist_norm

The mapped point in the distorted normalized image plane

undist_norm_to_img(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: buffer, pt_dist_img: buffer) -> None

Maps a point from the undistorted normalized image plane into the distorted image plane.

pt_undist_norm

The point in the undistorted normalized image plane

pt_dist_img

The mapped point in the distorted image plane

undist_norm_to_undist_img(self: metavision_sdk_cv.CameraGeometry, pt_undist_norm: buffer, pt_undist_img: buffer) -> None

Maps a point from the undistorted normalized image plane into the undistorted image plane.

pt_undist_norm

The point in the undistorted normalized image plane

pt_undist_img

The mapped point in the undistorted image plane

vector_img_to_undist_norm(self: metavision_sdk_cv.CameraGeometry, ctr_dist_img: buffer, vec_dist_img: buffer, ctr_undist_norm: buffer, vec_undist_norm: buffer) -> None

Maps a vector from the distorted image plane into the undistorted normalized image plane.

ctr_dist_img

The vector’s starting point in the distorted image plane

vec_dist_img

The vector in the distorted image plane (the vector must be normalized)

ctr_undist_norm

The vector’s starting point in the undistorted normalized image plane

vec_undist_norm

The vector in the undistorted normalized image plane

note

The output vector is normalized

vector_undist_norm_to_img(self: metavision_sdk_cv.CameraGeometry, ctr_undist_norm: buffer, vec_undist_norm: buffer, ctr_dist_img: buffer, vec_dist_img: buffer) -> None

Maps a vector from the undistorted normalized image plane into the distorted image plane.

ctr_undist_norm

The vector’s starting point in the undistorted normalized image plane

vec_undist_norm

The vector in the undistorted normalized image plane (the vector must be normalized)

ctr_dist_img

The vector’s starting point in the distorted image plane

vec_dist_img

The mapped vector in the distorted image plane

note

The output vector is normalized
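
Example: all mapping methods write their result into a caller-provided buffer. A sketch, assuming camera_geometry is a CameraGeometry instance obtained elsewhere (e.g. from a calibration loading helper) and that plain float32 numpy arrays are accepted as buffers:

    import numpy as np

    pt_dist_img = np.array([320.0, 240.0], dtype=np.float32)  # pixel in the distorted image
    pt_undist_norm = np.zeros(2, dtype=np.float32)

    # Distorted pixel -> undistorted normalized image plane (Z = 1).
    camera_geometry.img_to_undist_norm(pt_dist_img, pt_undist_norm)

    # And back: undistorted normalized point -> distorted pixel.
    pt_back = np.zeros(2, dtype=np.float32)
    camera_geometry.undist_norm_to_img(pt_undist_norm, pt_back)

    width, height = camera_geometry.get_image_size()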

class metavision_sdk_cv.Event2dFrequencyBuffer
numpy(self: metavision_sdk_cv.Event2dFrequencyBuffer, copy: bool = False) -> numpy.ndarray[Metavision::Event2dFrequency<float>]

copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

class metavision_sdk_cv.Event2dFrequencyClusterBuffer
numpy(self: metavision_sdk_cv.Event2dFrequencyClusterBuffer, copy: bool = False) -> numpy.ndarray[Metavision::Event2dFrequencyCluster<float>]

copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

class metavision_sdk_cv.Event2dPeriodBuffer
numpy(self: metavision_sdk_cv.Event2dPeriodBuffer, copy: bool = False) -> numpy.ndarray[Metavision::Event2dPeriod<float>]

copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

class metavision_sdk_cv.EventOpticalFlowBuffer
numpy(self: metavision_sdk_cv.EventOpticalFlowBuffer, copy: bool = False) -> numpy.ndarray[Metavision::EventOpticalFlow]

copy

if True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer
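
Example: the copy flag of the numpy() methods above decides whether the returned array owns its data. A sketch, with output_buf standing for any of these buffers after processing:

    view = output_buf.numpy()               # copy=False (default): zero-copy view, valid while the buffer is unchanged
    snapshot = output_buf.numpy(copy=True)  # independent copy, safe to keep after the buffer is reused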

class metavision_sdk_cv.FlowFrameGeneratorAlgorithm
add_flow_for_frame_update(*args, **kwargs)

Overloaded function.

  1. add_flow_for_frame_update(self: metavision_sdk_cv.FlowFrameGeneratorAlgorithm, flow_np: numpy.ndarray[Metavision::EventOpticalFlow]) -> None

Stores one motion arrow per centroid (several optical flow events may have the same centroid) in the motion arrow map to be displayed later using the update_frame_with_flow method.

  2. add_flow_for_frame_update(self: metavision_sdk_cv.FlowFrameGeneratorAlgorithm, flow_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

Stores one motion arrow per centroid (several optical flow events may have the same centroid) in the motion arrow map to be displayed later using the update_frame_with_flow method.

clear_ids(self: metavision_sdk_cv.FlowFrameGeneratorAlgorithm) -> None
update_frame_with_flow(self: metavision_sdk_cv.FlowFrameGeneratorAlgorithm, display_mat: numpy.ndarray) -> None

Updates the input frame with the centroids’ motion stored in the history.

Clears the history afterwards
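
Example: a sketch of the accumulate-then-draw cycle described above. The constructor arguments, the frame layout and the origin of flow_buf are assumptions:

    import numpy as np
    from metavision_sdk_cv import FlowFrameGeneratorAlgorithm

    # Assumed constructor: sensor width and height.
    flow_frame_gen = FlowFrameGeneratorAlgorithm(640, 480)

    # flow_buf: an EventOpticalFlowBuffer produced by an optical flow algorithm.
    flow_frame_gen.add_flow_for_frame_update(flow_buf)

    # Frame to draw the motion arrows on (assumed 8-bit, 3-channel layout).
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    flow_frame_gen.update_frame_with_flow(frame)  # draws the stored arrows, then clears the history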

class metavision_sdk_cv.FrequencyAlgorithm

Algorithm used to estimate the flickering frequency (Hz) of the pixels of the sensor.

static get_empty_output_buffer() -> metavision_sdk_cv.Event2dFrequencyBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.FrequencyAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.Event2dFrequencyBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.FrequencyAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.Event2dFrequencyBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.FrequencyAlgorithm, diff_thresh: float) -> None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.FrequencyAlgorithm, filter_length: int) -> bool

Sets the filter length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_freq(self: metavision_sdk_cv.FrequencyAlgorithm, max_freq: float) -> bool

Sets maximum frequency to output.

note

The value given has to be strictly greater than the minimum frequency

max_freq

Maximum frequency to output

return

false if value could not be set (invalid value)

set_min_freq(self: metavision_sdk_cv.FrequencyAlgorithm, min_freq: float) -> bool

Sets minimum frequency to output.

note

The value given has to be strictly less than the maximum frequency

min_freq

Minimum frequency to output

return

false if value could not be set (invalid value)

class metavision_sdk_cv.FrequencyClusteringAlgorithm

Frequency clustering algorithm. Processes input frequency events and groups them in clusters.

An event belongs to a cluster if it is connected (8-connectivity) to the cluster, its timestamp is within a certain threshold of the last update of the cluster and its frequency is within a certain threshold of the last updated frequency.

The final position of each cluster is a filtered version of the position of the events that get associated to it.

static get_empty_output_buffer() -> metavision_sdk_cv.Event2dFrequencyClusterBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.FrequencyClusteringAlgorithm, input_np: numpy.ndarray[Metavision::Event2dFrequency<float>], output_buf: metavision_sdk_cv.Event2dFrequencyClusterBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.FrequencyClusteringAlgorithm, input_buf: metavision_sdk_cv.Event2dFrequencyBuffer, output_buf: metavision_sdk_cv.Event2dFrequencyClusterBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
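
Example: a sketch chaining FrequencyAlgorithm and FrequencyClusteringAlgorithm, which is the typical use of the two classes above. The constructor arguments are assumptions (the real constructors may take additional thresholds); cd_events stands for a chunk of CD events obtained elsewhere:

    from metavision_sdk_cv import FrequencyAlgorithm, FrequencyClusteringAlgorithm

    # Assumed constructors: sensor width and height.
    freq_algo = FrequencyAlgorithm(640, 480)
    cluster_algo = FrequencyClusteringAlgorithm(640, 480)

    freq_buf = FrequencyAlgorithm.get_empty_output_buffer()
    cluster_buf = FrequencyClusteringAlgorithm.get_empty_output_buffer()

    # CD events -> frequency events -> frequency clusters.
    freq_algo.process_events(cd_events, freq_buf)
    cluster_algo.process_events(freq_buf, cluster_buf)

    clusters = cluster_buf.numpy()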

class metavision_sdk_cv.NormalFlowEstimator

Class computing the flow’s component in the normal direction of an edge moving in a time surface.

The flow is computed by selecting recent timestamp values in a time surface around a given location, fitting a plane to these timestamps using linear least-squares, and inferring the flow from the plane's estimated parameters.

This class enables rejecting flow estimates based on two quality indicators. The first is the plane-fitting error on the timestamps of the time surface, which must lie within a configured tolerance. The second, denoted spatial consistency, measures the consistency between the radius of the considered neighborhood and the distance covered by the edge during the time period observed in the local time surface: the flow estimates the speed of the local edge, from which we can compute the distance covered between the timestamp of the oldest event used for plane fitting and the center timestamp. The ratio between this covered distance and the neighborhood radius acts as a quality indicator, and flow estimates are rejected when this spatial-consistency ratio lies outside a configured range.

get_flow(self: metavision_sdk_cv.NormalFlowEstimator, time_surface: metavision_sdk_core.MostRecentTimestampBuffer, x: int, y: int, c: int = 0, time_limit: int = -1) -> tuple

Tries to estimate the visual flow at the given location

time_surface

Input time surface

x

Abscissa at which the flow is to be estimated

y

Ordinate at which the flow is to be estimated

c

Polarity at which timestamps are to be sampled. If the value is -1, the polarity is automatically determined by looking at the most recent timestamp at the given location

time_limit

Optional parameter that contains the oldest timestamp used during the flow estimation if the estimation has succeeded

return

tuple (True, vx, vy) if the estimation has succeeded, (False, None, None) otherwise. vx and vy are expressed in pixels/s
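
Example: a sketch of querying the estimator on a time surface. How the NormalFlowEstimator is constructed is not covered on this page, so it is assumed to exist already; the MostRecentTimestampBuffer constructor arguments (rows, cols, channels) are also an assumption:

    from metavision_sdk_core import MostRecentTimestampBuffer

    # Single-channel time surface for a 640x480 sensor, filled from an event stream elsewhere.
    time_surface = MostRecentTimestampBuffer(480, 640, 1)

    # estimator: an already-constructed metavision_sdk_cv.NormalFlowEstimator instance.
    success, vx, vy = estimator.get_flow(time_surface, 320, 240, 0)
    if success:
        print("normal flow: ({:.1f}, {:.1f}) px/s".format(vx, vy))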

class metavision_sdk_cv.PeriodAlgorithm

Algorithm used to estimate the flickering period of the pixels of the sensor.

static get_empty_output_buffer() -> metavision_sdk_cv.Event2dPeriodBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.PeriodAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.Event2dPeriodBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.PeriodAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.Event2dPeriodBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

set_difference_threshold(self: metavision_sdk_cv.PeriodAlgorithm, diff_thresh: float) -> None

Sets the difference allowed between two periods to be considered the same.

diff_thresh

Maximum difference allowed between two successive periods to be considered the same

set_filter_length(self: metavision_sdk_cv.PeriodAlgorithm, filter_length: int) -> bool

Sets the filter length.

filter_length

Number of values in the output median filter

return

false if value could not be set (invalid value)

set_max_period(self: metavision_sdk_cv.PeriodAlgorithm, max_period: float) -> bool

Sets maximum period to output.

note

The value max_period has to be larger than the minimum period

max_period

Maximum period to output

return

false if value could not be set (invalid value)

set_min_period(self: metavision_sdk_cv.PeriodAlgorithm, min_period: float) -> bool

Sets minimum period to output.

note

The value min_period has to be smaller than the maximum period

min_period

Minimum period (us) to output

return

false if value could not be set (invalid value)

class metavision_sdk_cv.RoiMaskAlgorithm

Class that only propagates events which are contained in a certain region of interest.

The Region Of Interest (ROI) is defined by a mask (cv::Mat). An event is validated if the mask at the event position stores a positive number.

Alternatively, the user can enable rectangular regions, each defined by its upper left and bottom right corners, that propagate any event inside them.

enable_rectangle(self: metavision_sdk_cv.RoiMaskAlgorithm, x0: int, y0: int, x1: int, y1: int) -> None

Enables a rectangular region, defined by its upper left and bottom right corners, that propagates any event inside it.

x0

X coordinate of the upper left corner

y0

Y coordinate of the upper left corner

x1

X coordinate of the lower right corner

y1

Y coordinate of the lower right corner

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

max_height(self: metavision_sdk_cv.RoiMaskAlgorithm) -> int

Returns the maximum number of pixels (height) of the mask.

return

Maximum height of the mask

max_width(self: metavision_sdk_cv.RoiMaskAlgorithm) -> int

Returns the maximum number of pixels (width) of the mask.

return

Maximum width of the mask

pixel_mask(self: metavision_sdk_cv.RoiMaskAlgorithm) -> numpy.ndarray[numpy.float64]

Returns the pixel mask of the filter.

return

cv::Mat containing the pixel mask of the filter

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.RoiMaskAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.RoiMaskAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.RoiMaskAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

set_pixel_mask(self: metavision_sdk_cv.RoiMaskAlgorithm, mask: numpy.ndarray[numpy.float64]) -> None

Sets the pixel mask of the filter.

mask

Pixel mask to be used while filtering
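
Example: a sketch combining the pixel mask and the rectangular ROI described above. The constructor is assumed to take the pixel mask; events_in stands for a chunk of CD events obtained elsewhere:

    import numpy as np
    from metavision_sdk_cv import RoiMaskAlgorithm

    # Full-sensor mask: positive values mark pixels whose events are kept.
    mask = np.zeros((480, 640), dtype=np.float64)
    mask[100:200, 150:350] = 1.0

    roi_filter = RoiMaskAlgorithm(mask)  # assumed constructor
    roi_filter.set_pixel_mask(mask)      # the mask can also be replaced later

    # Additional rectangle: upper left (400, 50), lower right (600, 150).
    roi_filter.enable_rectangle(400, 50, 600, 150)

    out_buf = RoiMaskAlgorithm.get_empty_output_buffer()
    roi_filter.process_events(events_in, out_buf)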

class metavision_sdk_cv.RotateEventsAlgorithm

Class that rotates an event stream.

Note

We assume the rotation to happen with respect to the center of the image

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.RotateEventsAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.RotateEventsAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(*args, **kwargs)

Overloaded function.

  1. process_events_(self: metavision_sdk_cv.RotateEventsAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events used as input/output. Its content will be overwritten

  2. process_events_(self: metavision_sdk_cv.RotateEventsAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

set_rotation(self: metavision_sdk_cv.RotateEventsAlgorithm, new_angle: float) -> None

Sets the new rotation angle.

new_angle

New angle in rad
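
Example: a sketch of rotating a chunk of events in place. The constructor arguments (largest valid x/y coordinates and an initial angle) are assumptions; events_np stands for a numpy structured array of CD events:

    import math
    from metavision_sdk_cv import RotateEventsAlgorithm

    # Assumed constructor: largest valid x and y coordinates plus an initial angle in radians.
    rotate = RotateEventsAlgorithm(639, 479, 0.0)
    rotate.set_rotation(math.pi / 2)  # quarter turn about the image center

    # In-place variant: the array is overwritten, which is valid here since a
    # rotation keeps the number of events unchanged.
    rotate.process_events_(events_np)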

class metavision_sdk_cv.SparseOpticalFlowAlgorithm
static get_empty_output_buffer() -> metavision_sdk_cv.EventOpticalFlowBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.SparseOpticalFlowAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_cv.EventOpticalFlowBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()
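
Example: a sketch of producing optical flow events and inspecting the result. The constructor arguments are assumptions (the real constructor may take tuning parameters); cd_events stands for a chunk of CD events obtained elsewhere:

    from metavision_sdk_cv import SparseOpticalFlowAlgorithm

    # Assumed constructor: sensor width and height.
    flow_algo = SparseOpticalFlowAlgorithm(640, 480)

    flow_buf = SparseOpticalFlowAlgorithm.get_empty_output_buffer()
    flow_algo.process_events(cd_events, flow_buf)

    flow_np = flow_buf.numpy()
    print(flow_np.dtype.names)  # fields of the EventOpticalFlow structured array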

class metavision_sdk_cv.SpatioTemporalContrastAlgorithm

The SpatioTemporalContrast Filter is a noise filter using the exponential response of a pixel to a change of light to filter out wrong detections and trails.

For an event to be forwarded, it needs to be preceded by another one within a given time window; this ensures that the spatio-temporal contrast detection is strong enough. It is also possible to then cut all the following events up to a change of polarity in the stream for that particular pixel (strong trail removal). Note that this will remove signal if two successive edges of the same polarity are detected (which should not happen frequently).

note

The timestamp may be stored in different types: 64, 32 or 16 bits. The behavior may vary from one size to the other since the number of significant bits changes. Before using a version with fewer than 32 bits, check that the behavior is still valid for your use case.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.SpatioTemporalContrastAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
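
Example: a sketch of constructing and applying the filter. The constructor arguments (sensor size, time threshold in microseconds and the strong trail removal flag) are assumptions; cd_events stands for a chunk of CD events obtained elsewhere:

    from metavision_sdk_cv import SpatioTemporalContrastAlgorithm

    # Assumed constructor: sensor size, time window in us, strong trail removal enabled.
    stc = SpatioTemporalContrastAlgorithm(640, 480, 10000, True)

    stc_out = SpatioTemporalContrastAlgorithm.get_empty_output_buffer()
    stc.process_events(cd_events, stc_out)
    denoised = stc_out.numpy()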

class metavision_sdk_cv.TrailFilterAlgorithm

Filter that accepts an event either if the last event at the same coordinates was of different polarity, or if it happened at least a given amount of time after the last event.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.TrailFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.TrailFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_cv.TrailFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

class metavision_sdk_cv.TransposeEventsAlgorithm

Class that switches X and Y coordinates of an event stream. This filter changes the dimensions of the corresponding frame (width and height are switched)

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_cv.TransposeEventsAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_cv.TransposeEventsAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(*args, **kwargs)

Overloaded function.

  1. process_events_(self: metavision_sdk_cv.TransposeEventsAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events used as input/output. Its content will be overwritten

  2. process_events_(self: metavision_sdk_cv.TransposeEventsAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()