SDK Core Python bindings API

class metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, height: int, width: int, thr_var_per_event: float = 0.0005, downsampling_factor: int = 2) None

Class used to split a stream of events into slices of variable duration and variable number of events.

This algorithm produces reasonably sharp slices of events, based on the content of the stream itself. Internally, it computes the variance per event as a criterion for the sharpness of the current slice of events. An additional criterion is the maximum proportion of active pixels containing both positive and negative events.

Constructs a new AdaptiveRateEventsSplitterAlgorithm.

height

height of the input frame of events

width

width of the input frame of events

thr_var_per_event

minimum variance per pixel value to reach before considering splitting the slice

downsampling_factor

performs a downsampling of the input before computing the statistics. Original coordinates will be multiplied by 2**(-downsampling_factor)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> bool

Takes a chunk of events (numpy array of EventCD) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

  2. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> bool

Takes a chunk of events (EventCDBuffer) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

retrieve_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

Retrieves the events (EventCDBuffer) and reinitializes the state of the EventsSplitter.
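A typical processing loop can be sketched as follows. The structured dtype below is an assumption mirroring metavision_sdk_base.EventCD, and the SDK calls are guarded so the snippet also runs where the SDK is not installed:

```python
import numpy as np

# Synthetic chunk of CD events; the field order ('x', 'y', 'p', 't') is mandatory.
# The dtype is an assumption mirroring metavision_sdk_base.EventCD.
events = np.zeros(4, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'] = [10, 20, 30, 40]
events['y'] = [5, 15, 25, 35]
events['p'] = [0, 1, 0, 1]
events['t'] = [100, 200, 300, 400]

try:
    import metavision_sdk_core as mv_core

    splitter = mv_core.AdaptiveRateEventsSplitterAlgorithm(height=480, width=640)
    out_buf = splitter.get_empty_output_buffer()
    # Feed chunks until the current slice is considered sharp enough
    if splitter.process_events(events):
        splitter.retrieve_events(out_buf)  # fills out_buf and resets the splitter
        slice_np = out_buf.numpy()         # structured-array view of the slice
except ImportError:
    pass  # SDK not installed; the setup above still shows the expected input format
```

retrieve_events() both fills the output buffer and reinitializes the splitter, so the same buffer can be reused for the next slice.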

class metavision_sdk_core.BaseFrameGenerationAlgorithm

static bg_color_default() tuple

Returns default Prophesee dark palette background color.

static generate_frame(*args, **kwargs)

Overloaded function.

  1. generate_frame(events: numpy.ndarray[metavision_sdk_base._EventCD_decode], frame: numpy.ndarray, accumulation_time_us: int = 0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) -> None

Stand-alone (static) method to generate a frame from events

All events in the interval ]t - dt, t] are used, where t is the timestamp of the last event in the buffer and dt is accumulation_time_us. If accumulation_time_us is kept to 0, all input events are used. If there are no events, a frame filled with the background color will be generated

events

Numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

frame

Pre-allocated frame that will be filled with CD events. It must have the same geometry as the input event source, and the color corresponding to the given palette (3 channels by default)

accumulation_time_us

Time range of events to update the frame with (in us). 0 to use all events.

palette

The Prophesee color palette to use

  2. generate_frame(events: Metavision::RollingEventBuffer<Metavision::EventCD>, frame: numpy.ndarray, accumulation_time_us: int = 0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) -> None

Stand-alone (static) method to generate a frame from events

All events in the interval ]t - dt, t] are used, where t is the timestamp of the last event in the buffer and dt is accumulation_time_us. If accumulation_time_us is kept to 0, all input events are used. If there are no events, a frame filled with the background color will be generated

events

Rolling buffer of events (Metavision::RollingEventBuffer<Metavision::EventCD>)

frame

Pre-allocated frame that will be filled with CD events. It must have the same geometry as the input event source, and the color corresponding to the given palette (3 channels by default)

accumulation_time_us

Time range of events to update the frame with (in us). 0 to use all events.

palette

The Prophesee color palette to use
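As a sketch of the first overload (numpy input), assuming a 640x480 sensor and the default Dark palette; frame allocation is the caller's responsibility, and the import is guarded in case the SDK is unavailable:

```python
import numpy as np

height, width = 480, 640
# Pre-allocated output image: 3 channels for a color palette (the default)
frame = np.zeros((height, width, 3), dtype=np.uint8)

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(2, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [10, 300], [20, 240]
events['p'], events['t'] = [1, 0], [1000, 5000]

try:
    import metavision_sdk_core as mv_core

    # accumulation_time_us left at 0: all input events are used
    mv_core.BaseFrameGenerationAlgorithm.generate_frame(events, frame)
except ImportError:
    pass  # SDK not installed; frame stays filled with zeros
```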

get_dimension(self: metavision_sdk_core.BaseFrameGenerationAlgorithm) tuple

Gets the frame’s dimension, a tuple (height, width, channels)

static off_color_default() tuple

Returns default Prophesee dark palette negative event color.

static on_color_default() tuple

Returns default Prophesee dark palette positive event color.

set_color_palette(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, palette: metavision_sdk_core.ColorPalette) None

Sets the color palette used to generate the frame.

palette

The Prophesee color palette to use

set_colors(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, background_color: list[int], on_color: list[int], off_color: list[int], colored: bool = True) None

Sets the colors used to generate the frame.

bg_color

Color used as background, when no events were received for a pixel

on_color

Color used for on events

off_color

Color used for off events

colored

If True, the generated frame will be in color (three channels); otherwise it will be grayscale (single channel)

class metavision_sdk_core.ContrastMapGenerationAlgorithm(self: metavision_sdk_core.ContrastMapGenerationAlgorithm, width: int, height: int, contrast_on: float = 1.2000000476837158, contrast_off: float = -1.0) None

Constructor.

width

Width of the input event stream.

height

Height of the input event stream.

contrast_on

Contrast value for ON events.

contrast_off

Contrast value for OFF events. If non-positive, the contrast is set to the inverse of the contrast_on value.

generate(*args, **kwargs)

Overloaded function.

  1. generate(self: metavision_sdk_core.ContrastMapGenerationAlgorithm, frame: numpy.ndarray) -> None

Generates the contrast map and resets the internal state.

contrast_map

Output contrast map, swapped with the one maintained internally.

  2. generate(self: metavision_sdk_core.ContrastMapGenerationAlgorithm, frame: numpy.ndarray, tonemapping_factor: float, tonemapping_bias: float) -> None

Generates the tonemapped contrast map and resets the internal state.

contrast_map_tonemapped

Output tonemapped contrast map.

tonemapping_factor

Tonemapping factor.

tonemapping_bias

Tonemapping bias.

process_events(self: metavision_sdk_core.ContrastMapGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

reset(self: metavision_sdk_core.ContrastMapGenerationAlgorithm) None

Resets the internal state.
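A minimal sketch of the process_events/generate cycle; the float32 single-channel output layout is an assumption, and the SDK import is guarded:

```python
import numpy as np

width, height = 640, 480
# Output contrast map; the float32 single-channel layout is an assumption
contrast_map = np.ones((height, width), dtype=np.float32)

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(3, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [1, 2, 3], [1, 2, 3]
events['p'], events['t'] = [1, 1, 0], [10, 20, 30]

try:
    import metavision_sdk_core as mv_core

    algo = mv_core.ContrastMapGenerationAlgorithm(width, height)
    algo.process_events(events)
    algo.generate(contrast_map)  # swapped with the internally maintained map
except ImportError:
    pass
```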

class metavision_sdk_core.ColorPalette(self: metavision_sdk_core.ColorPalette, value: int) None

Members:

Light

Dark

CoolWarm

Gray

class metavision_sdk_core.ColorType(self: metavision_sdk_core.ColorType, value: int) None

Members:

Background

Positive

Negative

Auxiliary

class metavision_sdk_core.EventPreprocessor

Processes events to update a data tensor.

This is the base class. It handles the rescaling of the events if necessary and provides accessors to get the shape of the output tensor. Derived classes implement the computation. Calling process_events() on this base class triggers the computation to update the provided tensor. This tensor can typically be used as input of a neural network.

static create_DiffProcessor(input_event_width: int, input_event_height: int, max_incr_per_pixel: float = 5, clip_value_after_normalization: float = 1.0, scale_width: float = 1.0, scale_height: float = 1.0) metavision_sdk_core.EventPreprocessor

Constructor.

event_input_width

Maximum width of input events

event_input_height

Maximum height of input events

max_incr_per_pixel

Maximum number of increments per pixel. This is used to normalize the contribution of each event

clip_value_after_normalization

Clipping value to apply after normalization (typically: 1.)

width_scale

Scale on the width previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

height_scale

Scale on the height previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

static create_EventCubeProcessor(delta_t: int, input_event_width: int, input_event_height: int, num_utbins: int, split_polarity: bool, max_incr_per_pixel: float = 63.75, clip_value_after_normalization: float = 1.0, scale_width: float = 1.0, scale_height: float = 1.0) metavision_sdk_core.EventPreprocessor

Constructor.

delta_t

Delta time used to accumulate events inside the frame

event_input_width

Width of the event stream

event_input_height

Height of the event stream

num_utbins

Number of micro temporal bins

split_polarity

Process positive and negative events into separate channels

max_incr_per_pixel

Maximum number of increments per pixel. This is used to normalize the contribution of each event

clip_value_after_normalization

Clipping value to apply after normalization (typically: 1.)

width_scale

Scale on the width previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

height_scale

Scale on the height previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

static create_HardwareDiffProcessor(input_event_width: int, input_event_height: int, min_val: int, max_val: int, allow_rollover: bool = True) metavision_sdk_core.EventPreprocessor

Constructor.

width

Width of the event stream

height

Height of the event stream

min_val

Lower representable value

max_val

Higher representable value

allow_rollover

If true, a roll-over is performed when the minimal or maximal value is reached. Otherwise, the pixel value saturates.

static create_HardwareHistoProcessor(input_event_width: int, input_event_height: int, neg_saturation: int = 255, pos_saturation: int = 255) metavision_sdk_core.EventPreprocessor

Constructor.

width

Width of the event stream

height

Height of the event stream

neg_saturation

Maximum value for the count of negative events in the histogram at each pixel

pos_saturation

Maximum value for the count of positive events in the histogram at each pixel

static create_HistoProcessor(input_event_width: int, input_event_height: int, max_incr_per_pixel: float = 5, clip_value_after_normalization: float = 1.0, use_CHW: bool = True, scale_width: float = 1.0, scale_height: float = 1.0) metavision_sdk_core.EventPreprocessor

Constructor.

event_input_width

Maximum width of input events

event_input_height

Maximum height of input events

max_incr_per_pixel

Maximum number of increments per pixel. This is used to normalize the contribution of each event

clip_value_after_normalization

Clipping value to apply after normalization (typically: 1.)

use_CHW

Boolean to define frame dimension order, True if the fields’ frame order is (Channel, Height, Width)

width_scale

Scale on the width previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

height_scale

Scale on the height previously applied to input events. This factor is considered to modulate the contribution of each event at its coordinates.

static create_TimeSurfaceProcessor(input_event_width: int, input_event_height: int, split_polarity: bool = True) metavision_sdk_core.EventPreprocessor

Creates a TimeSurfaceProcessor instance.

input_event_width

Width of the input event stream.

input_event_height

Height of the input event stream.

split_polarity

(optional) If True, polarities will be managed separately in the TimeSurface. Else, a single channel will be used for both polarities.

get_frame_channels(self: metavision_sdk_core.EventPreprocessor) int

Returns the number of channels of the output frame.

get_frame_height(self: metavision_sdk_core.EventPreprocessor) int

Returns the height of the output frame.

get_frame_shape(self: metavision_sdk_core.EventPreprocessor) list[int]

Returns the frame shape.

get_frame_size(self: metavision_sdk_core.EventPreprocessor) int

Returns the number of values in the output frame.

get_frame_width(self: metavision_sdk_core.EventPreprocessor) int

Returns the width of the output frame.

init_output_tensor(self: metavision_sdk_core.EventPreprocessor) numpy.ndarray

Creates and returns an output tensor of the expected shape and type, which can be passed as frame_tensor_np to process_events().

is_CHW(self: metavision_sdk_core.EventPreprocessor) bool

Returns true if the output tensor shape has CHW layout.

process_events(self: metavision_sdk_core.EventPreprocessor, cur_frame_start_ts: int, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], frame_tensor_np: numpy.ndarray) None

Takes a chunk of events (numpy array of EventCD) and updates the frame_tensor (numpy array of float)
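A sketch using the HistoProcessor variant; parameter names follow the signatures above, the SDK import is guarded, and init_output_tensor() is assumed to produce a tensor compatible with process_events():

```python
import numpy as np

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(2, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [10, 11], [10, 11]
events['p'], events['t'] = [1, 0], [100, 200]

try:
    import metavision_sdk_core as mv_core

    proc = mv_core.EventPreprocessor.create_HistoProcessor(
        input_event_width=640, input_event_height=480)
    tensor = proc.init_output_tensor()      # tensor of shape get_frame_shape()
    proc.process_events(0, events, tensor)  # cur_frame_start_ts = 0
    layout_is_chw = proc.is_CHW()           # channel-first layout by default
except ImportError:
    pass
```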

class metavision_sdk_core.EventRescalerAlgorithm(self: metavision_sdk_core.EventRescalerAlgorithm, scale_width: float, scale_height: float) None

Base class to operate a rescaling of events locations in both horizontal and vertical directions.

Constructor.

scale_width

The horizontal scale for events

scale_height

The vertical scale for events

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.EventRescalerAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.EventRescalerAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.EventRescalerAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten
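The rescaling step can be sketched as follows (guarded SDK import; the EventCD-like dtype is an assumption):

```python
import numpy as np

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(3, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [100, 200, 300], [80, 160, 240]
events['p'], events['t'] = [1, 0, 1], [10, 20, 30]

try:
    import metavision_sdk_core as mv_core

    rescaler = mv_core.EventRescalerAlgorithm(0.5, 0.5)  # halve both coordinates
    out_buf = rescaler.get_empty_output_buffer()
    rescaler.process_events(events, out_buf)
    rescaled = out_buf.numpy()  # event locations scaled by the two factors
except ImportError:
    pass
```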

class metavision_sdk_core.EventsIntegrationAlgorithm(self: metavision_sdk_core.EventsIntegrationAlgorithm, width: int, height: int, decay_time: int = 1000000, contrast_on: float = 1.2000000476837158, contrast_off: float = -1.0, tonemapping_max_ev_count: int = 5, gaussian_blur_kernel_radius: int = 1, diffusion_weight: float = 0.0) None

Constructor.

width

Width of the input event stream.

height

Height of the input event stream.

decay_time

Time constant for the exponential decay of the integrated grayscale values.

contrast_on

Contrast value for ON events.

contrast_off

Contrast value for OFF events. If non-positive, the contrast is set to the inverse of the contrast_on value.

tonemapping_max_ev_count

Maximum number of events to consider for tonemapping to 8-bits range.

gaussian_blur_kernel_radius

Radius of the Gaussian blur kernel. If non-positive, no blur is applied.

diffusion_weight

Weight for slowly diffusing the 4-neighboring intensities into the central one, to smooth reconstructed intensities in the case of a static camera. Clamped to [0; 0.25], 0 meaning no diffusion and 0.25 meaning ignoring the central intensity.

generate(self: metavision_sdk_core.EventsIntegrationAlgorithm, frame: numpy.ndarray) None

Generates the grayscale frame at the timestamp of the last received event.

process_events(self: metavision_sdk_core.EventsIntegrationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

reset(self: metavision_sdk_core.EventsIntegrationAlgorithm) None

Resets the internal state.
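A minimal integration sketch; the single-channel uint8 output frame is an assumption, and the SDK import is guarded:

```python
import numpy as np

width, height = 640, 480
# Grayscale output; a single-channel uint8 frame is an assumption
gray = np.zeros((height, width), dtype=np.uint8)

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(2, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [10, 10], [10, 10]
events['p'], events['t'] = [1, 1], [100, 200]

try:
    import metavision_sdk_core as mv_core

    integrator = mv_core.EventsIntegrationAlgorithm(width, height)
    integrator.process_events(events)
    integrator.generate(gray)  # frame at the last received event's timestamp
except ImportError:
    pass
```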

class metavision_sdk_core.FlipXAlgorithm(self: metavision_sdk_core.FlipXAlgorithm, width_minus_one: int) None

Class that mirrors the X axis of an event stream.

The transfer function of this filter impacts only the X coordinates of the Event2d by: x = width_minus_one - x

Builds a new FlipXAlgorithm object with the given width.

width_minus_one

Maximum X coordinate of the events (width-1)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipXAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten
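The transfer function x = width_minus_one - x can be checked directly in numpy alongside the SDK call (guarded import; the EventCD-like dtype is an assumption):

```python
import numpy as np

WIDTH = 640
# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(3, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'] = [0, 100, 639]

# The documented transfer function, applied in plain numpy:
mirrored_x = (WIDTH - 1) - events['x'].astype(np.int64)

try:
    import metavision_sdk_core as mv_core

    flip = mv_core.FlipXAlgorithm(WIDTH - 1)
    buf = flip.get_empty_output_buffer()
    flip.process_events(events, buf)  # buf.numpy()['x'] should match mirrored_x
except ImportError:
    pass
```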

class metavision_sdk_core.FlipYAlgorithm(self: metavision_sdk_core.FlipYAlgorithm, height_minus_one: int) None

Class that mirrors the Y axis of an event stream.

The transfer function of this filter impacts only the Y coordinates of the Event2d by: y = height_minus_one - y

Builds a new FlipYAlgorithm object with the given height.

height_minus_one

Maximum Y coordinate of the events (height-1)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipYAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

class metavision_sdk_core.MostRecentTimestampBuffer(self: metavision_sdk_core.MostRecentTimestampBuffer, rows: int, cols: int, channels: int = 1) None

Class representing a buffer of the most recent timestamps observed at each pixel of the camera.

A most recent timestamp buffer is also called time surface.

note

The interface follows the one of cv::Mat

Initialization constructor.

rows

Sensor’s height

cols

Sensor’s width

channels

Number of channels

property channels

Gets the number of channels of the buffer.

property cols

Gets the number of columns of the buffer.

generate_img_time_surface(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) None

Generates a CV_8UC1 image of the time surface for the 2 channels.

Side-by-side: negative polarity time surface, positive polarity time surface. The time surface is normalized between last_ts (255) and last_ts - delta_t (0)

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image

generate_img_time_surface_collapsing_channels(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) None

Generates a CV_8UC1 image of the time surface, merging the 2 channels.

The time surface is normalized between last_ts (255) and last_ts - delta_t (0)

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image

max_across_channels_at(self: metavision_sdk_core.MostRecentTimestampBuffer, y: int, x: int) int

Retrieves the maximum timestamp across channels at the specified pixel.

y

The pixel’s ordinate

x

The pixel’s abscissa

return

The maximum timestamp at that pixel across all the channels in the buffer

numpy(self: metavision_sdk_core.MostRecentTimestampBuffer, copy: bool = False) numpy.ndarray[numpy.int64]

Converts to a numpy array

property rows

Gets the number of rows of the buffer.

set_to(self: metavision_sdk_core.MostRecentTimestampBuffer, ts: int) None

Sets all elements of the timestamp buffer to a constant.

ts

The constant timestamp value
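A short sketch; the side-by-side output width (twice the sensor width) follows the generate_img_time_surface description above, the CV_8UC1 output maps to uint8, and the SDK import is guarded:

```python
import numpy as np

rows, cols = 480, 640
# Side-by-side negative/positive time surfaces: twice the sensor width,
# CV_8UC1 output (uint8, single channel) - both derived from the description above
out = np.zeros((rows, 2 * cols), dtype=np.uint8)

try:
    import metavision_sdk_core as mv_core

    ts_buf = mv_core.MostRecentTimestampBuffer(rows=rows, cols=cols, channels=2)
    ts_buf.set_to(0)  # constant initial timestamp everywhere
    ts_buf.generate_img_time_surface(100000, 50000, out)
    raw = ts_buf.numpy()  # int64 view; pass copy=True for a deep copy
except ImportError:
    pass
```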

class metavision_sdk_core.OnDemandFrameGenerationAlgorithm(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, width: int, height: int, accumulation_time_us: int = 10000, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) None

Constructor.

width

Sensor’s width (in pixels)

height

Sensor’s height (in pixels)

accumulation_time_us

Time range of events to update the frame with (in us) (See set_accumulation_time_us)

palette

The Prophesee color palette to use

generate(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, ts: int, frame: numpy.ndarray) None

Generates a frame.

ts

Timestamp at which to generate the frame

frame

Frame that will be filled with CD events

allocate

Allocates the frame if true. Otherwise, the user must ensure the validity of the input frame. This is to be used when the data ptr must not change (external allocation, ROI over another cv::Mat, …)

warning

This method is expected to be called with timestamps increasing monotonically.

invalid_argument

if ts is older than the last frame generation and the reset method hasn’t been called in the meantime

invalid_argument

if the frame doesn’t have the expected type and geometry

get_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm) int

Returns the current accumulation time (in us).

process_events(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

set_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, accumulation_time_us: int) None

Sets the accumulation time (in us) to use to generate a frame.

The generated frame will only hold events in the interval [t - dt, t[, where t is the timestamp at which the frame is generated and dt the accumulation time. However, if accumulation_time_us is set to 0, all events since the last generated frame are used

accumulation_time_us

Time range of events to update the frame with (in us)

class metavision_sdk_core.PeriodicFrameGenerationAlgorithm(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, sensor_width: int, sensor_height: int, accumulation_time_us: int = 10000, fps: float = 0.0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) None

Inherits BaseFrameGenerationAlgorithm. Algorithm that generates frames from events at a fixed rate (fps). The reference clock used is the one of the input events.

Parameters
  • sensor_width (int) – Sensor’s width (in pixels)

  • sensor_height (int) – Sensor’s height (in pixels)

  • accumulation_time_us (timestamp) – Accumulation time (in us) (see set_accumulation_time_us)

  • fps (float) – The fps at which to generate the frames. The time reference used is the one from the input events (see set_fps)

  • palette (ColorPalette) – The Prophesee color palette to use (see set_color_palette)

Raises invalid_argument if the input fps is not positive or if the input accumulation time is not strictly positive.

force_generate(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) None

Forces the generation of a frame for the current period with the input events that have been processed.

This is intended to be used at the end of a process if one wants to generate frames with the remaining events. This effectively calls the output callback and updates the next timestamp at which a frame is to be generated

get_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) int

Returns the current accumulation time (in us).

get_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) float

Returns the current fps at which frames are generated.

process_events(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

reset(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) None

Resets the internal states.

warning

the user is responsible for explicitly calling force_generate if needed to retrieve the frame for the last processed events.

set_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, accumulation_time_us: int) None

Sets the accumulation time (in us) to use to generate a frame.

The generated frame will only hold events in the interval [t - dt, t[, where t is the timestamp at which the frame is generated and dt the accumulation time

accumulation_time_us

Time range of events to update the frame with (in us)

set_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, fps: float) None

Sets the fps at which to generate frames and thus the frequency of the asynchronous calls.

The time reference used is the one from the input events

fps

The fps to use. If the fps is 0, the current accumulation time is used to compute it.

invalid_argument

If the input fps is negative

set_output_callback(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, arg0: object) None

Sets a callback to retrieve the frame

skip_frames_up_to(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, ts: int) None

Skips the generation of frames up to the timestamp ts.

ts

Timestamp up to which only one image will be generated, i.e. the closest full timeslice before this timestamp
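A sketch of the callback-driven flow; the (ts, frame) callback signature is an assumption, and the SDK import is guarded:

```python
import numpy as np

frames = []  # collected through the output callback

def on_frame(ts, frame):
    # Assumed callback signature: timestamp and generated frame
    frames.append((ts, frame.copy()))

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(2, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['x'], events['y'] = [5, 6], [5, 6]
events['p'], events['t'] = [1, 0], [0, 50000]

try:
    import metavision_sdk_core as mv_core

    gen = mv_core.PeriodicFrameGenerationAlgorithm(640, 480,
                                                   accumulation_time_us=10000,
                                                   fps=25.0)
    gen.set_output_callback(on_frame)
    gen.process_events(events)  # callback fires for each completed period
    gen.force_generate()        # flush the last, partial period
except ImportError:
    pass
```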

class metavision_sdk_core.PolarityFilterAlgorithm(self: metavision_sdk_core.PolarityFilterAlgorithm, polarity: int = 0) None

Class filter that only propagates events of a certain polarity.

Creates a PolarityFilterAlgorithm class with the given polarity.

polarity

Polarity to keep

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
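Keeping only positive events can be mirrored in plain numpy next to the guarded SDK call (the EventCD-like dtype is an assumption):

```python
import numpy as np

# EventCD-like structured array; field order ('x', 'y', 'p', 't') is mandatory
events = np.zeros(4, dtype=[('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events['p'] = [0, 1, 1, 0]
events['t'] = [1, 2, 3, 4]

# Plain-numpy equivalent of keeping only positive (p == 1) events
positives = events[events['p'] == 1]

try:
    import metavision_sdk_core as mv_core

    filt = mv_core.PolarityFilterAlgorithm(polarity=1)
    buf = filt.get_empty_output_buffer()
    filt.process_events(events, buf)  # buf.numpy() holds only p == 1 events
except ImportError:
    pass
```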

class metavision_sdk_core.PolarityInverterAlgorithm(self: metavision_sdk_core.PolarityInverterAlgorithm) None

Class that implements a Polarity Inverter filter.

The filter changes the polarity of all the filtered events.

Builds a new PolarityInverterAlgorithm object.

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityInverterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten
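Since inverting polarity never changes the number of events, the in-place process_events_ overload is the natural fit here. A minimal numpy sketch of the same operation (EVENT_DTYPE field widths are an assumption; only the field order is documented):

```python
import numpy as np

# Hypothetical EventCD-like dtype; field order ('x', 'y', 'p', 't') is documented.
EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

events = np.array([(5, 5, 0, 100), (6, 6, 1, 101)], dtype=EVENT_DTYPE)

# Flip each polarity in place: 0 becomes 1 and 1 becomes 0,
# overwriting the array contents as process_events_ would.
events['p'] = 1 - events['p']
```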

class metavision_sdk_core.RoiFilterAlgorithm(self: metavision_sdk_core.RoiFilterAlgorithm, x0: int, y0: int, x1: int, y1: int, output_relative_coordinates: bool = False) None

Class that only propagates events which are contained in a certain Region of Interest (ROI) defined by the coordinates of the upper left corner and the lower right corner.

Builds a new RoiFilterAlgorithm object which propagates events in the given window.

x0

X coordinate of the upper left corner of the ROI window

y0

Y coordinate of the upper left corner of the ROI window

x1

X coordinate of the lower right corner of the ROI window

y1

Y coordinate of the lower right corner of the ROI window

output_relative_coordinates

If false, events that passed the ROI filter are expressed in the whole image coordinates. If true, they are expressed in the ROI coordinates system (i.e. top left of the ROI region is (0,0))

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

is_resetting(self: metavision_sdk_core.RoiFilterAlgorithm) bool

Returns true if the algorithm returns events expressed in coordinates relative to the ROI.

return

true if the algorithm is resetting the filtered events

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.RoiFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

property x0

Returns the x coordinate of the upper left corner of the ROI window.

Returns

X coordinate of the upper left corner

property x1

Returns the x coordinate of the lower right corner of the ROI window.

Returns

X coordinate of the lower right corner

property y0

Returns the y coordinate of the upper left corner of the ROI window.

Returns

Y coordinate of the upper left corner

property y1

Returns the y coordinate of the lower right corner of the ROI window.

Returns

Y coordinate of the lower right corner
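To make the output_relative_coordinates option concrete, here is a pure-numpy sketch of what the ROI filter computes. The roi_filter helper and the EVENT_DTYPE field widths are illustrative assumptions; only the (x0, y0, x1, y1) semantics and the field order come from the documentation above.

```python
import numpy as np

EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

def roi_filter(events, x0, y0, x1, y1, output_relative_coordinates=False):
    """Keep events inside the window [x0, x1] x [y0, y1]; optionally
    shift the survivors into the ROI coordinate system."""
    mask = ((events['x'] >= x0) & (events['x'] <= x1) &
            (events['y'] >= y0) & (events['y'] <= y1))
    out = events[mask].copy()
    if output_relative_coordinates:
        out['x'] -= x0  # top left of the ROI becomes (0, 0)
        out['y'] -= y0
    return out

events = np.array([(10, 10, 1, 0), (50, 50, 0, 1), (200, 200, 1, 2)],
                  dtype=EVENT_DTYPE)
inside = roi_filter(events, 5, 5, 100, 100, output_relative_coordinates=True)
```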

class metavision_sdk_core.RoiMaskAlgorithm(self: metavision_sdk_core.RoiMaskAlgorithm, pixel_mask: numpy.ndarray[numpy.float64]) None

Class that only propagates events which are contained in a certain region of interest.

The Region Of Interest (ROI) is defined by a mask (a numpy array in Python, cv::Mat in the underlying C++ API). An event is validated if the mask at the event's position stores a positive value.

Alternatively, the user can enable rectangular regions, each defined by its upper left and lower right corners, that propagate any event inside them.

Builds a new RoiMaskAlgorithm object which propagates events in the given window.

pixel_mask

Mask of pixels that should be retained (pixel <= 0 is filtered)

enable_rectangle(self: metavision_sdk_core.RoiMaskAlgorithm, x0: int, y0: int, x1: int, y1: int) None

Enables a rectangular region, defined by its upper left and lower right corners, that propagates any event inside it.

x0

X coordinate of the upper left corner

y0

Y coordinate of the upper left corner

x1

X coordinate of the lower right corner

y1

Y coordinate of the lower right corner

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

max_height(self: metavision_sdk_core.RoiMaskAlgorithm) int

Returns the maximum number of pixels (height) of the mask.

return

Maximum height of the mask

max_width(self: metavision_sdk_core.RoiMaskAlgorithm) int

Returns the maximum number of pixels (width) of the mask.

return

Maximum width of the mask

pixel_mask(self: metavision_sdk_core.RoiMaskAlgorithm) numpy.ndarray[numpy.float64]

Returns the pixel mask of the filter.

return

numpy array containing the pixel mask of the filter

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.RoiMaskAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.RoiMaskAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.RoiMaskAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

set_pixel_mask(self: metavision_sdk_core.RoiMaskAlgorithm, mask: numpy.ndarray[numpy.float64]) None

Sets the pixel mask of the filter.

mask

Pixel mask to be used while filtering
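The mask-lookup rule ("an event passes when the mask value at its position is positive") can be sketched in pure numpy. The EVENT_DTYPE field widths and the tiny 4x4 mask are illustrative assumptions; the mask-at-(y, x) indexing convention is an assumption as well, chosen to match the usual row-major image layout.

```python
import numpy as np

EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

height, width = 4, 4
pixel_mask = np.zeros((height, width), dtype=np.float64)
pixel_mask[1:3, 1:3] = 1.0  # retain only the central 2x2 block

events = np.array([(0, 0, 1, 0), (1, 2, 0, 1), (2, 1, 1, 2), (3, 3, 0, 3)],
                  dtype=EVENT_DTYPE)

# An event is validated when the mask value at its (y, x) position is positive.
kept = events[pixel_mask[events['y'], events['x']] > 0]
```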

class metavision_sdk_core.RollingEventBufferConfig(self: metavision_sdk_core.RollingEventBufferConfig) None

static make_n_events(n_events: int) metavision_sdk_core.RollingEventBufferConfig

Creates a RollingEventBufferConfig for the N_EVENTS mode.

n_events

Number of events to store

return

a RollingEventBufferConfig for the N_EVENTS mode

static make_n_us(n_us: int) metavision_sdk_core.RollingEventBufferConfig

Creates a RollingEventBufferConfig for the N_US mode.

n_us

Time slice duration in microseconds

return

a RollingEventBufferConfig for the N_US mode

class metavision_sdk_core.RollingEventCDBuffer(self: metavision_sdk_core.RollingEventCDBuffer, arg0: metavision_sdk_core.RollingEventBufferConfig) None

Constructs a RollingEventBuffer with the specified configuration.

config

The configuration for the rolling buffer

capacity(self: metavision_sdk_core.RollingEventCDBuffer) int

Returns the maximum capacity of the buffer.

return

The maximum capacity of the buffer

clear(self: metavision_sdk_core.RollingEventCDBuffer) None

Clears the buffer, removing all stored events.

empty(self: metavision_sdk_core.RollingEventCDBuffer) bool

Checks if the buffer is empty.

return

true if the buffer is empty, false otherwise

insert_events(*args, **kwargs)

Overloaded function.

  1. insert_events(self: metavision_sdk_core.RollingEventCDBuffer, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This function inserts events from a numpy array into the rolling buffer based on the current mode (N_US or N_EVENTS)
input_np

input chunk of events

  2. insert_events(self: metavision_sdk_core.RollingEventCDBuffer, input_buf: metavision_sdk_base.EventCDBuffer) -> None

This function inserts events from an event buffer into the rolling buffer based on the current mode (N_US or N_EVENTS)
input_buf

input chunk of events

size(self: metavision_sdk_core.RollingEventCDBuffer) int

Returns the current number of events stored in the buffer.

return

The number of events stored
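In N_EVENTS mode, the rolling buffer behaves like a fixed-capacity FIFO over the event stream. A minimal sketch of that behavior using collections.deque (the RollingNEventsBuffer class and EVENT_DTYPE field widths are illustrative assumptions, not the SDK implementation):

```python
from collections import deque

import numpy as np

EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

class RollingNEventsBuffer:
    """Keeps only the most recent n_events events, like the N_EVENTS mode."""

    def __init__(self, n_events):
        # deque with maxlen silently drops the oldest entries on overflow.
        self._buf = deque(maxlen=n_events)

    def insert_events(self, events_np):
        self._buf.extend(events_np.tolist())

    def numpy(self):
        return np.array(list(self._buf), dtype=EVENT_DTYPE)

    def __len__(self):
        return len(self._buf)

buf = RollingNEventsBuffer(n_events=3)
buf.insert_events(np.array([(i, i, 1, i) for i in range(5)], dtype=EVENT_DTYPE))
```

After inserting five events into a three-event buffer, only the three most recent remain; N_US mode works analogously but evicts by timestamp age rather than by count.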

class metavision_sdk_core.RotateEventsAlgorithm(self: metavision_sdk_core.RotateEventsAlgorithm, width_minus_one: int, height_minus_one: int, rotation: float) None

Class that rotates an event stream.

Note

We assume the rotation to happen with respect to the center of the image

Builds a new RotateEventsAlgorithm object with the given width and height.

width_minus_one

Maximum X coordinate of the events (width-1)

height_minus_one

Maximum Y coordinate of the events (height-1)

rotation

Value in radians used for the rotation

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.RotateEventsAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.RotateEventsAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(*args, **kwargs)

Overloaded function.

  1. process_events_(self: metavision_sdk_core.RotateEventsAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

  2. process_events_(self: metavision_sdk_core.RotateEventsAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

set_rotation(self: metavision_sdk_core.RotateEventsAlgorithm, new_angle: float) None

Sets the new rotation angle.

new_angle

New angle in rad
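A rotation about the image center can be sketched in pure numpy. The rotate_events helper is an illustrative assumption: the documented pieces are the width_minus_one/height_minus_one bounds, the angle in radians, and the rotation about the center; the rounding and the dropping of events that leave the frame are plausible details, not confirmed behavior.

```python
import math

import numpy as np

EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

def rotate_events(events, width_minus_one, height_minus_one, angle_rad):
    """Rotate event coordinates around the image center; drop events
    that fall outside the frame after rotation."""
    cx, cy = width_minus_one / 2.0, height_minus_one / 2.0
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    dx = events['x'].astype(np.float64) - cx
    dy = events['y'].astype(np.float64) - cy
    new_x = np.round(cos_a * dx - sin_a * dy + cx)
    new_y = np.round(sin_a * dx + cos_a * dy + cy)
    keep = ((new_x >= 0) & (new_x <= width_minus_one) &
            (new_y >= 0) & (new_y <= height_minus_one))
    out = events[keep].copy()
    out['x'] = new_x[keep].astype(np.uint16)
    out['y'] = new_y[keep].astype(np.uint16)
    return out

events = np.array([(0, 0, 1, 0), (9, 9, 0, 1)], dtype=EVENT_DTYPE)
rotated = rotate_events(events, 9, 9, math.pi)  # 180-degree rotation
```

A 180-degree rotation of a 10x10 frame maps corner (0, 0) to (9, 9) and vice versa.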

class metavision_sdk_core.TransposeEventsAlgorithm(self: metavision_sdk_core.TransposeEventsAlgorithm) None

Class that switches X and Y coordinates of an event stream. This filter changes the dimensions of the corresponding frame (width and height are switched)

static get_empty_output_buffer() metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.TransposeEventsAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer
input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.TransposeEventsAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(*args, **kwargs)

Overloaded function.

  1. process_events_(self: metavision_sdk_core.TransposeEventsAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

  2. process_events_(self: metavision_sdk_core.TransposeEventsAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
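The transpose is a pure coordinate swap, so the in-place overloads apply directly. A minimal numpy sketch (EVENT_DTYPE field widths are an assumption; only the field order is documented):

```python
import numpy as np

EVENT_DTYPE = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])

events = np.array([(3, 7, 1, 0), (5, 2, 0, 1)], dtype=EVENT_DTYPE)

# Swap X and Y in place, as the transpose filter does; downstream consumers
# must swap the frame dimensions (width and height) accordingly.
events['x'], events['y'] = events['y'].copy(), events['x'].copy()
```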

metavision_sdk_core.EventBbox : numpy.dtype for numpy structured arrays of EventBbox

DType class corresponding to the scalar type and dtype of the same name.

Please see numpy.dtype for the typical way to create dtype instances and arrays.dtypes for additional information.

class metavision_sdk_core.EventBboxBuffer(self: metavision_sdk_core.EventBboxBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_core.EventBboxBuffer, copy: bool = False) numpy.ndarray[Metavision::EventBbox]
copy

If True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_core.EventBboxBuffer, size: int) None

Resizes the buffer to the specified size

size

the new size of the buffer

metavision_sdk_core.EventTrackedBox : numpy.dtype for numpy structured arrays of EventTrackedBox

DType class corresponding to the scalar type and dtype of the same name.

Please see numpy.dtype for the typical way to create dtype instances and arrays.dtypes for additional information.

class metavision_sdk_core.EventTrackedBoxBuffer(self: metavision_sdk_core.EventTrackedBoxBuffer, size: int = 0) None

Constructor

numpy(self: metavision_sdk_core.EventTrackedBoxBuffer, copy: bool = False) numpy.ndarray[Metavision::EventTrackedBox]
copy

If True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory as the buffer

resize(self: metavision_sdk_core.EventTrackedBoxBuffer, size: int) None

Resizes the buffer to the specified size

size

the new size of the buffer