SDK Core Python bindings API

class metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm

Class used to split a stream of events into slices of variable duration and variable number of events.

This algorithm produces reasonably sharp slices of events, based on the content of the stream itself. Internally, it computes the variance per event as a criterion for the sharpness of the current slice of events. An additional criterion is the maximum proportion of active pixels containing both positive and negative events.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> bool

Takes a chunk of events (numpy array of EventCD) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

  2. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> bool

Takes a chunk of events (EventCDBuffer) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

retrieve_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

Retrieves the events (EventCDBuffer) and reinitializes the state of the EventsSplitter.
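The typical call pattern, calling process_events() until it returns True and then retrieve_events(), can be sketched with a stand-in class that mimics the interface. The stand-in slices every fixed number of events instead of adaptively, and returns an array directly (the real retrieve_events fills a pre-allocated EventCDBuffer); it is illustrative only, not part of the SDK:

```python
import numpy as np

class _FixedCountSplitter:
    """Stand-in mimicking the splitter interface; slices every `n` events
    instead of adaptively. Illustrative only, not part of the SDK."""
    def __init__(self, n):
        self._n = n
        self._pending = []

    def process_events(self, events_np):
        # Accumulate the chunk; report whether a slice is ready.
        self._pending.append(events_np)
        return sum(len(e) for e in self._pending) >= self._n

    def retrieve_events(self):
        # Hand back the slice and reinitialize the internal state.
        out = np.concatenate(self._pending)
        self._pending = []
        return out

dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
splitter = _FixedCountSplitter(100)
slices = []
for _ in range(5):                      # five chunks of 60 events each
    chunk = np.zeros(60, dtype=dtype)
    if splitter.process_events(chunk):  # True once a slice is ready
        slices.append(splitter.retrieve_events())
# slices now holds two slices of 120 events each
```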

class metavision_sdk_core.BaseFrameGenerationAlgorithm

static bg_color_default() -> tuple

Returns the default Prophesee dark palette background color.

static generate_frame(events: numpy.ndarray[metavision_sdk_base._EventCD_decode], frame: numpy.ndarray, accumulation_time_us: int = 0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) -> None

Stand-alone (static) method to generate a frame from events.

All events in the interval ]t - dt, t] are used, where t is the timestamp of the last event in the buffer and dt is accumulation_time_us. If accumulation_time_us is kept to 0, all input events are used. If there are no events, a frame filled with the background color is generated.

events

Array of input events

frame

Pre-allocated frame that will be filled with CD events. It must have the same geometry as the input event source, and the number of channels corresponding to the given palette (3 channels by default)

accumulation_time_us

Time range of events to update the frame with (in us). 0 to use all events.

palette

The Prophesee color palette to use
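A minimal NumPy sketch of the accumulation rule described above: keep events in ]t - dt, t], then paint on/off colors into a pre-allocated frame. The color values below are illustrative, not the exact Prophesee palette:

```python
import numpy as np

# CD events as a numpy structured array (x, y, polarity, timestamp).
dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array([(0, 0, 1, 10), (1, 0, 0, 40), (2, 1, 1, 50)], dtype=dtype)

dt = 20
t = events["t"][-1]                                         # t = 50
kept = events[(events["t"] > t - dt) & (events["t"] <= t)]  # ]30, 50]

frame = np.zeros((2, 3, 3), dtype=np.uint8)  # (height, width, channels)
frame[...] = (64, 64, 64)                    # background (illustrative color)
frame[kept["y"], kept["x"]] = np.where(
    kept["p"][:, None] == 1, (255, 255, 255), (0, 0, 0))
```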

get_dimension(self: metavision_sdk_core.BaseFrameGenerationAlgorithm) -> tuple

Gets the frame’s dimension, a tuple (height, width, channels)

static off_color_default() -> tuple

Returns the default Prophesee dark palette negative event color.

static on_color_default() -> tuple

Returns the default Prophesee dark palette positive event color.

set_color_palette(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, palette: metavision_sdk_core.ColorPalette) -> None

Sets the color palette used to generate the frame.

palette

The Prophesee color palette to use

set_colors(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, background_color: List[int], on_color: List[int], off_color: List[int], colored: bool = True) -> None

Sets the colors used to generate the frame.

background_color

Color used as background, when no events were received for a pixel

on_color

Color used for on events

off_color

Color used for off events

colored

If true, the generated frame is in color (three channels); otherwise it is grayscale (single channel)

class metavision_sdk_core.ColorPalette

Members:

Light

Dark

Gray

class metavision_sdk_core.ColorType

Members:

Background

Positive

Negative

Auxiliary

class metavision_sdk_core.FlipXAlgorithm

Class that mirrors the X axis of an event stream.

The transfer function of this filter impacts only the X coordinates of the Event2d by: x = width_minus_one - x
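The transfer function can be reproduced on a structured event array in plain NumPy, which is effectively what process_events_ does in place (the 640-pixel sensor width is an assumption for the example):

```python
import numpy as np

# CD events as a numpy structured array (x, y, polarity, timestamp).
dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array([(0, 5, 1, 100), (639, 2, 0, 110)], dtype=dtype)

width = 640                               # sensor width, assumed here
events["x"] = (width - 1) - events["x"]   # x = width_minus_one - x
```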

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer.

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer.

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipXAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events.

events_np

numpy structured array of events used as input/output. Its content will be overwritten

class metavision_sdk_core.FlipYAlgorithm

Class that mirrors the Y axis of an event stream.

The transfer function of this filter impacts only the Y coordinates of the Event2d by: y = height_minus_one - y

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer.

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer.

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipYAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events.

events_np

numpy structured array of events used as input/output. Its content will be overwritten

class metavision_sdk_core.OnDemandFrameGenerationAlgorithm

generate(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, ts: int, frame: numpy.ndarray) -> None

Generates a frame.

ts

Timestamp at which to generate the frame

frame

Frame that will be filled with CD events

allocate

Allocates the frame if true. Otherwise, the user must ensure the validity of the input frame. This is to be used when the data ptr must not change (external allocation, ROI over another cv::Mat, …)

warning

This method is expected to be called with timestamps increasing monotonically.

invalid_argument

Exception raised if ts is older than the last frame generation and the reset method hasn't been called in the meantime

invalid_argument

Exception raised if the frame doesn't have the expected type and geometry

get_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm) -> int

Returns the current accumulation time (in us).

process_events(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events

set_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, accumulation_time_us: int) -> None

Sets the accumulation time (in us) to use to generate a frame.

The generated frame will only hold events in the interval [t - dt, t[ where t is the timestamp at which the frame is generated, and dt the accumulation time. However, if accumulation_time_us is set to 0, all events since the last generated frame are used.

accumulation_time_us

Time range of events to update the frame with (in us)
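Note the half-open interval: closed on the left, open on the right, which is the opposite closure from the static generate_frame (]t - dt, t]). A NumPy sketch of the selection:

```python
import numpy as np

dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array([(0, 0, 1, 30), (1, 0, 0, 40), (2, 0, 1, 50)], dtype=dtype)

t, dt = 50, 20
# [t - dt, t[ : the event at t == 50 itself is excluded.
kept = events[(events["t"] >= t - dt) & (events["t"] < t)]
```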

class metavision_sdk_core.PeriodicFrameGenerationAlgorithm

force_generate(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) -> None

Forces the generation of a frame for the current period with the input events that have been processed.

This is intended to be used at the end of a process if one wants to generate frames with the remaining events. This effectively calls the output_cb and updates the next timestamp at which a frame is to be generated.

get_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) -> int

Returns the current accumulation time (in us).

get_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) -> float

Returns the current fps at which frames are generated.

process_events(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events

reset(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) -> None

Resets the internal states.

set_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, accumulation_time_us: int) -> None

Sets the accumulation time (in us) to use to generate a frame.

The generated frame will only hold events in the interval [t - dt, t[ where t is the timestamp at which the frame is generated, and dt the accumulation time.

accumulation_time_us

Time range of events to update the frame with (in us)

set_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, fps: float) -> None

Sets the fps at which to generate frames and thus the frequency of the asynchronous calls.

The time reference used is the one from the input events.

fps

The fps to use. If the fps is 0, the current accumulation time is used to compute it
std::invalid_argument

If the input fps is negative

set_output_callback(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, arg0: object) -> None

Sets a callback to retrieve the frame

skip_frames_up_to(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, ts: int) -> None

Skips the generation of frames up to the timestamp ts.

ts

Timestamp up to which only one image will be generated, i.e. the closest full timeslice before this timestamp
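The asynchronous callback pattern can be sketched with a stand-in that fires once per period. The stand-in class, its constructor, and the 2x2 dummy frame are illustrative only; the real class is constructed with the sensor geometry and produces real frames:

```python
import numpy as np

class _PeriodicStub:
    """Stand-in illustrating the callback pattern of
    PeriodicFrameGenerationAlgorithm. Illustrative only."""
    def __init__(self, period_us):
        self._period = period_us
        self._next_ts = period_us
        self._cb = None

    def set_output_callback(self, cb):
        self._cb = cb

    def process_events(self, events_np):
        # Fire the callback once per elapsed period, as the real
        # algorithm does when a frame period completes.
        while events_np["t"][-1] >= self._next_ts:
            self._cb(self._next_ts, np.zeros((2, 2, 3), dtype=np.uint8))
            self._next_ts += self._period

frames = []
gen = _PeriodicStub(period_us=10_000)
gen.set_output_callback(lambda ts, frame: frames.append((ts, frame.copy())))
dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
gen.process_events(np.array([(0, 0, 1, 25_000)], dtype=dtype))
# frames now holds the frames generated at ts = 10_000 and 20_000
```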

class metavision_sdk_core.MostRecentTimestampBuffer

Class representing a buffer of the most recent timestamps observed at each pixel of the camera.

A most recent timestamp buffer is also called a time surface.

note

The interface follows the one of cv::Mat

property channels

Gets the number of channels of the buffer.

property cols

Gets the number of columns of the buffer.

generate_img_time_surface(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) -> None

Generates a CV_8UC1 image of the time surface for the 2 channels.

Side-by-side: negative polarity time surface, positive polarity time surface. The time surface is normalized between last_ts (0) and last_ts - delta_t (255).

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image

generate_img_time_surface_collapsing_channels(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) -> None

Generates a CV_8UC1 image of the time surface, merging the 2 channels.

The time surface is normalized between last_ts (0) and last_ts - delta_t (255)

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image
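The normalization described above can be sketched in NumPy for a single channel: timestamp ages within delta_t map linearly to pixel values, with last_ts at 0, last_ts - delta_t at 255, and older timestamps saturating at 255 (the buffer values below are illustrative):

```python
import numpy as np

# One channel of a time surface: most recent timestamp per pixel.
ts_buffer = np.array([[100, 60], [20, 100]], dtype=np.int64)
last_ts, delta_t = 100, 80

# Age of each pixel's timestamp, clipped so older entries saturate.
age = np.clip(last_ts - ts_buffer, 0, delta_t)
img = (255 * age / delta_t).astype(np.uint8)
```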

max_across_channels_at(self: metavision_sdk_core.MostRecentTimestampBuffer, y: int, x: int) -> int

Retrieves the maximum timestamp across channels at the specified pixel.

y

The pixel’s ordinate

x

The pixel’s abscissa

return

The maximum timestamp at that pixel across all the channels in the buffer

numpy(self: metavision_sdk_core.MostRecentTimestampBuffer, copy: bool = False) -> numpy.ndarray[numpy.int64]

Converts the buffer to a numpy array.

property rows

Gets the number of rows of the buffer.

set_to(self: metavision_sdk_core.MostRecentTimestampBuffer, ts: int) -> None

Sets all elements of the timestamp buffer to a constant.

ts

The constant timestamp value

class metavision_sdk_core.PolarityFilterAlgorithm

Filter class that only propagates events of a given polarity.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer.

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer.

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events.

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()
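The filtering itself amounts to a boolean mask on the polarity field. In plain NumPy (the kept polarity, 1 here, is chosen at construction in the real class):

```python
import numpy as np

dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array([(0, 0, 1, 0), (1, 0, 0, 1), (2, 0, 1, 2)], dtype=dtype)

kept_polarity = 1                        # set at construction in the SDK
filtered = events[events["p"] == kept_polarity]
```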

class metavision_sdk_core.PolarityInverterAlgorithm

Class that implements a polarity inverter filter.

The filter inverts the polarity of all filtered events.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer.

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer.

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityInverterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events.

events_np

numpy structured array of events used as input/output. Its content will be overwritten

class metavision_sdk_core.RoiFilterAlgorithm

Class that only propagates events contained in a window of interest defined by the coordinates of its upper left and lower right corners.

static get_empty_output_buffer() -> metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later be used as output_buf when calling process_events().

is_resetting(self: metavision_sdk_core.RoiFilterAlgorithm) -> bool

Returns true if the algorithm returns events expressed in coordinates relative to the ROI.

return

true if the algorithm is resetting the filtered events

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer.

input_np

input chunk of events (numpy structured array)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer.

input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.RoiFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> None

This method applies the current algorithm to a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events.

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

property x0

Returns the x coordinate of the upper left corner of the ROI window.

Returns

X coordinate of the upper left corner

property x1

Returns the x coordinate of the lower right corner of the ROI window.

Returns

X coordinate of the lower right corner

property y0

Returns the y coordinate of the upper left corner of the ROI window.

Returns

Y coordinate of the upper left corner

property y1

Returns the y coordinate of the lower right corner of the ROI window.

Returns

Y coordinate of the lower right corner
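A NumPy sketch of the ROI filter: keep events with x0 <= x <= x1 and y0 <= y <= y1, and, as when is_resetting() is True, re-express coordinates relative to the ROI's upper left corner (the corner values and events are illustrative):

```python
import numpy as np

dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array(
    [(10, 10, 1, 0), (50, 50, 0, 1), (200, 5, 1, 2), (30, 90, 1, 3)],
    dtype=dtype)

x0, y0, x1, y1 = 20, 20, 100, 100
roi = events[(events["x"] >= x0) & (events["x"] <= x1)
             & (events["y"] >= y0) & (events["y"] <= y1)].copy()
roi["x"] -= x0   # relative coordinates, as when is_resetting() is True
roi["y"] -= y0
```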

class metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities

Class that produces a MostRecentTimestampBuffer (a.k.a. time surface) from events.

This algorithm is asynchronous in the sense that it can be configured to produce a time surface every N events, every N microseconds or a mixed condition of both (see AsyncAlgorithm).

Like in other asynchronous algorithms, in order to retrieve the produced time surface, the user needs to set a callback that will be called when the above condition is fulfilled. However, as opposed to other algorithms, the user doesn’t have here the capacity to take ownership of the produced time surface (using a swap mechanism for example). Indeed, swapping the time surface would make the producer lose the whole history. If the user needs to use the time surface out of the output callback, then a copy must be done.

CHANNELS

Number of channels to use for producing the time surface. Only two values are possible for now: 1 or 2. With a 1-channel time surface, events of both polarities are stored together; with a 2-channel time surface, they are stored separately.

This time surface contains only one channel (events with different polarities are stored together in the same channel). To use separate channels for polarities, use TimeSurfaceProducerAlgorithmSplitPolarities instead.

process_events(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

Processes a buffer of events for later time surface production

events_np

numpy structured array of events

set_output_callback(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities, arg0: object) -> None

Sets a callback to retrieve the produced time surface.
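What a 1-channel (merged) time surface stores can be sketched in NumPy: the most recent timestamp at each pixel, regardless of polarity, assuming time-ordered events so that later writes win:

```python
import numpy as np

dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
events = np.array([(0, 0, 1, 5), (0, 0, 0, 9), (1, 1, 1, 7)], dtype=dtype)

surface = np.zeros((2, 2), dtype=np.int64)   # (rows, cols), one channel
# Events are time-ordered, so for a pixel hit twice the last (most
# recent) timestamp is the one kept.
surface[events["y"], events["x"]] = events["t"]
```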

class metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities

Class that produces a MostRecentTimestampBuffer (a.k.a. time surface) from events.

This algorithm is asynchronous in the sense that it can be configured to produce a time surface every N events, every N microseconds or a mixed condition of both (see AsyncAlgorithm).

Like in other asynchronous algorithms, in order to retrieve the produced time surface, the user needs to set a callback that will be called when the above condition is fulfilled. However, as opposed to other algorithms, the user doesn’t have here the capacity to take ownership of the produced time surface (using a swap mechanism for example). Indeed, swapping the time surface would make the producer lose the whole history. If the user needs to use the time surface out of the output callback, then a copy must be done.

CHANNELS

Number of channels to use for producing the time surface. Only two values are possible for now: 1 or 2. With a 1-channel time surface, events of both polarities are stored together; with a 2-channel time surface, they are stored separately.

This time surface contains two channels (events with different polarities are stored in separate channels). To use a single channel, use TimeSurfaceProducerAlgorithmMergePolarities instead.

process_events(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> None

Processes a buffer of events for later time surface production

events_np

numpy structured array of events

set_output_callback(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities, arg0: object) -> None

Sets a callback to retrieve the produced time surface.