SDK Core Python bindings API

class metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, height: int, width: int, thr_var_per_event: float = 0.0005, downsampling_factor: int = 2) → None

Class used to split a stream of events into slices of variable duration and variable number of events.

This algorithm produces reasonably sharp slices of events, based on the content of the stream itself. Internally, it computes the variance per event as a criterion for the sharpness of the current slice of events. An additional criterion is the maximum proportion of active pixels containing both positive and negative events.

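The variance-per-event criterion can be illustrated with plain numpy: accumulate events into a downsampled per-pixel count image, then compare the variance of that image, normalized by the number of events, against thr_var_per_event. This is only a sketch of the idea (the event dtype below is an assumption; the algorithm's actual internal statistics may differ):

```python
import numpy as np

height, width, downsampling_factor = 480, 640, 2
thr_var_per_event = 5e-4

# Synthetic events: structured array with the mandatory ('x', 'y', 'p', 't') field order
dtype = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
rng = np.random.default_rng(0)
events = np.zeros(10000, dtype=dtype)
events['x'] = rng.integers(0, width, events.size)
events['y'] = rng.integers(0, height, events.size)

# Downsample coordinates by 2**downsampling_factor before computing statistics
xs = events['x'] >> downsampling_factor
ys = events['y'] >> downsampling_factor
counts = np.zeros((height >> downsampling_factor, width >> downsampling_factor))
np.add.at(counts, (ys, xs), 1)

# Variance per event as a sharpness criterion for the current slice (sketch)
var_per_event = counts.var() / events.size
split_now = var_per_event >= thr_var_per_event
```

With the SDK, the equivalent pattern is to feed chunks to process_events() until it returns True, then call retrieve_events() to fetch the slice and reset the internal state.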
Constructs a new AdaptiveRateEventsSplitterAlgorithm.

height

height of the input frame of events

width

width of the input frame of events

thr_var_per_event

minimum variance per pixel value to reach before considering splitting the slice

downsampling_factor

performs a downsampling of the input before computing the statistics. Original coordinates will be multiplied by 2**(-downsampling_factor)

static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) -> bool

Takes a chunk of events (numpy array of EventCD) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

  2. process_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) -> bool

Takes a chunk of events (EventCDBuffer) and updates the internal state of the EventsSplitter. Returns True if the frame is ready, False otherwise.

retrieve_events(self: metavision_sdk_core.AdaptiveRateEventsSplitterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) → None

Retrieves the events (EventCDBuffer) and reinitializes the state of the EventsSplitter.

class metavision_sdk_core.BaseFrameGenerationAlgorithm

static bg_color_default() → tuple

Returns default Prophesee dark palette background color.

static generate_frame(events: numpy.ndarray[metavision_sdk_base._EventCD_decode], frame: numpy.ndarray, accumulation_time_us: int = 0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) → None

Stand-alone (static) method to generate a frame from events

All events in the interval ]t - dt, t] are used, where t is the timestamp of the last event in the buffer and dt is accumulation_time_us. If accumulation_time_us is kept to 0, all input events are used. If there are no events, a frame filled with the background color is generated

events

Numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

frame

Pre-allocated frame that will be filled with CD events. It must have the same geometry as the input event source, and the color corresponding to the given palette (3 channels by default)

accumulation_time_us

Time range of events to update the frame with (in us). 0 to use all events.

palette

The Prophesee’s color palette to use

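The accumulation logic described above can be sketched in plain numpy: keep only events in ]t - dt, t], then paint on/off colors over a background-filled frame. This mirrors the documented behavior rather than the SDK implementation; the event dtype and the BGR color values are illustrative assumptions:

```python
import numpy as np

# Assumed EventCD-compatible dtype with the mandatory ('x', 'y', 'p', 't') order
dtype = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events = np.array([(10, 5, 1, 1000), (20, 5, 0, 4000), (30, 6, 1, 9000)], dtype=dtype)

height, width = 480, 640
accumulation_time_us = 6000
bg, on, off = (30, 37, 52), (216, 223, 236), (64, 126, 201)  # illustrative colors

frame = np.empty((height, width, 3), np.uint8)
frame[...] = bg
t = events['t'][-1]  # timestamp of the last event in the buffer
if accumulation_time_us > 0:
    events = events[events['t'] > t - accumulation_time_us]  # keep ]t - dt, t]
# Paint positive events with the "on" color, negative events with the "off" color
frame[events['y'], events['x']] = np.where(events['p'][:, None] == 1, on, off)
```
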
get_dimension(self: metavision_sdk_core.BaseFrameGenerationAlgorithm) → tuple

Gets the frame’s dimension, a tuple (height, width, channels)

static off_color_default() → tuple

Returns default Prophesee dark palette negative event color.

static on_color_default() → tuple

Returns default Prophesee dark palette positive event color.

set_color_palette(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, palette: metavision_sdk_core.ColorPalette) → None

Sets the color used to generate the frame.

palette

The Prophesee’s color palette to use

set_colors(self: metavision_sdk_core.BaseFrameGenerationAlgorithm, background_color: List[int], on_color: List[int], off_color: List[int], colored: bool = True) → None

Sets the color used to generate the frame.

background_color

Color used as background, when no events were received for a pixel

on_color

Color used for on events

off_color

Color used for off events

colored

Whether the generated frame should be in color (three channels, if true) or grayscale (single channel, if false)

class metavision_sdk_core.ColorPalette(self: metavision_sdk_core.ColorPalette, value: int) → None

Members:

Light

Dark

Gray

class metavision_sdk_core.ColorType(self: metavision_sdk_core.ColorType, value: int) → None

Members:

Background

Positive

Negative

Auxiliary

class metavision_sdk_core.FlipXAlgorithm(self: metavision_sdk_core.FlipXAlgorithm, width_minus_one: int) → None

Class that mirrors the X axis of an event stream.

The transfer function of this filter impacts only the X coordinates of the Event2d by: x = width_minus_one - x

Builds a new FlipXAlgorithm object with the given width.

width_minus_one

Maximum X coordinate of the events (width-1)

static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipXAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipXAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

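The transfer function is simple enough to verify in numpy; process_events_ applies the same in-place mapping to the 'x' field. A sketch, independent of the SDK (the event dtype is an assumption):

```python
import numpy as np

width_minus_one = 639  # for a 640-pixel-wide sensor

dtype = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events = np.array([(0, 1, 1, 10), (100, 2, 0, 20), (639, 3, 1, 30)], dtype=dtype)

# FlipX transfer function: x = width_minus_one - x (in place, like process_events_)
events['x'] = width_minus_one - events['x']
```
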
class metavision_sdk_core.FlipYAlgorithm(self: metavision_sdk_core.FlipYAlgorithm, height_minus_one: int) → None

Class that mirrors the Y axis of an event stream.

The transfer function of this filter impacts only the Y coordinates of the Event2d by: y = height_minus_one - y

Builds a new FlipYAlgorithm object with the given height.

height_minus_one

Maximum Y coordinate of the events (height-1)

static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.FlipYAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.FlipYAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

class metavision_sdk_core.OnDemandFrameGenerationAlgorithm(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, width: int, height: int, accumulation_time_us: int = 10000, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) → None

Constructor.

width

Sensor’s width (in pixels)

height

Sensor’s height (in pixels)

accumulation_time_us

Time range of events to update the frame with (in us) (See set_accumulation_time_us)

palette

The Prophesee’s color palette to use

generate(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, ts: int, frame: numpy.ndarray) → None

Generates a frame.

ts

Timestamp at which to generate the frame

frame

Frame that will be filled with CD events

allocate

Allocates the frame if true. Otherwise, the user must ensure the validity of the input frame. This is to be used when the data ptr must not change (external allocation, ROI over another cv::Mat, …)

warning

This method is expected to be called with monotonically increasing timestamps. Raises invalid_argument if ts is older than the last frame generation and the reset method hasn't been called in the meantime

invalid_argument

if the frame doesn’t have the expected type and geometry

get_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm) → int

Returns the current accumulation time (in us).

process_events(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

set_accumulation_time_us(self: metavision_sdk_core.OnDemandFrameGenerationAlgorithm, accumulation_time_us: int) → None

Sets the accumulation time (in us) to use to generate a frame.

Frame generated will only hold events in the interval [t - dt, t[ where t is the timestamp at which the frame is generated, and dt the accumulation time. However, if accumulation_time_us is set to 0, all events since the last generated frame are used

accumulation_time_us

Time range of events to update the frame with (in us)

class metavision_sdk_core.PeriodicFrameGenerationAlgorithm(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, sensor_width: int, sensor_height: int, accumulation_time_us: int = 10000, fps: float = 0.0, palette: metavision_sdk_core.ColorPalette = <ColorPalette.Dark: 1>) → None

Inherits BaseFrameGenerationAlgorithm. Algorithm that generates frames from events at a fixed rate (fps). The reference clock used is the one of the input events

Parameters
  • sensor_width (int) – Sensor’s width (in pixels)

  • sensor_height (int) – Sensor’s height (in pixels)

  • accumulation_time_us (timestamp) – Accumulation time (in us) (@ref set_accumulation_time_us)

  • fps (float) – The fps at which to generate the frames. The time reference used is the one from the input events (@ref set_fps)

  • palette (ColorPalette) – The Prophesee’s color palette to use (@ref set_color_palette)

Raises std::invalid_argument if the input fps is negative or if the input accumulation time is not strictly positive

force_generate(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) → None

Forces the generation of a frame for the current period with the input events that have been processed.

This is intended to be used at the end of a process if one wants to generate frames with the remaining events. This effectively calls the output_cb and updates the next timestamp at which a frame is to be generated

get_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) → int

Returns the current accumulation time (in us).

get_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) → float

Returns the current fps at which frames are generated.

process_events(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

reset(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm) → None

Resets the internal states.

set_accumulation_time_us(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, accumulation_time_us: int) → None

Sets the accumulation time (in us) to use to generate a frame.

Frame generated will only hold events in the interval [t - dt, t[ where t is the timestamp at which the frame is generated, and dt the accumulation time

accumulation_time_us

Time range of events to update the frame with (in us)

set_fps(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, fps: float) → None

Sets the fps at which to generate frames and thus the frequency of the asynchronous calls.

The time reference used is the one from the input events

fps

The fps to use. If the fps is 0, the current accumulation time is used to compute it. Raises std::invalid_argument if the input fps is negative

set_output_callback(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, arg0: object) → None

Sets a callback to retrieve the frame

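The callback pattern can be sketched without the SDK. The callback is assumed here to receive the generation timestamp and the generated frame (check the SDK samples for the exact signature); the frame is copied in case it must outlive the callback:

```python
import numpy as np

last_frames = []

def on_frame(ts, frame):
    # Copy: the producer may reuse the frame buffer after the callback returns
    last_frames.append((ts, frame.copy()))

# With the SDK this would be: frame_gen.set_output_callback(on_frame)
# Here the callback is exercised with a dummy frame to show the pattern:
on_frame(10000, np.zeros((480, 640, 3), np.uint8))
```
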
skip_frames_up_to(self: metavision_sdk_core.PeriodicFrameGenerationAlgorithm, ts: int) → None

Skips the generation of frames up to the timestamp ts.

ts

Timestamp up to which only one image will be generated, i.e. the closest full timeslice before this timestamp

class metavision_sdk_core.MostRecentTimestampBuffer(self: metavision_sdk_core.MostRecentTimestampBuffer, rows: int, cols: int, channels: int = 1) → None

Class representing a buffer of the most recent timestamps observed at each pixel of the camera.

A most recent timestamp buffer is also called time surface.

note

The interface follows that of cv::Mat

Initialization constructor.

rows

Sensor’s height

cols

Sensor’s width

channels

Number of channels

property channels

Gets the number of channels of the buffer.

property cols

Gets the number of columns of the buffer.

generate_img_time_surface(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) → None

Generates a CV_8UC1 image of the time surface for the 2 channels.

Side-by-side: negative polarity time surface, then positive polarity time surface. The time surface is normalized between last_ts (255) and last_ts - delta_t (0)

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image

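The normalization described above (last_ts maps to 255, last_ts - delta_t to 0, older timestamps clipped) can be reproduced in numpy. This is a sketch of the mapping, not the SDK code:

```python
import numpy as np

last_ts, delta_t = 100000, 50000
ts_buf = np.array([[100000, 75000], [50000, 0]], np.int64)  # toy 2x2 time surface

# Map [last_ts - delta_t, last_ts] -> [0, 255], clipping older timestamps to 0
norm = np.clip((ts_buf - (last_ts - delta_t)) * 255 / delta_t, 0, 255)
img = norm.astype(np.uint8)  # CV_8UC1-like output
```
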
generate_img_time_surface_collapsing_channels(self: metavision_sdk_core.MostRecentTimestampBuffer, last_ts: int, delta_t: int, out: numpy.ndarray) → None

Generates a CV_8UC1 image of the time surface, merging the 2 channels.

The time surface is normalized between last_ts (255) and last_ts - delta_t (0)

last_ts

Last timestamp value stored in the buffer

delta_t

Delta time, with respect to last_ts, above which timestamps are not considered for the image generation

out

The produced image

max_across_channels_at(self: metavision_sdk_core.MostRecentTimestampBuffer, y: int, x: int) → int

Retrieves the maximum timestamp across channels at the specified pixel.

y

The pixel’s ordinate

x

The pixel’s abscissa

return

The maximum timestamp at that pixel across all the channels in the buffer

numpy(self: metavision_sdk_core.MostRecentTimestampBuffer, copy: bool = False) → numpy.ndarray[numpy.int64]

Converts to a numpy array

property rows

Gets the number of rows of the buffer.

set_to(self: metavision_sdk_core.MostRecentTimestampBuffer, ts: int) → None

Sets all elements of the timestamp buffer to a constant.

ts

The constant timestamp value

class metavision_sdk_core.PolarityFilterAlgorithm(self: metavision_sdk_core.PolarityFilterAlgorithm, polarity: int = 0) → None

Class filter that only propagates events of a certain polarity.

Creates a PolarityFilterAlgorithm class with the given polarity.

polarity

Polarity to keep

static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) → None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

class metavision_sdk_core.PolarityInverterAlgorithm(self: metavision_sdk_core.PolarityInverterAlgorithm) → None

Class that implements a Polarity Inverter filter.

The filter changes the polarity of all the filtered events.

Builds a new PolarityInverterAlgorithm object.

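For CD events with polarities 0 and 1, the inversion amounts to p -> 1 - p. A numpy sketch of what process_events_ does in place (the event dtype is an assumption):

```python
import numpy as np

dtype = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events = np.array([(1, 1, 0, 10), (2, 2, 1, 20)], dtype=dtype)

# Invert polarity in place: 0 <-> 1
events['p'] = 1 - events['p']
```
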
static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.PolarityInverterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.PolarityInverterAlgorithm, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input/output. This method should only be used when the number of output events is the same as the number of input events

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’) used as input/output. Its content will be overwritten

class metavision_sdk_core.RoiFilterAlgorithm(self: metavision_sdk_core.RoiFilterAlgorithm, x0: int, y0: int, x1: int, y1: int, output_relative_coordinates: bool = False) → None

Class that only propagates events which are contained in a certain window of interest defined by the coordinates of the upper left corner and the lower right corner.

Builds a new RoiFilterAlgorithm object which propagates events in the given window.

x0

X coordinate of the upper left corner of the ROI window

y0

Y coordinate of the upper left corner of the ROI window

x1

X coordinate of the lower right corner of the ROI window

y1

Y coordinate of the lower right corner of the ROI window

output_relative_coordinates

If false, events that passed the ROI filter are expressed in the whole image coordinates. If true, they are expressed in the ROI coordinates system

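The filtering and the output_relative_coordinates option can be sketched in numpy. Inclusive window bounds and the event dtype are assumptions here, not a statement about the SDK implementation:

```python
import numpy as np

x0, y0, x1, y1 = 100, 50, 200, 150
output_relative_coordinates = True

dtype = np.dtype([('x', '<u2'), ('y', '<u2'), ('p', '<i2'), ('t', '<i8')])
events = np.array([(150, 100, 1, 10), (10, 10, 0, 20), (200, 150, 1, 30)], dtype=dtype)

# Keep only events inside the window (inclusive bounds assumed)
mask = ((events['x'] >= x0) & (events['x'] <= x1) &
        (events['y'] >= y0) & (events['y'] <= y1))
roi_events = events[mask].copy()
if output_relative_coordinates:
    roi_events['x'] -= x0  # express coordinates in the ROI system
    roi_events['y'] -= y0
```
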
static get_empty_output_buffer() → metavision_sdk_base.EventCDBuffer

This function returns an empty buffer of events of the correct type, which can later on be used as output_buf when calling process_events()

is_resetting(self: metavision_sdk_core.RoiFilterAlgorithm) → bool

Returns true if the algorithm returns events expressed in coordinates relative to the ROI.

return

true if the algorithm remaps the filtered events to ROI-relative coordinates

process_events(*args, **kwargs)

Overloaded function.

  1. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_np: numpy.ndarray[metavision_sdk_base._EventCD_decode], output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes a numpy array as input and writes the results into the specified output event buffer

input_np

input chunk of events (numpy structured array whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

  2. process_events(self: metavision_sdk_core.RoiFilterAlgorithm, input_buf: metavision_sdk_base.EventCDBuffer, output_buf: metavision_sdk_base.EventCDBuffer) -> None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input and writes the results into a distinct output event buffer
input_buf

input chunk of events (event buffer)

output_buf

output buffer of events. It can be converted to a numpy structured array using .numpy()

process_events_(self: metavision_sdk_core.RoiFilterAlgorithm, events_buf: metavision_sdk_base.EventCDBuffer) → None

This method is used to apply the current algorithm on a chunk of events. It takes an event buffer as input/output. This should only be used when the number of output events is the same as the number of input events

events_buf

Buffer of events used as input/output. Its content will be overwritten. It can be converted to a numpy structured array using .numpy()

property x0

Returns the x coordinate of the upper left corner of the ROI window.

Returns

X coordinate of the upper left corner

property x1

Returns the x coordinate of the lower right corner of the ROI window.

Returns

X coordinate of the lower right corner

property y0

Returns the y coordinate of the upper left corner of the ROI window.

Returns

Y coordinate of the upper left corner

property y1

Returns the y coordinate of the lower right corner of the ROI window.

Returns

Y coordinate of the lower right corner

class metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities, width: int, height: int) → None

Class that produces a MostRecentTimestampBuffer (a.k.a. time surface) from events.

This algorithm is asynchronous in the sense that it can be configured to produce a time surface every N events, every N microseconds or a mixed condition of both (see AsyncAlgorithm).

Like in other asynchronous algorithms, in order to retrieve the produced time surface, the user needs to set a callback that will be called when the above condition is fulfilled. However, as opposed to other algorithms, the user doesn’t have here the capacity to take ownership of the produced time surface (using a swap mechanism for example). Indeed, swapping the time surface would make the producer lose the whole history. If the user needs to use the time surface out of the output callback, then a copy must be done.

CHANNELS

Number of channels to use for producing the time surface. Only two values are possible for now: 1 or 2. When a 1-channel time surface is used, events with different polarities are stored all together, while they are stored separately when using a 2-channel time surface.

This time surface contains only one channel (events with different polarities are stored together in the same channel). To use separate channels for polarities, use TimeSurfaceProducerAlgorithmSplitPolarities instead

Constructs a new time surface producer.

width

Sensor’s width

height

Sensor’s height

process_events(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

set_output_callback(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmMergePolarities, arg0: object) → None

Sets a callback to retrieve the produced time surface

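Since the producer keeps ownership of the time surface, copying is the way to use it outside the output callback; MostRecentTimestampBuffer.numpy(copy=True) provides such a copy. The callback signature below (timestamp plus time surface) is an assumption to check against the SDK samples; the pattern is exercised here with a plain numpy stand-in:

```python
import numpy as np

history = []

def on_time_surface(ts, time_surface):
    # The producer retains ownership: copy before storing for later use.
    # With the SDK this would be: history.append((ts, time_surface.numpy(copy=True)))
    history.append((ts, np.array(time_surface, copy=True)))

# producer.set_output_callback(on_time_surface)  # SDK call
on_time_surface(5000, np.zeros((480, 640), np.int64))  # exercised with a stand-in
```
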
class metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities, width: int, height: int) → None

Class that produces a MostRecentTimestampBuffer (a.k.a. time surface) from events.

This algorithm is asynchronous in the sense that it can be configured to produce a time surface every N events, every N microseconds or a mixed condition of both (see AsyncAlgorithm).

Like in other asynchronous algorithms, in order to retrieve the produced time surface, the user needs to set a callback that will be called when the above condition is fulfilled. However, as opposed to other algorithms, the user doesn’t have here the capacity to take ownership of the produced time surface (using a swap mechanism for example). Indeed, swapping the time surface would make the producer lose the whole history. If the user needs to use the time surface out of the output callback, then a copy must be done.

CHANNELS

Number of channels to use for producing the time surface. Only two values are possible for now: 1 or 2. When a 1-channel time surface is used, events with different polarities are stored all together, while they are stored separately when using a 2-channel time surface.

This time surface contains two channels (events with different polarities are stored in separate channels). To use a single channel, use TimeSurfaceProducerAlgorithmMergePolarities instead

Constructs a new time surface producer.

width

Sensor’s width

height

Sensor’s height

process_events(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities, events_np: numpy.ndarray[metavision_sdk_base._EventCD_decode]) → None

Processes a buffer of events for later frame generation

events_np

numpy structured array of events whose fields are (‘x’, ‘y’, ‘p’, ‘t’). Note that this order is mandatory

set_output_callback(self: metavision_sdk_core.TimeSurfaceProducerAlgorithmSplitPolarities, arg0: object) → None

Sets a callback to retrieve the produced time surface

metavision_sdk_core.EventBbox : numpy.dtype for numpy structured arrays of EventBbox

class metavision_sdk_core.EventBboxBuffer(self: metavision_sdk_core.EventBboxBuffer, size: int = 0) → None

Constructor

numpy(self: metavision_sdk_core.EventBboxBuffer, copy: bool = False) → numpy.ndarray[Metavision::EventBbox]

copy

If True, allocates new memory and returns a copy of the events. If False, the returned array uses the same memory