Processing Events with Algorithms

SDK Algorithms

The algorithms folders found in the C++ include directories of some SDK modules (Core and all the Advanced modules) contain a collection of common algorithms used to process or filter events.

These algorithms are, most of the time, implemented as classes with a process_events method working on ranges of events. Similarly to the C++ STL algorithms, the SDK represents a range of events by a pair of iterators or pointers for the input and a starting iterator or pointer for the output.

Note

Our C++ algorithms are independent of the HAL layer/module. They don’t rely on Prophesee sensors or file formats. So, as long as you have buffers of events as described in the API documentation, you will be able to leverage them in your application.

For example, in SDK Core, FlipYAlgorithm implements a simple algorithm that inverts the Y coordinates of the input events. It exposes this function template, which takes a pair of iterators as input (first and last) and outputs y-flipped events to the d_first iterator:

template<class InputIt, class OutputIt>
void process_events(InputIt first, InputIt last, OutputIt d_first)
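
To make this calling convention concrete, here is a minimal sketch that applies FlipYAlgorithm to a plain std::vector of CD events, with no camera or file involved (the header paths and the constructor argument, assumed here to be the maximum Y coordinate, should be checked against the API reference):

    #include <iterator>
    #include <vector>

    #include <metavision/sdk/base/events/event_cd.h>
    #include <metavision/sdk/core/algorithms/flip_y_algorithm.h>

    int main() {
        const int sensor_height = 480;
        // any buffer of CD events works, it does not have to come from a camera or a file
        std::vector<Metavision::EventCD> input = {{10, 20, 0, 1000}, {30, 40, 1, 2000}}; // x, y, polarity, timestamp
        std::vector<Metavision::EventCD> output;

        // assumption: the constructor takes the maximum Y coordinate, i.e. sensor_height - 1
        Metavision::FlipYAlgorithm flipy_algo(sensor_height - 1);
        flipy_algo.process_events(input.cbegin(), input.cend(), std::back_inserter(output));
        // output now holds the same events with y replaced by (sensor_height - 1 - y)
        return 0;
    }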

Another example can be found in SDK CV: ActivityNoiseFilterAlgorithm implements an algorithm that filters events based on their activity. It exposes this function template, which takes a pair of iterators as input (it_begin and it_end) and outputs filtered events to the inserter iterator:

template<class InputIt, class OutputIt>
inline OutputIt process_events(InputIt it_begin, InputIt it_end, OutputIt inserter)
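
Note that, unlike the previous signature, this one returns the output iterator pointing just past the last event written, which is convenient when filtering into a pre-allocated buffer instead of using std::back_inserter. Below is a rough sketch of that pattern; MyFilterAlgorithm is a hypothetical stand-in for any SDK filter exposing this interface (such as ActivityNoiseFilterAlgorithm):

    #include <iterator>
    #include <vector>

    #include <metavision/sdk/base/events/event_cd.h>

    // hypothetical filter exposing the same interface as the SDK filtering algorithms:
    // it copies to the output only the events that pass the test and returns the end of the output range
    struct MyFilterAlgorithm {
        template<class InputIt, class OutputIt>
        OutputIt process_events(InputIt it_begin, InputIt it_end, OutputIt inserter) {
            for (auto it = it_begin; it != it_end; ++it) {
                if (it->p == 1) // toy criterion: keep only ON events
                    *inserter++ = *it;
            }
            return inserter;
        }
    };

    int main() {
        std::vector<Metavision::EventCD> input = {{10, 20, 0, 1000}, {30, 40, 1, 2000}};
        // pre-allocate the output to the worst case (all events kept)...
        std::vector<Metavision::EventCD> output(input.size());
        MyFilterAlgorithm filter;
        auto out_end = filter.process_events(input.cbegin(), input.cend(), output.begin());
        // ...then shrink it to the number of events actually written
        output.resize(std::distance(output.begin(), out_end));
        return 0;
    }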

Using Algorithms with C++ API

For a hands-on experience with SDK algorithms using C++, the simplest approach is to explore our collection of samples. These demonstrate the processing of events:

  • the simplest example in C++ using the Camera class of SDK Stream API can be found in our C++ Get Started tutorial, where we show how to apply FlipXAlgorithm on the stream directly in the CD events callback:

    auto flipx_algo = Metavision::FlipXAlgorithm(camera_width - 1);
    cam.cd().add_callback([&](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        // we use a vector of CD events to store the output of the algo
        std::vector<Metavision::EventCD> output;
        flipx_algo.process_events(begin, end, std::back_inserter(output));
        // continue processing the events in output
    });
    
  • another simple example in C++ using the Camera class of SDK Stream API can be found in the Metavision Viewer page where we mention how a filter algorithm can be applied to the event stream.

  • a more complex C++ example (also based on the Camera class of SDK Stream API) can be found in metavision_data_rate, where multiple algorithms are applied to the event stream (a rough sketch of such chaining is shown after this list).

  • for even more examples in C++, refer to the Code Samples of the Advanced Modules.
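
As a rough sketch of how several algorithms can be chained on the same stream, here is the Get Started callback extended so that the output of a first algorithm feeds a second one (assuming PolarityFilterAlgorithm exposes the same process_events(first, last, d_first) interface as the filters above; cam and camera_width come from the Get Started snippet; the actual metavision_data_rate sample is more elaborate):

    auto polarity_filter = Metavision::PolarityFilterAlgorithm(1); // keep only ON events
    auto flipx_algo      = Metavision::FlipXAlgorithm(camera_width - 1);
    std::vector<Metavision::EventCD> filtered, flipped;
    cam.cd().add_callback([&](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        filtered.clear();
        flipped.clear();
        // first algorithm: keep only the ON events
        polarity_filter.process_events(begin, end, std::back_inserter(filtered));
        // second algorithm: flip the X coordinates of the remaining events
        flipx_algo.process_events(filtered.cbegin(), filtered.cend(), std::back_inserter(flipped));
        // continue processing the events in flipped
    });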

Using Algorithms with Python API

For a hands-on experience with SDK algorithms using Python, the simplest approach is to explore our collection of samples. These demonstrate the processing of events:

  • the simplest example in Python using the EventsIterator of SDK Core API can be found in the metavision_filtering sample, where we show how to apply PolarityFilterAlgorithm on the stream of CD events:

    # polarity (the currently selected polarity), mv_iterator (an EventsIterator on the input)
    # and events_buf (an output buffer for the filtered events) are created earlier in the sample
    polarity_filters = {Polarity.OFF: PolarityFilterAlgorithm(0), Polarity.ON: PolarityFilterAlgorithm(1)}
    for evs in mv_iterator:
        if polarity in polarity_filters:
            polarity_filters[polarity].process_events(evs, events_buf)
            # continue processing the events in events_buf
    
  • If you are using the lower-level HAL Python API, as shown in the metavision_hal_get_started sample, you should process the event buffers within the CD event callback.

  • for more examples in Python, refer to the Code Samples of the Advanced Modules.

Algorithms Tuning

There are different levels of tuning available for the algorithms:

  • some algorithms have a fixed behavior and cannot be adjusted. For example, FlipXAlgorithm, as shown in the previous sections.

  • some algorithms take arguments upon initialization. For example, with PolarityFilterAlgorithm we can specify which event polarity should be kept:

    // keeping only OFF events in C++
    auto polarity_filter = std::make_unique<Metavision::PolarityFilterAlgorithm>(0);
    
    # keeping only ON events in Python
    polarity_filter = metavision_sdk_core.PolarityFilterAlgorithm(1)
    
  • some algorithms come with numerous parameters and use a dedicated configuration class. For example, TrackingAlgorithm uses the configuration class TrackingConfig. Here is how to choose the type of cluster maker used by the TrackingAlgorithm to build clusters from the input events:

    // in C++
    Metavision::TrackingConfig tracking_config;
    tracking_config.cluster_maker_ = Metavision::TrackingConfig::ClusterMaker::MEDOID_SHIFT;
    auto tracker = std::make_unique<Metavision::TrackingAlgorithm>(sensor_width, sensor_height, tracking_config);
    
    # in Python
    tracking_config = TrackingConfig()
    tracking_config.cluster_maker = TrackingConfig.ClusterMaker.MedoidShift
    tracker = metavision_sdk_analytics.TrackingAlgorithm(sensor_width=width, sensor_height=height, tracking_config=tracking_config)
    

To delve deeper into customizing the algorithms, you will need to access their source code:

  • Some algorithms are found in SDK Core, which is part of the Open modules. The source code of this module is available as open source in OpenEB, ready to be cloned and compiled. Feel free to submit a Pull Request if you have enhancements to propose.

  • The other algorithms are found in the Advanced modules. The source code of those modules is part of our SDK offer.

Going Further

From here, you can: