Generating Frames

There are two algorithms that allow you to generate frames from events: PeriodicFrameGenerationAlgorithm and OnDemandFrameGenerationAlgorithm.

When these classes generate an image at timestamp t, they assume that the user has provided all the events up to t using the process_events method, so that the generated frame makes sense. However, nothing prevents the user from also providing events more recent than t. Both classes will simply ignore these events when generating the frame at time t, and use them later when needed.
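
To make this buffering behavior concrete, here is a minimal sketch. The hand-built event array and its dtype are assumptions chosen to match the EventCD numpy layout; they are not taken from this page:

import numpy as np
from metavision_sdk_core import OnDemandFrameGenerationAlgorithm

width, height = 640, 480
gen = OnDemandFrameGenerationAlgorithm(width, height, accumulation_time_us=10000)

# Two hand-built CD events, one at t = 5000 us and one at t = 15000 us
evs = np.zeros(2, dtype=[("x", np.uint16), ("y", np.uint16), ("p", np.int16), ("t", np.int64)])
evs["x"], evs["y"], evs["p"], evs["t"] = [10, 20], [10, 20], [1, 1], [5000, 15000]
gen.process_events(evs)

frame = np.zeros((height, width, 3), np.uint8)
gen.generate(10000, frame)  # uses only the event at t = 5000 us
gen.generate(20000, frame)  # the event at t = 15000 us is consumed now, not lost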

The generation of overlapping frames is supported: the time between two consecutive frame generations can be shorter than the accumulation time, so some events will be displayed in several consecutive frames. We call this scheme over-accumulation, as mentioned in the section about frame generation on the event-based concepts page.
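
For instance, the following sketch (assuming width and height are already known from the camera geometry) requests frames every 10 ms while accumulating 20 ms of events, so every event contributes to two consecutive frames:

from metavision_sdk_core import PeriodicFrameGenerationAlgorithm

# 100 FPS -> one frame every 10 ms; each frame accumulates the last 20 ms of
# events, so consecutive frames overlap by 10 ms (over-accumulation)
frame_gen = PeriodicFrameGenerationAlgorithm(width, height, accumulation_time_us=20000, fps=100)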

The two classes differ, however, in how frame generation is triggered: it is either internal or external to the class, as the list and the minimal sketch below illustrate.

  • PeriodicFrameGenerationAlgorithm: the class triggers the frame generation by itself and outputs frames regularly spaced in time through its output callback.

  • OnDemandFrameGenerationAlgorithm: the user can request a frame generation at any timestamp using the generate method, which is expected to be called with timestamps increasing monotonically.
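
Here is a minimal sketch contrasting the two triggering styles (assuming width and height are already known; neither generator is fed events here, so it only shows the API shape):

import numpy as np
from metavision_sdk_core import OnDemandFrameGenerationAlgorithm, PeriodicFrameGenerationAlgorithm

# Internal triggering: frames arrive through the callback at the requested FPS
periodic = PeriodicFrameGenerationAlgorithm(width, height, accumulation_time_us=10000, fps=50)
periodic.set_output_callback(lambda ts, frame: print("frame ready at", ts))

# External triggering: the user requests each frame at a chosen timestamp
# (timestamps must increase monotonically from one call to the next)
on_demand = OnDemandFrameGenerationAlgorithm(width, height, accumulation_time_us=10000)
frame = np.zeros((height, width, 3), np.uint8)
on_demand.generate(10000, frame)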

How to choose between them?

If you want the visualization frame to be automatically generated at a given frequency, then use the PeriodicFrameGenerationAlgorithm:

  • to match the capabilities of a display or a video encoder

  • to visualize the last results from a producer while ignoring intermediate ones

If you explicitly want to request when to generate a visualization frame, then use the OnDemandFrameGenerationAlgorithm:

  • to precisely control which events are included in the visualization

  • to adapt the generation rate to the publication rate of an asynchronous producer

Example using the C++ API

Here is a simple example using the C++ API. We open a RAW file with the Metavision::Camera class and generate frames at 50 FPS with a 20 ms accumulation time using PeriodicFrameGenerationAlgorithm.

#include <metavision/sdk/stream/camera.h>
#include <metavision/sdk/core/algorithms/periodic_frame_generation_algorithm.h>
#include <metavision/sdk/ui/utils/window.h>
#include <metavision/sdk/ui/utils/event_loop.h>

int main(int argc, char *argv[]) {
    // Expects the path to a RAW file as first argument
    auto cam = Metavision::Camera::from_file(argv[1]);

    // Retrieve the camera geometry to size the frames and the window
    const auto w = cam.geometry().width();
    const auto h = cam.geometry().height();
    const std::uint32_t acc = 20000; // accumulation time in us (20 ms)
    double fps = 50;                 // frame generation frequency
    auto frame_gen = Metavision::PeriodicFrameGenerationAlgorithm(w, h, acc, fps);

    Metavision::Window window("Frames", w, h, Metavision::BaseWindow::RenderMode::BGR);

    // Display each frame as soon as the algorithm produces it
    frame_gen.set_output_callback([&](Metavision::timestamp, cv::Mat &frame){
        window.show(frame);
    });

    // Feed all CD events received from the camera to the frame generator
    cam.cd().add_callback([&](const Metavision::EventCD *begin, const Metavision::EventCD *end){
        frame_gen.process_events(begin, end);
    });

    cam.start();
    // Keep dispatching system events to the window while the camera is running
    while(cam.is_running()){
        Metavision::EventLoop::poll_and_dispatch(20);
    }
    cam.stop();

    return 0;
}

You can elaborate on this sample code to build a viewer that plays RAW files at various speeds by adjusting the FPS and the accumulation time. For example, setting the FPS to 500 and the accumulation time to 2 ms will play files in slow motion.

Note

The C++ sample metavision_viewer uses CDFrameGenerator, another frame generator based on PeriodicFrameGenerationAlgorithm that also handles multithreading.

Now that you know how to generate frames from events, the next logical step is to see how to display frames using Metavision SDK.

Examples using the Python API

Note

In these examples, we are not only generating the frames but also displaying them. A focus on the displaying part is done in a dedicated guide.

Using EventsIterator

Let’s start by opening a live camera with an EventsIterator:

import cv2
import numpy as np
from metavision_core.event_io import EventsIterator
from metavision_sdk_core import OnDemandFrameGenerationAlgorithm, PeriodicFrameGenerationAlgorithm
from metavision_sdk_ui import EventLoop, Window

# Events iterator on a live camera (pass a file path instead to read a recording)
mv_iterator = EventsIterator(input_path="", delta_t=1e3)
height, width = mv_iterator.get_size()  # Camera Geometry

Now we can open a window and use the PeriodicFrameGenerationAlgorithm to generate frames at 50 FPS using an accumulation time of 10ms, with the timestamp written in the top left corner.

with Window("Periodic frame generator", width, height, Window.RenderMode.BGR) as window:
    # Do something whenever a frame is ready
    def periodic_cb(ts, frame):
        cv2.putText(frame, "Timestamp: " + str(ts), (0, 10), cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
        window.show(frame)

    # Instantiate the frame generator
    periodic_gen = PeriodicFrameGenerationAlgorithm(width, height, accumulation_time_us=10000, fps=50)
    periodic_gen.set_output_callback(periodic_cb)

    for evs in mv_iterator:
        EventLoop.poll_and_dispatch()  # Dispatch system events to the window
        periodic_gen.process_events(evs)  # Feed events to the frame generator
        if window.should_close():
            break

The same result can be achieved using the OnDemandFrameGenerationAlgorithm, but this time we need to manually control the frame generation timestamps.

with Window("OnDemand frame generator", width, height, Window.RenderMode.BGR) as window:
    # Instantiate the frame generator
    on_demand_gen = OnDemandFrameGenerationAlgorithm(width, height, accumulation_time_us=10000)

    frame_period_us = int(1e6/50)  # 50 FPS
    next_processing_ts = frame_period_us
    frame = np.zeros((height, width, 3), np.uint8)
    for evs in mv_iterator:
        EventLoop.poll_and_dispatch()  # Dispatch system events to the window

        on_demand_gen.process_events(evs)  # Feed events to the frame generator

        ts = evs["t"][-1] # Trigger new frame generations as long as the last event is high enough
        while(ts > next_processing_ts):
            on_demand_gen.generate(next_processing_ts, frame)
            cv2.putText(frame, "Timestamp: " + str(next_processing_ts),
                        (0, 10), cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
            window.show(frame)
            next_processing_ts += frame_period_us

        if window.should_close():
            break

Let’s now consider the case where we have an asynchronous algorithm Algo that regularly provides results through its output callback. We can use the OnDemandFrameGenerationAlgorithm and request a frame generation directly from the callback of Algo, i.e. as soon as new results are available.
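
Algo is not part of the SDK; it stands for any user algorithm with this interface. To make the snippet below runnable, here is a minimal mock of it (its counting logic and the label it reports are illustrative assumptions):

# Minimal mock of the hypothetical asynchronous algorithm used below: it counts
# events and, once per frame period, reports the count and the most frequent
# polarity as a label through its output callback
class Algo:
    def __init__(self, width, height, fps=50):
        self.period_us = int(1e6 / fps)
        self.next_ts = self.period_us
        self.count = 0     # total number of events seen
        self.positive = 0  # number of positive (p == 1) events
        self.cb = None

    def set_output_callback(self, cb):
        self.cb = cb

    def process_events(self, evs):
        if evs.size == 0:
            return
        self.count += evs.size
        self.positive += int(evs["p"].sum())
        # Fire the callback for every frame period fully covered by the events
        while self.cb is not None and evs["t"][-1] > self.next_ts:
            label = "positive" if 2 * self.positive >= self.count else "negative"
            self.cb(self.next_ts, self.count, label)
            self.next_ts += self.period_us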

with Window("On demand frame generator + Custom asynchronous algo", width, height, Window.RenderMode.BGR) as window:
    # Instantiate the frame generator
    on_demand_gen = OnDemandFrameGenerationAlgorithm(width, height, accumulation_time_us=10000)
    frame = np.zeros((height, width, 3), np.uint8)

    # Callback invoked by the algorithm with its results (a count and a label);
    # generate the frame at the results' timestamp
    def algo_cb(ts, count, label):
        on_demand_gen.generate(ts, frame)
        cv2.putText(frame, "Timestamp      : " + str(ts), (0, 10), cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
        cv2.putText(frame, "Count          : " + str(count), (0, 20), cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
        cv2.putText(frame, "Most seen label: " + label, (0, 30), cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 255, 0))
        window.show(frame)

    algo = Algo(width, height, fps=50)
    algo.set_output_callback(algo_cb)

    for evs in mv_iterator:
        EventLoop.poll_and_dispatch()  # Dispatch system events to the window

        on_demand_gen.process_events(evs)  # First feed events to the frame generator
        algo.process_events(evs)  # Then feed events to the algorithm

        if window.should_close():
            break

Using RawReader

As described in the section Reading Events with SDK Core Python API, an alternative way to read the events is to use the RawReader. The same algorithms as described above can be used to generate frames, but simpler code can be enough in some situations. Hence, here is a naive example of how to build frames with NumPy and display them with Matplotlib:

import numpy as np
from metavision_core.event_io.raw_reader import RawReader
from matplotlib import pyplot as plt

def create_image_from_events(events, height, width):
    # Gray background; positive events (p == 1) are drawn white, negative (p == 0) black
    img = np.full((height, width, 3), 128, dtype=np.uint8)
    img[events['y'], events['x']] = 255 * events['p'][:, None]
    return img

raw_stream = RawReader("/path/to/file.raw")  # use empty string to open a camera
height, width = raw_stream.get_size()

while not raw_stream.is_done():
    events = raw_stream.load_delta_t(100000)  # load the next 100 ms of events
    im = create_image_from_events(events, height, width)
    plt.figure()
    plt.imshow(im)
    plt.axis('off')
    plt.show() # press "q" to close this figure and allow the program to continue