Focus Adjustment

The metavision_blinking_pattern_focus application uses the Calibration API to let you easily adjust the focus of an event-based camera.

To get the best results, the camera should be placed in front of a blinking pattern at the desired distance (that is, the distance that we want to be in focus). The blinking pattern can be generated by the application or by other means.

The application creates a frame from the events generated by the blinking pattern. It then computes the Discrete Fourier Transform (DFT) of the created frame and a score measuring the sharpness of the observed edges. The score determines the quality of focus: the higher the score, the sharper the edges and the better the focus.
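The idea behind the score can be illustrated with a minimal sketch (plain C++ on a 1-D signal instead of a frame; the function names and the exact band split are assumptions for illustration, not the actual SDK implementation): the share of spectral energy in the high-frequency bins of the DFT rises when edges are sharp.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Naive DFT of a real signal (O(n^2), fine for a sketch).
std::vector<std::complex<double>> dft(const std::vector<double> &x) {
    const std::size_t n = x.size();
    const double pi     = std::acos(-1.0);
    std::vector<std::complex<double>> out(n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = 0; i < n; ++i)
            out[k] += x[i] * std::polar(1.0, -2.0 * pi * double(k * i) / double(n));
    return out;
}

// Hypothetical focus score: fraction of the spectral magnitude (DC excluded)
// that lies in the upper half of the usable band. Sharper edges -> higher score.
double high_freq_score(const std::vector<double> &x) {
    const auto spectrum    = dft(x);
    const std::size_t half = x.size() / 2;
    double total = 0.0, high = 0.0;
    for (std::size_t k = 1; k <= half; ++k) {
        const double mag = std::abs(spectrum[k]);
        total += mag;
        if (k >= half / 2)
            high += mag;
    }
    return total > 0.0 ? high / total : 0.0;
}
```

A blurred edge concentrates its energy in the low-frequency bins, so its score drops compared to a sharp step.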

To adjust the focus, change the aperture (if available) and focus distance (if available) on the objective lens until you get the highest score possible.

Note that the score depends on the focus but also on other factors (lens, distance from the pattern, etc), so it is not always possible to compare scores between different focusing sessions.

The source code of this application can be found in <install-prefix>/share/metavision/sdk/calibration/apps/metavision_blinking_pattern_focus when installing Metavision SDK from installer or packages. For other deployment methods, check the page Path of Samples.

Expected Output

The Metavision Focus Adjustment application visualizes all events generated by the camera (on the left) and the blinking events (on the right), along with an output score indicating the sharpness of the edges and thus the quality of the focus:

Expected Output from Focus Adjustment Application

How to start

You can either use the pre-compiled executable or compile the source code as described in this tutorial.

For the blinking pattern, you can use the pattern file blink-pattern.jpg located in /usr/share/metavision/sdk/calibration/apps/metavision_blinking_pattern_focus on Ubuntu and C:\Program Files\Prophesee\share\metavision\sdk\calibration\apps\metavision_blinking_pattern_focus on Windows.

The easiest way to use this application is to provide a path to the pattern file on the command line. This way the pattern will be shown by the application and you can point the camera to it:

On Linux:

metavision_blinking_pattern_focus --pattern-image-path PATTERN_FILE_PATH

On Windows:

metavision_blinking_pattern_focus.exe --pattern-image-path PATTERN_FILE_PATH

Alternatively, you can open the blinking pattern in a third-party tool and then start the pre-compiled executable without the pattern option:

On Linux:

metavision_blinking_pattern_focus

On Windows:

metavision_blinking_pattern_focus.exe
To check for additional options:

On Linux:

metavision_blinking_pattern_focus -h

On Windows:

metavision_blinking_pattern_focus.exe -h


After displaying the blinking pattern, you might observe a remanent image on your screen. This is not permanent damage to your screen and should go away after a while. Nevertheless, we advise using the flickering image only during the focusing procedure and closing it when you are done.

Code Overview


The application implements the following pipeline:


Optional Stages for Blinking Pattern Generation

The blinking pattern can be generated either from a pre-recorded image/video or by the application itself via a dedicated command-line option. If this option is enabled, both the pattern blinker stage and its display stage are started. These stages run independently from the others.

SpatioTemporalContrast Stage

The Metavision::SpatioTemporalContrastAlgorithm is used as a first stage to filter out noise and reduce the number of events to process.

The filtered events are sent to the next stages.
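The principle of such a filter can be sketched with a simplified stand-in (this is not the actual Metavision::SpatioTemporalContrastAlgorithm; the `Event` struct, `stc_filter` name, and exact rule are assumptions for illustration): an event is kept only if the same pixel fired recently, which discards isolated noise events.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Event {
    std::uint16_t x, y;  // pixel coordinates
    std::int16_t p;      // polarity
    std::int64_t t;      // timestamp in microseconds
};

// Simplified spatio-temporal noise filter sketch: an event passes only if the
// same pixel already fired within the last `threshold_us` microseconds.
std::vector<Event> stc_filter(const std::vector<Event> &events, std::int64_t threshold_us) {
    std::unordered_map<std::uint32_t, std::int64_t> last_t; // pixel key -> last timestamp
    std::vector<Event> out;
    for (const auto &ev : events) {
        const std::uint32_t key = (std::uint32_t(ev.y) << 16) | ev.x;
        auto it                 = last_t.find(key);
        if (it != last_t.end() && ev.t - it->second <= threshold_us)
            out.push_back(ev); // recent activity at this pixel: keep the event
        last_t[key] = ev.t;
    }
    return out;
}
```

An isolated event (no recent neighbor in time at the same pixel) is dropped, while sustained activity such as a blinking pattern passes through.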

Blinking Frame Generator Stage

This stage uses the Metavision::BlinkingFrameGeneratorAlgorithm to detect events triggered by the blinking pattern. The implementation is straightforward as it considers a pixel has blinked when at least two events with different polarities have been triggered at that location within a given time window.
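The detection rule described above can be sketched as follows (a hypothetical helper, not the Metavision::BlinkingFrameGeneratorAlgorithm itself; the `PolEvent` struct and function name are assumptions): a pixel is considered blinking if two events of opposite polarity occur there within a given time window.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct PolEvent {
    std::int16_t p;  // polarity: 0 or 1
    std::int64_t t;  // timestamp in microseconds
};

// Returns true if the pixel received at least two events with different
// polarities within `window_us` microseconds of each other.
bool pixel_blinked(const std::vector<PolEvent> &events_at_pixel, std::int64_t window_us) {
    for (std::size_t i = 1; i < events_at_pixel.size(); ++i)
        for (std::size_t j = 0; j < i; ++j)
            if (events_at_pixel[i].p != events_at_pixel[j].p &&
                events_at_pixel[i].t - events_at_pixel[j].t <= window_us)
                return true;
    return false;
}
```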

The algorithm is asynchronous in the sense that it produces a binary frame of blinking pixels every time enough blinking pixels have been detected. In other words, for each input buffer of events, the algorithm might produce 0, 1 or N binary frames.

To feed the Metavision::BlinkingFrameGeneratorAlgorithm with the input CD events, we specify the consuming callback of the stage using the Metavision::BaseStage::set_consuming_callback() method:

set_consuming_callback([this](const boost::any &data) {
    try {
        auto buffer = boost::any_cast<EventBufferPtr>(data);
        if (buffer->empty())
            return;
        successful_cb_ = false;
        blink_detector_->process_events(buffer->cbegin(), buffer->cend());
        if (!successful_cb_)
            produce(std::make_pair(buffer->crbegin()->t, FramePtr())); // Temporal marker
    } catch (boost::bad_any_cast &c) { MV_LOG_ERROR() << c.what(); }
});

To retrieve the binary frames we then subscribe to the output callback of the Metavision::BlinkingFrameGeneratorAlgorithm. When the callback is called, we send the binary frame to the next stages using the Metavision::BaseStage::produce() method:

frame_pool_ = FramePool::make_bounded();
blink_detector_->set_output_callback([this](Metavision::timestamp ts, cv::Mat &blinking_img) {
    successful_cb_        = true;
    auto output_frame_ptr = frame_pool_.acquire();
    cv::swap(blinking_img, *output_frame_ptr);
    produce(std::make_pair(ts, output_frame_ptr));
});


Because the Metavision::BlinkingFrameGeneratorAlgorithm passes the binary frame to the callback via a non-const reference, we can swap it out and avoid useless copies. This way, the algorithm can keep updating the next binary frame while the current one is sent to the next stages without any copy.
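The swap trick can be reduced to a few lines (std::vector stands in for cv::Mat here, and `take_frame` is a hypothetical helper, not SDK code): exchanging the underlying storage is O(1), so the consumer receives the produced data without a deep copy and the producer gets a recycled buffer back.

```cpp
#include <utility>
#include <vector>

// Exchange the produced frame's storage with a recycled buffer from a pool,
// then hand the data to the consumer. No element-wise copy takes place.
std::vector<unsigned char> take_frame(std::vector<unsigned char> &produced,
                                      std::vector<unsigned char> pooled) {
    std::swap(produced, pooled); // producer keeps the (old) pooled buffer for reuse
    return pooled;               // consumer gets the produced data, move-returned
}
```

The same O(1) storage exchange is what cv::swap provides for cv::Mat in the snippet above.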


In the code snippets above, the successful_cb_ flag is used to detect when the algorithm doesn’t produce any frame. In that case, we send an empty result to the next stages to ease the synchronization and act as a temporal marker.

Discrete Fourier Transform Stage

This stage uses the Metavision::DftHighFreqScorerAlgorithm to compute the Discrete Fourier Transform (DFT) on the input binary frame. The DFT is then used to compute a focus score. This stage also produces a frame for visualization which contains the DFT score.

The Metavision::DftHighFreqScorerAlgorithm is synchronous as it produces a new DFT score for each input frame. To reduce the computation cost, the algorithm can be configured to check whether the input frame has changed since the previous one. A new DFT score is then only produced in this case.
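The "skip unchanged frames" optimization can be sketched like this (a hypothetical `CachedScorer` class, not the actual Metavision::DftHighFreqScorerAlgorithm; a trivial mean-intensity score stands in for the real DFT-based one): the expensive computation runs only when the input differs from the previously seen frame.

```cpp
#include <cstddef>
#include <vector>

class CachedScorer {
public:
    // Returns true if a new score was computed, false if the input was unchanged.
    bool process(const std::vector<unsigned char> &frame, double &score) {
        if (frame == prev_)
            return false; // identical input: skip the expensive computation
        prev_ = frame;
        ++computations_;
        // Placeholder for the real DFT-based score: mean intensity here.
        double sum = 0.0;
        for (unsigned char v : frame)
            sum += v;
        score = frame.empty() ? 0.0 : sum / double(frame.size());
        return true;
    }
    std::size_t computations() const { return computations_; }

private:
    std::vector<unsigned char> prev_;
    std::size_t computations_ = 0;
};
```

The boolean return value mirrors the pattern used in the stage below: when it is false, the caller can emit a temporal marker instead of a new score frame.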

In the stage’s implementation, we check whether a DFT score has been computed or not. If so, the DFT score frame is sent to the next stages. If not, we send an empty frame to the next stages to ease the synchronization and act as a temporal marker:

if (high_freq_scorer_->process_frame(input_ts, *input_frame_ptr, output_score)) {
    auto output_frame_ptr = frame_pool_.acquire();
    output_frame_ptr->create(header_score_height_, header_score_width_, CV_8UC3);
    const std::string score_str = std::to_string(100 * output_score);
    const cv::Size str_size     = cv::getTextSize(score_str, cv::FONT_HERSHEY_SIMPLEX, 1, 1, 0);
    cv::putText(*output_frame_ptr, score_str,
                cv::Point((output_frame_ptr->cols - str_size.width) / 2,
                          (output_frame_ptr->rows + str_size.height) / 2),
                cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255, 255, 255), 2);
    produce(std::make_pair(input_ts, output_frame_ptr));
} else {
    produce(std::make_pair(input_ts, FramePtr())); // Temporal marker
}
Everything is done in the consuming callback of the stage.

Frame Generation Stage

This stage, implemented in the Metavision::FrameGenerationStage class, uses the Metavision::PeriodicFrameGenerationAlgorithm to generate a frame from the events. The events are directly drawn in the frame upon reception, and the frame is produced (i.e. sent to the next stages) at a fixed frequency in the camera’s clock.
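A minimal sketch of this behavior (a hypothetical `PeriodicFrameSketch` class, not Metavision::PeriodicFrameGenerationAlgorithm; a set of lit pixels stands in for the frame): events are drawn as they arrive, and a frame is emitted every time the event timestamps cross a period boundary, i.e. the pacing comes from the camera's clock, not the wall clock.

```cpp
#include <cstdint>
#include <set>
#include <utility>

class PeriodicFrameSketch {
public:
    explicit PeriodicFrameSketch(std::int64_t period_us)
        : period_us_(period_us), next_emission_t_(period_us) {}

    template <typename EmitCb>
    void process_event(std::uint16_t x, std::uint16_t y, std::int64_t t, EmitCb emit) {
        while (t >= next_emission_t_) {     // period boundary crossed in camera time
            emit(next_emission_t_, frame_); // produce the accumulated frame
            frame_.clear();
            next_emission_t_ += period_us_;
        }
        frame_.insert({x, y}); // draw the event upon reception
    }

private:
    std::int64_t period_us_;
    std::int64_t next_emission_t_;
    std::set<std::pair<std::uint16_t, std::uint16_t>> frame_; // lit pixels
};
```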


This approach is more efficient than the one implemented in Metavision::EventsFrameGeneratorAlgorithm, where the events are buffered before being drawn. However, this latter approach eases synchronization.

Frame Composition Stage

This stage uses the Metavision::FrameCompositionStage class to create a frame made of:

  • the raw events frame, on the left,

  • the blinking events frame, on the right, and

  • the DFT score frame, on the top right.

The previous stages are connected using the Metavision::FrameCompositionStage::add_previous_frame_stage() method:

auto &frame_composer_stage = p.add_stage(std::make_unique<Metavision::FrameCompositionStage>(display_fps, 0));
frame_composer_stage.add_previous_frame_stage(high_freq_score_stage, width + 10, 0, header_score_width,
                                              header_score_height);
frame_composer_stage.add_previous_frame_stage(events_frame_stage, 0, header_score_height + 10, width, height);
frame_composer_stage.add_previous_frame_stage(blinking_frame_generator_stage, width + 10, header_score_height + 10,
                                              width, height);

The composed frame is produced at a fixed frequency in the camera's clock, in contrast to the input frames, which might arrive at different and variable frequencies. Variable frequencies are due to asynchronous algorithms that might produce 0, 1 or N output(s) for each input.
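The synchronization policy can be sketched as follows (a hypothetical `CompositionSketch` class, not the actual Metavision::FrameCompositionStage; std::string stands in for an image and an empty std::optional plays the role of the temporal marker): each source slot keeps its most recent frame and timestamp, and a composite can be produced once every source has reached the composition timestamp.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

class CompositionSketch {
public:
    // A real frame replaces the slot's image; a marker (nullopt) only
    // advances the slot's time, keeping the previous image on screen.
    void update(const std::string &source, std::int64_t t,
                const std::optional<std::string> &frame) {
        auto &slot = slots_[source];
        slot.t     = t;
        if (frame)
            slot.image = *frame;
    }

    // True when every registered source has reached timestamp `t`.
    bool ready(std::int64_t t) const {
        for (const auto &kv : slots_)
            if (kv.second.t < t)
                return false;
        return !slots_.empty();
    }

private:
    struct Slot {
        std::int64_t t = -1;
        std::string image;
    };
    std::map<std::string, Slot> slots_;
};
```

This shows why the empty results produced by the earlier stages matter: without markers, a slow or idle source would stall the composition indefinitely.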

Temporal markers are used to ease the synchronization that is done internally in the Metavision::FrameCompositionStage.

Display Stages

The frame produced by the frame composition stage is displayed in this stage:

Expected Output from Focus Adjustment Application

The printed score corresponds to the percentage of high frequencies (i.e. sharp details) in the image, so the higher the score, the better.

When the option is enabled, the blinking pattern is also displayed in another independent stage: