Note

This C++ sample is available only with our Professional plan. The corresponding Python sample is available with all Metavision Intelligence Plans.

Particle Size Measurement using C++

The Analytics API provides algorithms to both count and estimate the size of fast-moving objects.

The sample metavision_psm shows how to count objects, estimate their sizes and display them as they pass in front of the camera.

We expect objects to move from top to bottom, as in free fall. However, command-line arguments allow you to rotate the camera 90 degrees clockwise in case particles move horizontally in the FOV, or to specify that objects move upwards instead of downwards. An object is detected when it crosses one of the horizontal lines (by default, 6 lines spaced 20 pixels apart are used). The number of lines and their positions can be specified using command-line arguments. Detections over the lines are then combined to produce a size estimate once the object has crossed the last line.

Expected Output

The Metavision Particle Size Measurement sample visualizes the events (from moving objects), the lines on which objects are counted, the detected particles and their estimated sizes, the histogram of all measured sizes, and the total object counter.

Setup & requirements

To accurately count and estimate the size of objects, it is very important to fulfill some conditions:

  • the camera should be static and the objects in focus

  • there should be good contrast between the background and the objects (using a uniform backlight helps to get good results)

  • set the camera to have minimal background noise (for example, remove flickering lights)

  • the events triggered by an object passing in front of the camera should be clustered as much as possible (i.e. no holes in the objects to avoid multiple detections)

Also, we recommend finding the right objective/optics and the right distance to the objects, so that an object seen by the camera spans at least 5 pixels. This, together with your chosen optics, defines the minimum object size you can count.
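As a rough, hypothetical sizing example (the focal length, pixel pitch and working distance below are illustrative assumptions, not values from the sample), the minimum countable object size can be estimated with a thin-lens approximation:

#include <iostream>

int main() {
    const double focal_length_mm = 25.0;  // assumed lens focal length
    const double distance_mm     = 500.0; // assumed distance from lens to particles
    const double pixel_pitch_mm  = 0.015; // assumed sensor pixel pitch (15 um)

    // Thin-lens approximation: magnification = f / (d - f)
    const double magnification = focal_length_mm / (distance_mm - focal_length_mm); // ~0.053

    // Smallest countable object: 5 pixels projected back into object space
    const double min_object_size_mm = 5.0 * pixel_pitch_mm / magnification; // ~1.4 mm
    std::cout << "Minimum countable object size: " << min_object_size_mm << " mm\n";
    return 0;
}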

Finally, depending on the speed of your objects (especially for high-speed objects), you might have to tune the sensor biases to get better data (make the sensor faster and/or more or less sensitive).

How to start

First, compile the sample as described in this tutorial.

To start the sample based on the live stream from your camera, run:

Linux

./metavision_psm

Windows

metavision_psm.exe

To start the sample based on recorded data, provide the full path to a RAW file (here, we use a file from our Sample Recordings):

Linux

./metavision_psm -i 195_falling_particles.raw

Windows

metavision_psm.exe -i 195_falling_particles.raw

To check for additional options:

Linux

./metavision_psm -h

Windows

metavision_psm.exe -h

Code Overview

Pipeline

Metavision Particle Size Measurement sample implements the following pipeline:

[Image: the sample's processing pipeline (psm_pipeline.png)]

Optional Pre-Processing Filters/Algorithms

To improve the quality of the input data, some pre-processing filters can be applied upstream of the algorithm: a polarity filter, an event-transposition filter (for horizontally moving particles) and an activity noise filter, corresponding to the polarity_filter_, transpose_events_filter_ and activity_noise_filter_ members used in the camera callback below. A construction sketch is shown after the note.

Note

These filters are optional: experiment with your setup to get the best results.
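As a minimal sketch, here is how these three filters could be instantiated. The constructor arguments (kept polarity, noise time window) are assumptions based on typical Metavision SDK usage, not values from the sample; check the headers of your SDK version for the exact signatures:

#include <memory>

#include <metavision/sdk/core/algorithms/polarity_filter_algorithm.h>
#include <metavision/sdk/core/algorithms/transpose_events_algorithm.h>
#include <metavision/sdk/cv/algorithms/activity_noise_filter_algorithm.h>

// Keep only one polarity (e.g. negative events from dark particles on a bright backlight)
auto polarity_filter = std::make_unique<Metavision::PolarityFilterAlgorithm>(0);

// Swap x and y when particles move horizontally in the FOV
auto transpose_filter = std::make_unique<Metavision::TransposeEventsAlgorithm>();

// Drop isolated events with no recent neighbor within a 20 ms time window (assumed threshold)
auto activity_noise_filter = std::make_unique<Metavision::ActivityNoiseFilterAlgorithm<>>(
    sensor_width, sensor_height, 20000);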

Particle Size Measurement Algorithm

This is the main algorithm in this sample. The algorithm is configured to count objects and estimate the histogram of their sizes while they’re passing from top to bottom in front of the camera. To create an instance of Metavision::PsmAlgorithm, we first need to gather some configuration information, such as the approximate size of the objects to count, their speed and their distance from the camera, to find the right algorithm parameters.

Even though a specific accumulation time might provide a well-defined 2D shape, the algorithm doesn't have direct access to it, because it only sees events through the lines of interest. Ideally, we would like the object to advance one pixel at a time, so that in the end the algorithm could process all the information of this well-defined 2D shape. This is, however, not compatible with a large accumulation time. The accumulation time must therefore be decoupled from the time interval between two line processings. That's why we have two temporal parameters:

  • Precision time: time interval between two line processings. It should roughly be equal to the inverse of the object speed (in pixels/us).

  • Accumulation time: temporal size of the event buffers accumulated on a line for one line processing. It should be large enough for the object shape to be well-defined (see the numeric sketch after this list).
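For instance, with an assumed particle speed of 0.1 pixel/us (an illustrative value, not one from the sample), the two parameters could be derived as follows; note how their ratio matches the bitsets_buffer_size computation in the sample code further below:

// Illustrative values only; the particle speed is an assumption, not a value from the sample.
const double object_speed_px_per_us = 0.1; // assumed speed: 0.1 pixel/us

// Precision time: time for the object to advance ~1 pixel (-> precision_time_us_ in the sample)
const int precision_time_us = static_cast<int>(1.0 / object_speed_px_per_us); // 10 us

// Accumulation time: e.g. 20 line processings, long enough for a well-defined shape
const int accumulation_time_us = 20 * precision_time_us; // 200 us

// Same ratio as the bitsets_buffer_size computation in the sample code below
const int bitsets_buffer_size = accumulation_time_us / precision_time_us; // = 20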

The lines should be close enough so that there’s no ambiguity when matching together detections done on several lines.

Once we have a valid calibration, we can create an instance of Metavision::PsmAlgorithm:

  // PsmAlgorithm
  std::vector<int> rows; // Ordinates of the line counters
  rows.reserve(num_lines_);
  const int y_line_step = (max_y_line_ - min_y_line_) / (num_lines_ - 1); // assumes num_lines_ >= 2
  for (int i = 0; i < num_lines_; ++i) {
      const int line_ordinate = min_y_line_ + y_line_step * i;
      rows.push_back(line_ordinate);
  }

  const int bitsets_buffer_size = static_cast<int>(accumulation_time_us_ / precision_time_us_);
  const int num_clusters_ths = 7; ///< Min number of cluster measurements below which a particle is considered as noise
  Metavision::LineClusterTrackingConfig detection_config(
      static_cast<unsigned int>(precision_time_us_), bitsets_buffer_size, cluster_ths_, num_clusters_ths,
      min_inter_clusters_distance_, learning_rate_, max_dx_allowed_, 0);

  Metavision::LineParticleTrackingConfig tracking_config(!is_going_up_, dt_first_match_ths_,
                                                         std::tan(max_angle_deg_ * 3.14 / 180.0), matching_ths_);

  const int num_process_before_matching = 3; // Accumulate particle detections during n process
                                             // before actually matching them to existing trajectories

  psm_tracker_ = std::make_unique<Metavision::PsmAlgorithm>(sensor_width, sensor_height, rows, detection_config,
                                                            tracking_config, num_process_before_matching);

The Metavision::PsmAlgorithm relies on lines of interest to count and estimate the size of the objects passing in front of the camera, and produces Metavision::LineParticleTrackingOutput and Metavision::LineClusterWithId as output. A Metavision::LineParticleTrackingOutput contains a global counter as well as the detected objects' sizes and trajectories. The global counter is incremented when an object has been successfully tracked over several lines of interest.

The algorithm is implemented in an asynchronous way, which allows new estimates to be retrieved at a fixed refresh rate rather than after each processed buffer of events. Not only is this more efficient for visualization purposes, but it also makes it easier to process results only when a new particle has been detected.
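The excerpts below do not show how the output callback is registered. As a hedged sketch, assuming the PsmAlgorithm follows the set_output_callback convention of the SDK's other asynchronous algorithms (verify the exact method name and signature against your SDK version), the registration would look roughly like this:

// Hedged sketch: set_output_callback is assumed from the SDK's
// asynchronous-algorithm convention, not confirmed by this excerpt.
psm_tracker_->set_output_callback(
    [this](const Metavision::timestamp &ts, Metavision::LineParticleTrackingOutput &tracks,
           Metavision::LineClustersOutput &line_clusters) { psm_callback(ts, tracks, line_clusters); });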

Frame Generation

At this step, we generate the image that is displayed while the sample is running. This frame shows:

  • the events

  • the lines of interest used by the algorithm

  • the global counter

  • the reconstructed object contours and the estimated sizes

  • the histogram of the object sizes

Here, we are not using the Metavision::Pipeline utility class to implement the pipeline. As a consequence, to ease the synchronization between the events and the size measurement results, the Metavision::OnDemandFrameGenerationAlgorithm class is used. This class buffers input events (via Metavision::OnDemandFrameGenerationAlgorithm::process_events()) and generates an image on demand (via Metavision::OnDemandFrameGenerationAlgorithm::generate()), as sketched below. Once the event image has been generated, drawing helpers (the counting and histogram drawing helpers seen in the callback below) are called to add the overlays.
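A minimal sketch of this buffer-then-generate pattern (the constructor arguments are assumptions; check the SDK header for the exact signature):

#include <opencv2/core.hpp>
#include <metavision/sdk/core/algorithms/on_demand_frame_generation_algorithm.h>

// Assumed constructor arguments: sensor geometry
Metavision::OnDemandFrameGenerationAlgorithm frame_gen(sensor_width, sensor_height);

// In the camera callback: buffer the incoming CD events
frame_gen.process_events(begin, end);

// In the PSM output callback: render all buffered events up to timestamp ts
cv::Mat frame;
frame_gen.generate(ts, frame);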

As the output images are generated at the same frequency as the Metavision::LineParticleTrackingOutput produced by the Metavision::PsmAlgorithm, the image generation is done in the Metavision::PsmAlgorithm’s output callback:

void Pipeline::psm_callback(const timestamp &ts, LineParticleTrackingOutput &tracks,
                            LineClustersOutput &line_clusters) {
    if (counting_drawing_helper_) {
        back_img_.create(events_img_roi_.height, events_img_roi_.width + histogram_img_roi_.width, CV_8UC3);
        if (!histogram_drawing_helper_) {
            draw_events_and_line_particles(ts, back_img_, tracks, line_clusters);
        } else {
            for (auto it = tracks.buffer.cbegin(); it != tracks.buffer.cend(); ++it) {
                size_t id_bin;
                if (Metavision::value_to_histogram_bin_id(hist_bins_boundaries_, it->particle_size, id_bin))
                    hist_counts_[id_bin]++;
            }

            if (!tracks.buffer.empty()) {
                int count = 0;
                for (size_t k = 0; k < hist_counts_.size(); k++)
                    count += hist_counts_[k];
                if (count != 0) {
                    MV_LOG_INFO() << "Histogram : " << count;
                    std::stringstream ss;
                    for (size_t k = 0; k < hist_counts_.size(); k++) {
                        if (hist_counts_[k] != 0)
                            ss << hist_bins_centers_[k] << ":" << hist_counts_[k] << " ";
                    }
                    MV_LOG_INFO() << ss.str();
                }
            }
                    MV_LOG_INFO() << ss.str();
                }
            }
            back_img_.create(events_img_roi_.height, events_img_roi_.width + histogram_img_roi_.width, CV_8UC3);
            auto events_img = back_img_(events_img_roi_);
            draw_events_and_line_particles(ts, events_img, tracks, line_clusters);
            auto hist_img = back_img_(histogram_img_roi_);
            histogram_drawing_helper_->draw(hist_img, hist_counts_);
        }

        if (video_writer_)
            video_writer_->write_frame(ts, back_img_);

        if (window_) {
            window_->show(back_img_);
        }
    }

    current_time_callback(ts);
    increment_callback(ts, tracks.global_counter);
    inactivity_callback(ts, tracks.last_count_ts);
}

while the buffering of the events is done in the Metavision::Camera’s output callback:

void Pipeline::camera_callback(const Metavision::EventCD *begin, const Metavision::EventCD *end) {
    if (!is_processing_)
        return;

    // Adjust iterators to make sure we only process a given range of timestamps [process_from_, process_to_]
    // Get iterator to the first element greater or equal than process_from_
    begin = std::lower_bound(begin, end, process_from_,
                             [](const Metavision::EventCD &ev, timestamp ts) { return ev.t < ts; });

    // Get iterator to the first element greater than process_to_
    if (process_to_ >= 0)
        end = std::lower_bound(begin, end, process_to_,
                               [](const Metavision::EventCD &ev, timestamp ts) { return ev.t <= ts; });
    if (begin == end)
        return;

    /// Apply filters
    bool is_first = true;
    apply_filter_if_enabled(polarity_filter_, begin, end, buffer_filters_, is_first);
    apply_filter_if_enabled(transpose_events_filter_, begin, end, buffer_filters_, is_first);
    apply_filter_if_enabled(activity_noise_filter_, begin, end, buffer_filters_, is_first);

    /// Process filtered events
    apply_algorithm_if_enabled(events_frame_generation_, begin, end, buffer_filters_, is_first);
    apply_algorithm_if_enabled(psm_tracker_, begin, end, buffer_filters_, is_first);
}
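For completeness, here is a hedged sketch of how such a callback is typically registered through the Metavision::Camera API (the sample's actual wiring code is not shown in this excerpt):

// Hedged sketch: wiring the camera's CD event stream to Pipeline::camera_callback.
Metavision::Camera camera = Metavision::Camera::from_first_available();
camera.cd().add_callback([this](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
    camera_callback(begin, end);
});
camera.start();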

Note

Different approaches could be considered for more advanced applications.

Note

While filtered events are used by the Metavision::PsmAlgorithm, raw events are used for the display.

Display

Finally, the generated frame is displayed on the screen. The following image shows an example of output:

[Image: expected output from the Metavision Particle Size Measurement sample]