This Python sample is available with all Metavision Intelligence Plans. The corresponding C++ sample is available only with our Professional plan.
Object Counting using Python
The Analytics API provides algorithms to count small, fast-moving objects.
metavision_counting.py shows how to use the Python bindings of the Metavision Analytics SDK to count and display the objects passing in front of the camera.
Objects are counted per line (by default, 4 horizontal lines are used), and we expect objects to move from top to bottom, as in free fall. Each object is counted when it crosses one of the horizontal lines. The number of lines and their positions can be specified using command-line arguments.
The Metavision Counting sample visualizes the events (from moving objects), the lines on which objects are counted, and the total object counter:
Setup & requirements
To accurately count objects, it is very important to fulfill some conditions:
the camera should be static and the object in focus
there should be good contrast between the background and the objects (using a uniform backlight helps to get good results)
set the camera to have minimal background noise (for example, remove flickering lights)
the events triggered by an object passing in front of the camera should be clustered as much as possible (i.e. no holes in the objects to avoid multiple detections)
Also, we recommend finding the right objective/optics and the right distance to objects, so that the object size seen by the camera is at least 5 pixels. This distance, together with your chosen optics, will define the minimum size of the objects you can count.
Finally, depending on the speed of your objects (especially for high-speed objects), you might have to tune the sensor biases to get better data (make the sensor faster and/or more or less sensitive).
How to start
To start the sample based on the live stream from your camera, run:
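On Linux:

python3 metavision_counting.py

On Windows:

python metavision_counting.py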
To start the sample based on recorded data, provide the full path to a RAW file (here, we use a file from our Sample Recordings):
On Linux:

python3 metavision_counting.py -i 80_balls.raw

On Windows:

python metavision_counting.py -i 80_balls.raw
To check for additional options:
On Linux:

python3 metavision_counting.py -h

On Windows:

python metavision_counting.py -h
The Metavision Counting sample implements the following pipeline:
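In short: events coming from the camera or from a RAW file are optionally pre-processed by filtering algorithms, passed to the counting algorithm, and finally rendered into a frame for display.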
Optional Pre-Processing Filters/Algorithms
To improve the quality of initial data, some pre-processing filters can be applied upstream of the algorithm:
metavision_sdk_core.PolarityFilterAlgorithm is used to select only one polarity to count the objects. Using only one polarity gives the sharpest shapes possible and prevents multiple counts for the same object.
metavision_sdk_cv.TransposeEventsAlgorithm allows changing the orientation of the events. Note that metavision_sdk_analytics.CountingAlgorithm requires objects to move from top to bottom; if your setup doesn't allow it, this filter/algorithm is useful for changing the orientation of the events.
metavision_sdk_cv.ActivityNoiseFilterAlgorithm aims to reduce noise in the event stream that could produce false counts.
These filters are optional: experiment with your setup to get the best results.
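As a sketch of how such a pre-processing chain could be set up (the constructor arguments, the args.transpose flag and the 20 ms noise-filter threshold below are illustrative assumptions, not necessarily the values used in the sample):

from metavision_sdk_core import PolarityFilterAlgorithm
from metavision_sdk_cv import TransposeEventsAlgorithm, ActivityNoiseFilterAlgorithm

# Optional pre-processing chain applied before the counting algorithm
# (width/height come from the camera geometry)
filtering_algorithms = []
filtering_algorithms.append(PolarityFilterAlgorithm(polarity=0))  # keep a single polarity (assumed value)
if args.transpose:  # hypothetical command-line flag
    filtering_algorithms.append(TransposeEventsAlgorithm())
filtering_algorithms.append(ActivityNoiseFilterAlgorithm(width, height, 20000))  # assumed 20 ms threshold

# Shared event buffer reused by the chain (see the processing loop below)
events_buf = filtering_algorithms[0].get_empty_output_buffer()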
metavision_sdk_analytics.CountingAlgorithm is the main algorithm in this sample. It is configured to count objects of a given size passing from top to bottom in front of the camera.
To create an instance of metavision_sdk_analytics.CountingAlgorithm, we first need to gather some configuration information, such as the size of the objects to count, their speed and their distance from the camera. The size of those objects in the camera's image plane depends on the optics used, their distance to the camera and their speed. The metavision_sdk_analytics.CountingCalibration class computes the corresponding algorithm parameters from this information, so that they can be passed to the metavision_sdk_analytics.CountingAlgorithm:
# Counting Calibration (Get optimal algorithm parameters)
cluster_ths, accumulation_time_us = CountingCalibration.calibrate(
    width=width, height=height,
    object_min_size=args.object_min_size,
    object_average_speed=args.object_average_speed,
    distance_object_camera=args.distance_object_camera)

Once we have a valid calibration, we can create an instance of the metavision_sdk_analytics.CountingAlgorithm.
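As a sketch, assuming the constructor takes the sensor size together with the two calibrated parameters (the exact signature is not reproduced here):

from metavision_sdk_analytics import CountingAlgorithm

# Create the counting algorithm with the calibrated parameters (assumed constructor signature)
counting_algo = CountingAlgorithm(width=width, height=height,
                                  cluster_ths=cluster_ths,
                                  accumulation_time_us=accumulation_time_us)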
metavision_sdk_analytics.CountingAlgorithm relies on lines of interest to count the objects passing in front of the camera and produces four pieces of information through its output callback: (ts, global_counter, last_count_ts, local_counters).
Local counters (i.e. one per line) are incremented every time an object crosses their corresponding line, while the global counter is the maximum of all the local counters. These counters are not reset between two calls but updated throughout the sequence. The algorithm is implemented in an asynchronous way, which allows retrieving new counter estimations at a fixed refresh rate rather than getting them for each processed buffer of events. As these counters are mainly used for visualization purposes, the asynchronous approach has proven to be more efficient in this case.
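For example, a minimal output callback can be registered to react to new counting results; the four arguments below come from the description above, while the registration via set_output_callback and the print are only a sketch:

# Sketch: print the global counter each time new counting results are produced
def counting_cb(ts, global_counter, last_count_ts, local_counters):
    print("ts: {} us, counted objects: {}".format(ts, global_counter))

counting_algo.set_output_callback(counting_cb)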
At this step, we generate an image that will be displayed while the sample is running. The following are displayed in this frame:
the lines of interest used by the algorithm
the global counter
The metavision_sdk_core.OnDemandFrameGenerationAlgorithm class allows buffering input events (i.e. metavision_sdk_core.OnDemandFrameGenerationAlgorithm.process_events()) and generating an image on demand. Once the event image has been generated, the counting-related overlays (i.e. lines and counter) are drawn on top of it. As the output images are generated at the same frequency as the counts produced by the metavision_sdk_analytics.CountingAlgorithm, the image generation is done in the metavision_sdk_analytics.CountingAlgorithm's output callback.
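Extending the callback sketched above, the frame generation could look roughly like this (the OnDemandFrameGenerationAlgorithm constructor and the generate() call are assumptions; drawing the lines and counter is left out):

import numpy as np
from metavision_sdk_core import OnDemandFrameGenerationAlgorithm

# Buffers input events (fed via process_events() in the loop below) and renders them on demand
events_frame_gen = OnDemandFrameGenerationAlgorithm(width, height)  # assumed constructor
output_img = np.zeros((height, width, 3), np.uint8)

def counting_cb(ts, global_counter, last_count_ts, local_counters):
    # Render the events buffered so far into output_img, up to timestamp ts (assumed signature)
    events_frame_gen.generate(ts, output_img)
    # ... draw the counting lines and the global counter on top of output_img here ...

counting_algo.set_output_callback(counting_cb)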
The event processing is done while iterating over the events returned by the EventsIterator (mv_iterator):

# Process events
for evs in mv_iterator:
    # Dispatch system events to the window
    EventLoop.poll_and_dispatch()

    # Process events
    if filtering_algorithms:
        filtering_algorithms[0].process_events(evs, events_buf)
        for filter in filtering_algorithms[1:]:
            filter.process_events_(events_buf)
        counting_gui.process_events(events_buf)
        counting_algo.process_events(events_buf)
    else:
        counting_gui.process_events(evs)
        counting_algo.process_events(evs)

    if counting_gui.should_close():
        break
Finally, the generated frame is displayed on the screen. The following image shows an example of output: