Simple Motion Capture C++
Overview
The sample metavision_simple_motion_capture demonstrates how to use the Metavision SDK to detect, track, and triangulate the 3D positions of infrared (IR) LEDs using a synchronized multi-camera setup. The sample implements a basic motion capture system, where:
Two slave cameras, configured to perceive the high-frequency LED patterns, capture the motion of an object equipped with IR LEDs.
A master camera, configured to see scene motion, is used to visualize the positions of the LEDs in the event stream.
A 3D viewer provides real-time visualization of the LEDs’ positions in 3D space.
Motion capture systems track the movement of objects or people by following the positions of light sources (e.g., IR LEDs). Event-based cameras offer the advantage of high-speed, low-latency tracking, making them ideal for capturing fast movements with minimal data. Additionally, the LED modulation scheme used for detection and tracking—enabled by the high temporal resolution of event-based cameras—allows for simpler and more efficient data association compared to traditional frame-based methods.
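To give a feel for this modulation-based association, the snippet below is a minimal, self-contained sketch (it is not the detection algorithm used in this sample) that estimates the blinking frequency observed at a single pixel from the timestamps of its events; an LED modulated at a known frequency can then be identified by matching this estimate.
#include <cstdint>
#include <iostream>
#include <vector>

// Estimate the blinking frequency seen at one pixel from the timestamps (in microseconds)
// of its transitions of a single polarity. For a 1 kHz LED, consecutive positive
// transitions should be roughly 1000 us apart.
double estimate_frequency_hz(const std::vector<std::int64_t> &timestamps_us) {
    if (timestamps_us.size() < 2)
        return 0.0;
    const double total_us  = static_cast<double>(timestamps_us.back() - timestamps_us.front());
    const double period_us = total_us / static_cast<double>(timestamps_us.size() - 1);
    return 1e6 / period_us;
}

int main() {
    // Synthetic timestamps mimicking a pixel that watches a 1 kHz LED.
    std::vector<std::int64_t> ts;
    for (int i = 0; i < 10; ++i)
        ts.push_back(i * 1000); // one positive transition every 1000 us
    std::cout << "Estimated frequency: " << estimate_frequency_hz(ts) << " Hz" << std::endl;
    return 0;
}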
Note
In this sample, the cameras are assumed to be synchronized. The master camera is used for event visualization, and the slave cameras are used for motion capture.
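For reference, synchronization roles are usually configured through the HAL synchronization facility before the streams are started. The sketch below is illustrative only and is not part of this sample; the facility and header names match recent SDK versions (SDK 4.x uses metavision/sdk/driver/camera.h, SDK 5 moved the Camera class to metavision/sdk/stream/camera.h), and the serial numbers are the hypothetical ones reused from the online-mode example further down.
#include <metavision/hal/facilities/i_camera_synchronization.h>
#include <metavision/sdk/driver/camera.h> // SDK 4.x header path; adjust for your SDK version

int main() {
    // Hypothetical serial numbers, reused from the online-mode example below.
    Metavision::Camera master = Metavision::Camera::from_serial("00051299");
    Metavision::Camera slave  = Metavision::Camera::from_serial("00050697");

    // Configure the synchronization roles through the HAL facility.
    master.get_device().get_facility<Metavision::I_CameraSynchronization>()->set_mode_master();
    slave.get_device().get_facility<Metavision::I_CameraSynchronization>()->set_mode_slave();

    // Start the slave first so that it is ready when the master starts emitting the sync signal.
    slave.start();
    master.start();

    // ... stream and process events ...

    master.stop();
    slave.stop();
    return 0;
}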
The source code of this sample can be found in <install-prefix>/share/metavision/sdk/cv3d/cpp_samples/metavision_simple_motion_capture
if you installed the Metavision SDK via installer or packages. For other installation methods, check
Path of Samples.
Expected Output
The following images show the expected output from this sample:

The first image illustrates the event stream visualization of the triangulated IR LEDs.

The second image demonstrates how the 3D viewer displays the real-time 3D positions of the LEDs.
How to start
Compiling the sample
To compile the sample, follow the steps in this tutorial.
Running the sample
The sample can be run in either offline or online mode, depending on whether you’re using pre-recorded event data or live camera streams.
Offline mode
To run the sample using recorded event data, provide:
Event recordings: The path to the event-based recording (RAW or HDF5) for each camera (master camera first).
Intrinsic calibration files: These JSON files contain parameters like focal length and lens distortion for each camera (master camera first, check this page to learn how to calibrate intrinsics and generate these files).
Extrinsic calibration files: These JSON files define the position and orientation of each slave camera relative to the master camera (check this page to learn how to calibrate extrinsics and generate these files); together with the intrinsics, they are what makes triangulation possible, as illustrated in the sketch below.
Ogre 3D scene file: An XML file that defines scene elements such as lights, materials, and object tracking nodes.
Adjacency graph: A JSON file describing the structure of the object being tracked. It lists which LEDs are connected, forming the object’s “skeleton.”
Note
Ensure the camera paths are provided consistently across the different options (the master camera should always come first and the slaves in the same order).
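The intrinsics map pixels to normalized image rays, while the extrinsics place the slave cameras relative to the master; combining the two allows the 2D LED detections from the slave cameras to be turned into 3D points. The snippet below is only a rough illustration of that step, not the implementation used in the sample: it triangulates a single point with the midpoint method, assuming undistorted, normalized image coordinates and world-to-camera poses (R, t).
#include <array>
#include <iostream>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>; // row-major rotation matrix

Vec3 add(const Vec3 &a, const Vec3 &b) { return {a[0] + b[0], a[1] + b[1], a[2] + b[2]}; }
Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
Vec3 scale(const Vec3 &a, double s) { return {a[0] * s, a[1] * s, a[2] * s}; }
double dot(const Vec3 &a, const Vec3 &b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Multiply by the transpose of R, i.e. rotate from the camera frame back into the world frame.
Vec3 rotate_transposed(const Mat3 &R, const Vec3 &v) {
    return {R[0][0] * v[0] + R[1][0] * v[1] + R[2][0] * v[2],
            R[0][1] * v[0] + R[1][1] * v[1] + R[2][1] * v[2],
            R[0][2] * v[0] + R[1][2] * v[1] + R[2][2] * v[2]};
}

// A camera pose given as world-to-camera: x_cam = R * x_world + t.
struct Pose {
    Mat3 R;
    Vec3 t;
};

// Midpoint triangulation: back-project the two normalized observations (u, v, 1) into world rays
// and return the point halfway between the closest points of the two rays.
Vec3 triangulate_midpoint(const Pose &cam1, double u1, double v1, const Pose &cam2, double u2, double v2) {
    const Vec3 c1 = scale(rotate_transposed(cam1.R, cam1.t), -1.0); // camera centers in world frame
    const Vec3 c2 = scale(rotate_transposed(cam2.R, cam2.t), -1.0);
    const Vec3 d1 = rotate_transposed(cam1.R, {u1, v1, 1.0}); // ray directions in world frame
    const Vec3 d2 = rotate_transposed(cam2.R, {u2, v2, 1.0});

    const Vec3 w = sub(c1, c2);
    const double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    const double d = dot(d1, w), e = dot(d2, w);
    const double denom = a * c - b * b; // close to 0 when the rays are (nearly) parallel
    const double s = (b * e - c * d) / denom;
    const double r = (a * e - b * d) / denom;

    const Vec3 p1 = add(c1, scale(d1, s));
    const Vec3 p2 = add(c2, scale(d2, r));
    return scale(add(p1, p2), 0.5);
}

int main() {
    // Two cameras looking down the Z axis, one meter apart along X, observing a point at (0.5, 0, 2).
    Pose cam1, cam2;
    cam1.R = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    cam1.t = {0, 0, 0};
    cam2.R = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    cam2.t = {-1, 0, 0};

    const Vec3 p = triangulate_midpoint(cam1, 0.25, 0.0, cam2, -0.25, 0.0);
    std::cout << "Triangulated point: " << p[0] << " " << p[1] << " " << p[2] << std::endl; // 0.5 0 2
    return 0;
}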
Ogre 3D Scene File Explanation
The Ogre 3D scene file is an XML file that defines various elements used to render the scene, such as lights and object tracking nodes. Here’s a breakdown of the key components:
Tracking node: This node serves as the root for positioning the LEDs in 3D space.
Decor node: Defines the scene’s lighting conditions, including ambient and point lights.
External node: Specifies external resources (e.g., materials) used to render objects in the scene, like the LEDs.
Here is an example of an Ogre 3D scene file:
<?xml version="1.0" encoding="UTF-8"?>
<!-- exporter: blender2ogre 0.8.3 -->
<!-- export_time: Wed, 27 Mar 2024 14:53:24 +0000 -->
<scene author="Prophesee" formatVersion="1.1" >
    <nodes >
        <node name="tracking_node" >
            <position x="0.000000" y="0.000000" z="0.000000" />
            <rotation qw="0.707107" qx="0.707107" qy="0.000000" qz="0.000000" />
            <scale x="1.000000" y="1.000000" z="1.000000" />
        </node>
        <node name="decor" >
            <position x="0.000000" y="0.000000" z="0.000000" />
            <rotation qw="1.000000" qx="0.000000" qy="0.000000" qz="0.000000" />
            <scale x="1.000000" y="1.000000" z="1.000000" />
            <node name="light" >
                <position x="0.509271" y="0.523711" z="1.433111" />
                <rotation qw="0.790329" qx="0.408568" qy="-0.203693" qz="-0.408610" />
                <scale x="1.000000" y="1.000000" z="1.000000" />
                <light name="light" type="point">
                    <colourDiffuse r="0.9" g="0.8" b="0.8" />
                    <colourSpecular r="0.5" g="0.5" b="0.5" />
                </light>
            </node>
        </node>
    </nodes>
    <externals >
        <item type="material" >
            <file name="sphere_material.material" />
        </item>
    </externals>
    <environment >
        <colourBackground b="0.050876" g="0.050876" r="0.050876" />
    </environment>
</scene>
Adjacency Graph File Explanation
The adjacency graph describes how the LEDs are connected, forming the structure of the object being tracked. Each LED is represented as a node, and the connections between them are described as edges between these nodes. This information is critical for reconstructing the object or skeleton from the LED positions.
Here is an example of an adjacency graph JSON file:
{
    "0": [2, 3, 7, 9, 11],
    "1": [4],
    "2": [0, 3, 11],
    "3": [0, 2, 5, 6, 11],
    "4": [1, 5],
    "5": [3, 4, 6],
    "6": [3, 5, 7],
    "7": [0, 6, 9],
    "8": [9, 10],
    "9": [0, 7, 8],
    "10": [8],
    "11": [0, 2, 3]
}
This adjacency graph indicates that LED 0 is connected to LEDs 2, 3, 7, 9, and 11, forming the structure for the tracked object.
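As an illustration of how such a file could be consumed, here is a minimal sketch that loads the adjacency graph into a map from LED id to the ids of its neighbors. It assumes the nlohmann/json library is available; the sample itself may parse the file differently.
#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <vector>

#include <nlohmann/json.hpp> // assumption: nlohmann/json is available

// Load the adjacency graph into a map from LED id to the ids of its connected LEDs.
std::map<int, std::vector<int>> load_adjacency_graph(const std::string &path) {
    std::ifstream ifs(path);
    nlohmann::json graph_json;
    ifs >> graph_json;

    std::map<int, std::vector<int>> graph;
    for (const auto &[led_id, neighbors] : graph_json.items())
        graph[std::stoi(led_id)] = neighbors.get<std::vector<int>>();
    return graph;
}

int main() {
    const auto graph = load_adjacency_graph("adjacency-graph.json");
    for (const auto &[led_id, neighbors] : graph)
        std::cout << "LED " << led_id << " is connected to " << neighbors.size() << " other LEDs" << std::endl;
    return 0;
}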
The easiest way to run the sample is to use the dataset we provide in our Sample Recordings. Assuming the dataset was extracted locally, here’s an example of how to run the sample in offline mode:
Linux
./metavision_simple_motion_capture -i record-master.raw record-slave-left.raw record-slave-right.raw \
--calibration-paths master-camera-geometry.json slave-left-camera-geometry.json slave-camera-right-geometry.json \
--ext-calibration-paths extrinsics-master-slave-left.json extrinsics-master-slave-right.json \
--3d-scene-path skeleton-scene.scene \
--adj-path adjacency-graph.json
Windows
metavision_simple_motion_capture.exe -i record-master.raw record-slave-left.raw record-slave-right.raw \
--calibration-paths master-camera-geometry.json slave-left-camera-geometry.json slave-camera-right-geometry.json \
--ext-calibration-paths extrinsics-master-slave-left.json extrinsics-master-slave-right.json \
--3d-scene-path skeleton-scene.scene \
--adj-path adjacency-graph.json
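If you want to inspect these recordings programmatically before feeding them to the sample, a recording can be opened with the SDK’s Camera class. The sketch below is a standalone example (not extracted from the sample) that simply counts the CD events of the master recording; the header path shown is the SDK 4.x one and may differ in other SDK versions.
#include <chrono>
#include <cstddef>
#include <iostream>
#include <thread>

#include <metavision/sdk/base/events/event_cd.h>
#include <metavision/sdk/driver/camera.h> // SDK 4.x header path; adjust for your SDK version

int main() {
    // Open the master recording from the dataset used in the commands above.
    Metavision::Camera cam = Metavision::Camera::from_file("record-master.raw");

    // Count the CD events; this is where per-camera processing would be plugged in.
    std::size_t num_events = 0;
    cam.cd().add_callback([&num_events](const Metavision::EventCD *begin, const Metavision::EventCD *end) {
        num_events += static_cast<std::size_t>(end - begin);
    });

    cam.start();
    while (cam.is_running()) // the camera stops by itself once the whole file has been decoded
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    cam.stop();

    std::cout << num_events << " CD events in the master recording" << std::endl;
    return 0;
}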
Online mode
To run the sample with a live multi-camera setup, provide the serial numbers and settings files for each camera, along with the calibration and scene files.
Note
The master camera must always be listed first in the camera serials, settings, and calibration files.
Here’s an example of how to run the sample using live cameras (assuming we have cameras with serial numbers 00051299, 00050697, and 00051165):
Linux
./metavision_simple_motion_capture -s 00051299 00050697 00051165 \
--camera-settings-paths master-camera-settings.json slave-camera-settings.json slave-camera-settings.json \
--calibration-paths master-camera-geometry.json slave-left-camera-geometry.json slave-camera-right-geometry.json \
--ext-calibration-paths extrinsics-master-slave-left.json extrinsics-master-slave-right.json \
--3d-scene-path skeleton-scene.scene \
--adj-path adjacency-graph.json
Windows
metavision_simple_motion_capture.exe -s 00051299 00050697 00051165 \
--camera-settings-paths master-camera-settings.json slave-camera-settings.json slave-camera-settings.json \
--calibration-paths master-camera-geometry.json slave-left-camera-geometry.json slave-camera-right-geometry.json \
--ext-calibration-paths extrinsics-master-slave-left.json extrinsics-master-slave-right.json \
--3d-scene-path skeleton-scene.scene \
--adj-path adjacency-graph.json
Additional options
To explore additional options, such as tweaking the tracking settings, you can check the command line help:
Linux
./metavision_simple_motion_capture -h
Windows
metavision_simple_motion_capture.exe -h
Next steps
For further information, you can check the following resources:
Sample Recordings for pre-recorded datasets.
Calibration to learn how to calibrate your cameras.