Active Marker 3D Tracking using C++

Overview

The Computer Vision API can be used to detect and track active markers in 3D.

The sample metavision_active_marker_3d_tracking shows how to implement a pipeline for detecting and tracking an active marker in 3D.

The source code of this sample can be found in <install-prefix>/share/metavision/sdk/cv3d/cpp_samples/metavision_active_marker_3d_tracking when installing Metavision SDK from installer or packages. For other deployment methods, check the page Path of Samples.

Expected Output

The sample displays, in 3D, the trajectory of the active marker with respect to the camera (or, conversely, the trajectory of the camera with respect to the marker).

How to start

First, compile the sample as described in this tutorial.

To run the sample, you need to provide:

  • a JSON file containing the description of the active marker you want to track

  • a JSON file containing the intrinsics calibration of the event-based camera

  • a 3D model file representing the active marker to be tracked; the model must be provided in OGRE's format (i.e. a .mesh file)

  • a camera settings file that tunes the biases to values suited to this specific application (only required if you want to run the sample on the live stream from your camera)

The easiest way to try this sample is to launch it on the active_marker.raw file that we provide in our Sample Recordings. Download the file active_marker.zip, which contains both the RAW file and the active marker description JSON file. Assuming you extracted the archive next to your executable, you can launch the sample with the following command:

Linux

./metavision_active_marker_3d_tracking -i active_marker.raw -a rectangular_active_marker.json -c calibration.json --am-3d-model-path rectangular_active_marker.mesh

Windows

metavision_active_marker_3d_tracking.exe -i active_marker.raw -a rectangular_active_marker.json -c calibration.json --am-3d-model-path rectangular_active_marker.mesh

If you want to run the sample on the live stream from your camera, you will need to provide a camera settings file with bias values tuned for this specific application. To do so, use the command line option --input-camera-config (or -j) with a JSON file containing the settings. To create such a JSON file, check the camera settings section.

Here is how to launch the sample with a JSON camera settings file:

Linux

./metavision_active_marker_3d_tracking -a <path_to_the_active_marker_description> -j <path_to_a_camera_settings_file> -c <path_to_a_calib_file> --am-3d-model-path <path_to_the_3d_mesh_file>

Windows

metavision_active_marker_3d_tracking.exe -a <path_to_the_active_marker_description> -j <path_to_a_camera_settings_file> -c <path_to_a_calib_file> --am-3d-model-path <path_to_the_3d_mesh_file>

To check for additional options:

Linux

./metavision_active_marker_3d_tracking -h

Windows

metavision_active_marker_3d_tracking.exe -h

Code Overview

The Metavision Active Marker 3D Tracking sample implements the following data flow:

(Figure: Metavision Active Marker 3D Tracking data flow)
  • The first step is to start the camera or open a recording. In the case of a live stream, the camera settings file is loaded to properly configure the camera for this application. Note that the bias range check is disabled to allow configuring the camera with bias values specific to this application:

Metavision::Camera camera;
if (opt_config->event_file_path.empty()) {
    Metavision::DeviceConfig device_config;
    device_config.enable_biases_range_check_bypass(true);
    camera = Metavision::Camera::from_first_available(device_config);
    camera.load(opt_config->cam_config_path);
} else {
    const auto cam_config = Metavision::FileConfigHints().real_time_playback(opt_config->realtime_playback_speed);
    camera                = Metavision::Camera::from_file(opt_config->event_file_path, cam_config);
}
  • Then, the description of the Active Marker to be used is loaded from the given JSON file. Here, both the IDs and the 3D positions of the LEDs are loaded:

const auto active_marker = detail::load_active_marker(opt_config->am_json_path);
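
Although the exact JSON schema is defined by the SDK, conceptually the marker description associates each LED ID with a 3D position expressed in the marker's own frame. The sketch below is purely illustrative and uses hypothetical types and made-up coordinates, not the SDK's actual data structures:

// Illustrative sketch only: hypothetical types and made-up values, not the SDK's
// actual data structures or JSON schema.
#include <array>
#include <cstdint>

struct LedDescription {
    std::uint32_t id; // ID of the LED (e.g. encoded by its blinking pattern)
    float x, y, z;    // 3D position in the marker frame (e.g. in millimeters)
};

// Example: four LEDs at the corners of a hypothetical 80 mm x 50 mm rectangular marker
constexpr std::array<LedDescription, 4> kMarkerLeds = {{
    {1, 0.f, 0.f, 0.f}, {2, 80.f, 0.f, 0.f}, {3, 80.f, 50.f, 0.f}, {4, 0.f, 50.f, 0.f}}};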
  • Then, the intrinsics calibration of the event-based camera is loaded:

const auto camera_geometry = Metavision::load_camera_geometry<float>(opt_config->calib_json_path);
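
The intrinsics describe how 3D points expressed in the camera frame project onto the image plane (focal lengths, principal point and, typically, distortion coefficients). The snippet below only illustrates a plain pinhole projection with hypothetical values; it does not use the actual Metavision::CameraGeometry API:

// Illustrative only: a simple pinhole projection showing what the intrinsics are used
// for, independent of the Metavision SDK. All values are hypothetical.
#include <cstdio>

int main() {
    const float fx = 1000.f, fy = 1000.f; // focal lengths in pixels
    const float cx = 320.f, cy = 240.f;   // principal point in pixels

    // Project a 3D point expressed in the camera frame onto the image plane
    const float X = 0.05f, Y = 0.02f, Z = 0.5f; // meters
    const float u = fx * X / Z + cx;
    const float v = fy * Y / Z + cy;
    std::printf("projected pixel: (%.1f, %.1f)\n", u, v);
    return 0;
}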
  • The 3D viewer and the algorithms are instantiated and configured:

Metavision::Viewer3d viewer(opt_config->viewer_params);
Metavision::ModulatedLightDetectorAlgorithm modulated_light_detector(opt_config->detector_params);
Metavision::ActiveMarkerPoseEstimatorAlgorithm pose_estimator(opt_config->pose_estimator_params, *camera_geometry,
                                                              active_marker);
  • A time callback is added to the camera device. It regularly notifies the pose estimator algorithm (and, under the hood, the tracking algorithm) of the time elapsed since the last received events, which allows the tracking algorithm to execute its internal processes even when no events are received:

auto decoder = camera.get_device().get_facility<Metavision::I_EventsStreamDecoder>();
decoder->add_time_callback([&](Metavision::timestamp t) { pose_estimator.notify_elapsed_time(t); });
  • A processing callback is set on the camera to execute the algorithms:

std::vector<Metavision::EventSourceId> source_id_events;
camera.cd().add_callback([&](const auto begin, const auto end) {
    source_id_events.clear();
    modulated_light_detector.process_events(begin, end, std::back_inserter(source_id_events));

    pose_estimator.process_events(source_id_events.cbegin(), source_id_events.cend());
});
  • A callback is set on the pose estimator to update the 3D viewer when a new pose is available:

pose_estimator.set_pose_update_callback(
    [&](Metavision::timestamp t, const Metavision::Viewer3d::PoseUpdate &T_w_m) {
        viewer.apply_pose_update(T_w_m);
    });
  • Finally, the camera and the 3D viewer are started:

camera.start();
viewer.run();
camera.stop();

Algorithms Overview

Active Marker Pose Estimator

The Active Marker 3D Tracking sample uses the same algorithms as the ones used in the Active Marker 2D Tracking Sample. The difference here is that an additional algorithm brick, the Metavision::ActiveMarkerPoseEstimatorAlgorithm, which is built on top of the Metavision::ActiveMarkerTrackerAlgorithm, computes the 6 DoF pose of the Active Marker with respect to the event-based camera.

To do so, the Metavision::ActiveMarkerPoseEstimatorAlgorithm consumes Metavision::EventSourceId events, which are used internally to feed the Metavision::ActiveMarkerTrackerAlgorithm. The Metavision::EventActiveTrack events produced by the latter are then associated with the 3D positions of the LEDs of the Active Marker (set during the initialization of the algorithm) to compute a 6 DoF pose of the marker with respect to the event-based camera by solving the well-known PnP (Perspective-n-Point) problem.
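
To make the PnP step concrete, here is a minimal, self-contained sketch that solves it with OpenCV's cv::solvePnP using made-up LED coordinates and intrinsics; it is not the SDK's internal implementation, only an illustration of the underlying geometric problem:

// Not the SDK's internal code: a minimal illustration of the PnP step with OpenCV,
// using made-up LED coordinates and intrinsics.
#include <opencv2/calib3d.hpp>
#include <vector>

int main() {
    // Known 3D LED positions in the marker frame (hypothetical rectangular layout, in mm)
    const std::vector<cv::Point3f> leds_3d = {
        {0.f, 0.f, 0.f}, {80.f, 0.f, 0.f}, {80.f, 50.f, 0.f}, {0.f, 50.f, 0.f}};
    // 2D image positions of the corresponding tracks (made-up values, in pixels)
    const std::vector<cv::Point2f> tracks_2d = {
        {320.f, 240.f}, {420.f, 238.f}, {422.f, 301.f}, {318.f, 303.f}};
    // Intrinsics as loaded from the calibration file (hypothetical pinhole parameters)
    const cv::Matx33d K(1000., 0., 320., 0., 1000., 240., 0., 0., 1.);
    const cv::Mat dist; // distortion ignored in this sketch

    cv::Vec3d rvec, tvec; // pose of the marker frame expressed in the camera frame
    cv::solvePnP(leds_3d, tracks_2d, K, dist, rvec, tvec);
    return 0;
}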

The pose can be computed in several ways:

  • every N events

  • every N microseconds in the event-based camera’s clock

  • every N microseconds in the system’s clock

The first option allows adapting the pose computation rate to the speed of the camera, while the last two allow updating the pose at a fixed frequency, which can be useful when real-time constraints apply. The pose estimation mode is set once when the algorithm is instantiated; then, independently of the chosen mode, a callback is called every time a new pose update is available.
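
The following sketch only illustrates how the three update policies differ; the enum, field and function names are hypothetical and do not correspond to the actual SDK parameters:

// Purely illustrative: hypothetical names and logic, not the SDK's actual API.
#include <cstdint>

enum class PoseUpdatePolicy { EveryNEvents, EveryNUsCameraClock, EveryNUsSystemClock };

struct UpdateDecider {
    PoseUpdatePolicy policy = PoseUpdatePolicy::EveryNUsCameraClock;
    std::uint64_t n_events  = 500;   // threshold used with EveryNEvents
    std::int64_t period_us  = 10000; // period used with the two time-based policies

    std::uint64_t event_count   = 0;
    std::int64_t last_update_ts = 0; // camera or system time of the last pose update

    // For EveryNEvents, call this once per event; for the time-based policies, 'ts'
    // is a camera or system timestamp in microseconds.
    bool should_update(std::int64_t ts) {
        if (policy == PoseUpdatePolicy::EveryNEvents)
            return ++event_count % n_events == 0;
        if (ts - last_update_ts >= period_us) {
            last_update_ts = ts;
            return true;
        }
        return false;
    }
};

int main() {
    UpdateDecider decider;
    // With the default policy above, a new pose would be computed every 10 ms of
    // camera time, whenever should_update() returns true.
    const bool compute_now = decider.should_update(/*camera timestamp in us*/ 20000);
    (void)compute_now;
    return 0;
}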

Configuring the camera

Please refer to the Active Marker 2D Tracking Sample for a guide on how to configure the camera.