Dataset Creation

There are no universal rules for creating a machine learning dataset: how many videos to record, which scenes, which characteristics, and many other variables depend on your application and on the desired target. The general rules of thumb used for creating a training dataset for frame-based computer vision are also valid for event-based computer vision, and are outside the scope of this doc. You can find many tips online, for example on the Towards Data Science website.

Recording tips

In this section, we will discuss some characteristics specific to event-based vision that will help you obtain the best results.

  • Data quality: ensure that your camera is in focus and that the best set of biases is used. If the input data is of low quality, the trained network will be of low quality too.

  • External interference: phenomena such as flickering lights, vibrations, and adverse weather conditions might introduce artifacts in the event-based data. These artifacts might not be visible with frame-based cameras, but might create issues in your event-based data. Ensure that you record your training data under conditions as similar as possible to those you will face at inference time. Try to remove flickering lights by tuning the camera biases, and remove any external interference. However, if you expect to face flickering lights, vibrations or any other such factor at inference time, make sure to record these events and add these videos to your training dataset.

  • Variability: scene characteristics that increase variability for frame-based cameras might have no effect for event-based cameras. For example, recording videos with cars of different colors might have a positive effect for RGB cameras, but might not affect an event-based camera, as the events generated might not change. Conversely, changing the scene dynamics, such as the speed of the cars, might not have an effect on an RGB camera, but will increase variability in event-based data.
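The variability point above can be illustrated with a minimal simulation. Event cameras respond to relative (log-intensity) changes per pixel, so within a fixed recording duration, a faster-moving edge produces more events than a slower one, while appearance changes below the contrast threshold produce none. This is a toy sketch, not the behavior of any specific sensor; the contrast threshold value and the 1-D scene are illustrative assumptions:

```python
import numpy as np

def events_between(prev, curr, threshold=0.2):
    """Count simulated events: pixels whose log-intensity change
    exceeds the contrast threshold between two frames."""
    eps = 1e-6  # avoid log(0)
    diff = np.log(curr + eps) - np.log(prev + eps)
    return int(np.sum(np.abs(diff) >= threshold))

def moving_edge(width, pos):
    """1-D scene: bright region to the left of `pos`, dark elsewhere."""
    frame = np.full(width, 0.1)
    frame[:pos] = 0.9
    return frame

def count_events(speed_px_per_frame, n_frames=10, width=200):
    """Total simulated events over a fixed recording duration."""
    total, pos = 0, 0
    for _ in range(n_frames):
        prev = moving_edge(width, pos)
        pos += speed_px_per_frame
        total += events_between(prev, moving_edge(width, pos))
    return total

slow = count_events(1)  # edge moves 1 px per frame
fast = count_events(4)  # same duration, 4x the speed -> more pixels change
print(slow, fast)
```

Because the comparison is done in log-intensity, uniformly scaling the scene brightness (or, analogously, swapping object colors of similar contrast) leaves the event count unchanged, whereas increasing the motion speed multiplies it.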


To simplify data labelling, we offer a Video to Event Simulator tool that converts frames to events.
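To give an idea of what such a conversion involves, here is a minimal sketch of the underlying principle, not the actual tool: for each pixel, emit an event whenever the log intensity changes by more than a contrast threshold relative to the last value that triggered an event. The function name, threshold value, and event tuple layout are illustrative assumptions:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a list of grayscale frames into simulated events.
    Each event is a tuple (x, y, t, polarity), with polarity +1 for a
    brightness increase and -1 for a decrease."""
    eps = 1e-6  # avoid log(0)
    events = []
    # Per-pixel memory of the log intensity at the last emitted event
    prev_log = np.log(frames[0].astype(np.float64) + eps)
    for frame, t in zip(frames[1:], timestamps[1:]):
        curr_log = np.log(frame.astype(np.float64) + eps)
        diff = curr_log - prev_log
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            events.append((int(x), int(y), t, 1 if diff[y, x] > 0 else -1))
        # Update the reference only where events fired, mimicking the
        # per-pixel memory of an event camera
        prev_log[ys, xs] = curr_log[ys, xs]
    return events
```

For example, two 2x2 frames where a single pixel brightens yield a single positive-polarity event at that pixel. Real simulators additionally interpolate between frames in time, since a low frame rate undersamples fast motion.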


Tutorials in this section were created using Jupyter Notebooks. You can execute them on your computer by downloading the source code at the top or bottom of the page. More information can be found on this page.