Tutorial 3: Open RAW files or a live camera
In the previous tutorials, we worked with DAT files. In this tutorial we will learn how to open RAW files and use a live camera.
While DAT files can be opened directly using the Metavision Designer Core functionalities, RAW files and live cameras are supported in Metavision Designer through Metavision HAL, specifically through its device interfaces.
Metavision HAL is a Hardware Abstraction Layer that provides the application with a generic entry point: a Device handle. This handle exposes the device's various capabilities, or features, which are abstracted through facilities. Examples of facilities are:
i_geometry: provides static information on the actual geometry (or size) of the underlying device. Using this facility we can retrieve the width and height of the sensor.
i_events_stream: provides support to start event streaming and retrieve event data.
etc.
HalDeviceInterface can then be used to create an events producer. In our case we will generate CD events using a CdProducer.
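To give a feel for the facility pattern before building the full pipeline, here is a minimal sketch that opens a RAW file and queries its geometry facility. It only uses calls that appear in the pipeline below; 'PATH_TO_RAW' is a placeholder path.
import metavision_hal as mv_hal

# Open a RAW file to obtain a generic Device handle
device = mv_hal.DeviceDiscovery.open_raw_file('PATH_TO_RAW')
if device is None:
    raise RuntimeError('Failed to open RAW file')

# Request the geometry facility and query the sensor size
i_geom = device.get_i_geometry()
print('Sensor size: %d x %d' % (i_geom.get_width(), i_geom.get_height()))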
In this example we will implement the following pipeline:

Note that the (+) operator is the exclusive OR. In this pipeline, the upper branch, using HalDeviceInterface, is used to open RAW files or a live camera, whereas the lower branch, using FileProducer, is used to open DAT files.
Initialization
The choice between a live camera, a DAT file, or a RAW file depends on the input_path variable. You can set it to point to a RAW file or a DAT file, or leave it empty to use a live camera.
import metavision_designer_engine as mvd_engine
import metavision_designer_core as mvd_core
import metavision_hal as mv_hal
import time

# input_path = 'PATH_TO_DAT'  # use a DAT file
input_path = 'PATH_TO_RAW'    # use a RAW file
# input_path = ''  # use a live camera

# Check input
is_live_camera = is_raw = False
if input_path == '':
    is_live_camera = True
else:
    is_raw = input_path.endswith('.raw')

# Instantiate custom CD producer, depending on RAW/DAT format or live camera
cd_producer = None

# Create controller
controller = mvd_engine.Controller()
Building the pipeline
Let’s now prepare the top branch, where we want to read from a RAW file or a live camera.
if is_raw or is_live_camera:
    # Using a RAW file or a live camera requires the same steps
    if is_raw:
        # Open RAW file using the Metavision Hardware Abstraction Layer
        mv_reader = mv_hal.DeviceDiscovery.open_raw_file(input_path)
        if mv_reader is None:
            raise RuntimeError('Failed to open RAW file: ' + input_path)
    elif is_live_camera:
        # Open camera
        mv_reader = mv_hal.DeviceDiscovery.open('')
        if not mv_reader:
            raise RuntimeError("Could not open camera. Make sure you have an event-based device plugged in")

    # From here on, it is transparent to the user whether the source is a RAW file or a live camera

    # We can get the geometry of the source, if needed
    i_geom = mv_reader.get_i_geometry()
    width = i_geom.get_width()
    height = i_geom.get_height()

    # Add the interface to the controller, to poll events from the source
    polling_interval = 1e-3  # Interval to poll data from the camera, in seconds
    mv_interface = mvd_core.HalDeviceInterface(mv_reader, polling_interval, 0)
    controller.add_device_interface(mv_interface)

    # Create the CD producer and add it to the pipeline
    cd_producer = mvd_core.CdProducer(mv_interface)
    controller.add_component(cd_producer, "CD Producer")
Let’s now prepare the bottom branch, where we read from a DAT file.
# Else, assume a .dat file
if not (is_raw or is_live_camera):
    cd_producer = mvd_core.FileProducer(input_path)

    # We can get the size of the event stream, if needed
    width = cd_producer.get_width()
    height = cd_producer.get_height()

    # Add producer to the pipeline
    controller.add_component(cd_producer, "CD Producer")
# Print the input path and geometry on the console
if is_live_camera:
    print('Will read from a live camera with a %d x %d geometry' % (width, height))
else:
    print('Will read from %s with a %d x %d geometry' % (input_path, width, height))
The rest of the pipeline is the same regardless of the input.
# Create Frame Generator @25 FPS
frame_gen = mvd_core.FrameGenerator(cd_producer)
controller.add_component(frame_gen, "FrameGenerator")

# Create image display window
img_display = mvd_core.ImageDisplayCV(frame_gen)
img_display.set_name("Metavision Designer Events Player")
controller.add_component(img_display, "ImgDisplay")
Now we need to start the event stream and, if we are using a live camera, the camera itself. These steps only apply when reading through Metavision HAL; a FileProducer reading a DAT file does not need them.
if is_raw or is_live_camera:
    # We need to access the I_EventsStream facility in order to start reading and streaming events
    i_events_stream = mv_reader.get_i_events_stream()
    i_events_stream.start()

    # Start the camera if needed
    if is_live_camera:
        camera_device = mv_reader.get_i_device_control()
        camera_device.start()
# Set up rendering at 25 FPS
controller.add_renderer(img_display, mvd_engine.Controller.RenderingMode.SimulationClock, 25.)
controller.enable_rendering(True)
Running the pipeline in an interactive way
In the previous tutorials, once the application was started, there was no way to stop it in a clean way. This is a limitation, especially when dealing with a live camera.
In this sequence, we are using two features of the Controller that ease the execution of the application and collect statistics:
get_last_key_pressed(): returns an integer value encoding the keyboard key pressed during the last run step. Note that the key should be pressed with the output window in focus. In this example, we use the key q (quit) to exit the while loop.
print_stats(): delivers information on the number of events generated and consumed by the various components.
# Run pipeline & print execution statistics
done = False
cnt = 0
start_time = time.time()

controller.set_slice_duration(5000)
controller.set_batch_duration(40000)
do_sync = not is_live_camera

while not (done or controller.is_done()):
    # Check if a key was pressed in the window
    last_key = controller.get_last_key_pressed()

    # If the 'q' key was pressed, quit the application
    if last_key == ord('q'):
        done = True

    ini_time = controller.get_time()         # current timestamp of the controller (which events we are processing)
    real_run_time = controller.run(do_sync)  # execute this loop step and get the execution time
    theo_run_time = (controller.get_time() - ini_time) / 1e6  # length in seconds of the events we processed
    cur_time = time.time()                   # current clock time

    print("%.1f/%.1f : run_time = %f for %f s --> %f" % (controller.get_time() / 1e6,
                                                          cur_time - start_time,
                                                          real_run_time,
                                                          theo_run_time,
                                                          theo_run_time / real_run_time))
    cnt = cnt + 1
    if cnt % 500 == 0:
        controller.print_stats(False)

controller.print_stats(False)
Note how we compare the runtime returned by the controller (real_run_time) with the theoretical time (theo_run_time) computed using the get_time() function. This allows us to check whether the execution time of the last buffer of events was longer or shorter than the duration of the buffer. In other words, if the execution of one second of data takes longer than one second, our application is not running in real time. Note also that in this sample we process buffers of 0.04 s of data, in slices of 0.005 s.
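As a quick illustration of this check, the snippet below computes the ratio for two hypothetical timings (the numbers are made up for the example):
# Hypothetical timings, for illustration only
theo_run_time = 0.040  # seconds of event data in the last buffer
real_run_time = 0.050  # wall-clock seconds it took to process that buffer
ratio = theo_run_time / real_run_time  # 0.8: the buffer ran slower than real time
print('real time' if ratio >= 1.0 else 'not real time')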
Output
The expected output is the following (the console output may differ on your machine):

0.0/0.2 : run_time = 0.165172 for 0.040000 s --> 0.242172
0.1/0.2 : run_time = 0.037468 for 0.040000 s --> 1.067571
0.1/0.2 : run_time = 0.034736 for 0.040000 s --> 1.151544
0.2/0.3 : run_time = 0.026452 for 0.040000 s --> 1.512153
0.2/0.3 : run_time = 0.039853 for 0.040000 s --> 1.003699
0.2/0.3 : run_time = 0.039868 for 0.040000 s --> 1.003318
0.3/0.4 : run_time = 0.041633 for 0.040000 s --> 0.960781
0.3/0.4 : run_time = 0.038924 for 0.040000 s --> 1.027638
0.4/0.5 : run_time = 0.039699 for 0.040000 s --> 1.007574
0.4/0.5 : run_time = 0.039203 for 0.040000 s --> 1.020341
0.4/0.5 : run_time = 0.039858 for 0.040000 s --> 1.003570
0.5/0.6 : run_time = 0.039615 for 0.040000 s --> 1.009729
0.5/0.6 : run_time = 0.039863 for 0.040000 s --> 1.003438
0.6/0.7 : run_time = 0.044491 for 0.040000 s --> 0.899060
0.6/0.7 : run_time = 0.036557 for 0.040000 s --> 1.094185
0.6/0.7 : run_time = 0.039958 for 0.040000 s --> 1.001044
0.7/0.8 : run_time = 0.038112 for 0.040000 s --> 1.049533
0.7/0.8 : run_time = 0.040400 for 0.040000 s --> 0.990094
0.8/0.9 : run_time = 0.040026 for 0.040000 s --> 0.999344
0.8/0.9 : run_time = 0.039253 for 0.040000 s --> 1.019027
0.8/1.0 : run_time = 0.046021 for 0.040000 s --> 0.869176
0.9/1.0 : run_time = 0.034615 for 0.040000 s --> 1.155577
0.9/1.0 : run_time = 0.038903 for 0.040000 s --> 1.028193
1.0/1.1 : run_time = 0.040784 for 0.040000 s --> 0.980773
1.0/1.1 : run_time = 0.039557 for 0.040000 s --> 1.011202
Let’s analyze a random line from the console output:
0.9/1.0 : run_time = 0.034615 for 0.040000 s --> 1.155577
This means we are at the 0.9 s mark of our video and that 1.0 s of processing time has elapsed so far. We also know that executing the current buffer of 0.04 s of events took 0.034615 seconds, which means this buffer was processed at 1.155577 times real time (slightly faster than real time). These numbers will probably be different on your machine, depending on the input source and on your processing power.
Finally, we delete the graphical components to stop the tutorial properly.
del img_display
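If you adapt this tutorial into a standalone script, you may also want to stop the event stream and the camera before exiting. A minimal sketch, assuming the I_EventsStream and I_DeviceControl facilities expose a stop() call symmetric to the start() calls used above (check your HAL version):
# Assumed teardown, mirroring the start() calls above
if is_raw or is_live_camera:
    i_events_stream.stop()    # assumption: I_EventsStream offers stop()
    if is_live_camera:
        camera_device.stop()  # assumption: I_DeviceControl offers stop()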