Developing a new model to align event camera data

In their new ICCV paper, SCIoI members Pia Bideau and Guillermo Gallego, together with Cheng Gu, Erik Learned-Miller, and Daniel Sheldon, describe a new model that aligns the data produced by an event camera and creates a stable panorama that is easier to interpret.

Compared to traditional frame-based cameras, event cameras work at high speed with a very high dynamic range and low power consumption, and are widely used in computer vision and robotics. Instead of capturing full images, they sense brightness changes (events) at every pixel as they occur, with microsecond resolution. Motion is therefore needed to record visual information in the form of events. This motion, however, makes it difficult to reason about the cause of the triggered events, because events triggered by the same cause are scattered across the camera sensor.
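To make this concrete, the sketch below shows one common way an event stream is represented: each event is a pixel location, a microsecond timestamp, and a polarity indicating whether brightness increased or decreased. This is a minimal illustration, not the authors' code, and the field names and dtype are assumptions.

```python
import numpy as np

# Illustrative layout of a single event: pixel coordinates, a microsecond
# timestamp, and a polarity (+1 brightness increase, -1 decrease).
event_dtype = np.dtype([
    ("x", np.uint16),        # pixel column
    ("y", np.uint16),        # pixel row
    ("t", np.int64),         # timestamp in microseconds
    ("polarity", np.int8),   # sign of the brightness change
])

# A tiny synthetic stream: an edge moving across the sensor triggers events
# at different pixels and times, so the same physical cause appears
# scattered in space and time.
events = np.array(
    [(10, 20, 1_000, 1), (11, 20, 1_250, 1), (12, 20, 1_500, 1)],
    dtype=event_dtype,
)
print(events["x"], events["t"])
```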

To address this challenge, Pia Bideau, Guillermo Gallego, and their co-authors have developed a model, the spatio-temporal Poisson point process, that aligns the event data to create a representation that makes the visual information easier to interpret. The model can serve as a pre-processing step for computer vision tasks aimed at high-level scene understanding. Read the abstract below for a more in-depth description of the process.
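To give a rough idea of what "aligning" event data means, the sketch below warps events along a candidate image motion and scores how well they pile up on the sensor. The scoring here uses the variance of the warped-event image, a common contrast-maximization-style stand-in; the paper's actual objective is the spatio-temporal Poisson point process likelihood, and the simple translational warp, sensor size, and velocity search are illustrative assumptions only.

```python
import numpy as np

def warp_events(x, y, t, velocity):
    """Shift each event back along a candidate 2D image velocity (px/s).
    Simplified translational warp for illustration; the paper handles
    rotational camera motion for panorama creation."""
    dt = (t - t.min()) * 1e-6          # microseconds -> seconds
    return x - velocity[0] * dt, y - velocity[1] * dt

def alignment_score(xw, yw, sensor_size=(180, 240)):
    """Accumulate warped events into an image and measure its variance.
    Well-aligned events concentrate in few pixels and score higher."""
    img, _, _ = np.histogram2d(
        yw, xw, bins=sensor_size,
        range=[[0, sensor_size[0]], [0, sensor_size[1]]],
    )
    return img.var()

# Hypothetical usage: search over candidate velocities and keep the one
# whose warped events align best.
x = np.array([10.0, 11.0, 12.0])
y = np.array([20.0, 20.0, 20.0])
t = np.array([1_000, 1_250, 1_500])
candidates = [np.array([vx, 0.0]) for vx in np.linspace(0, 8000, 17)]
best = max(candidates, key=lambda v: alignment_score(*warp_events(x, y, t, v)))
print("best velocity estimate (px/s):", best)
```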
