Human Augmented 3D Computer Vision for Robust Simulation of Rare Events
Project Abstract/Statement of Work:
Research in robotics and autonomous vehicles (AVs) suffers from a lack of realistic training data and environments in which to test new approaches. Rare and unusual events are sparse in real settings (they may occur only once every few years in a home, or once every few million or billion miles driven), so standard collection techniques, such as instrumented vehicles on real roads, encounter exceptionally few of them. These events occur several orders of magnitude less frequently than would be needed to collect sufficiently large training and testing sets within a decade using current methods, presenting a fundamental bottleneck in the research and deployment of such systems.
Simulation is a mechanism for overcoming this bottleneck. However, generating realistic simulations, especially of rare and unusual events, is a challenge. This project envisions a future in which publicly available videos from individual users (e.g., general in-home/office footage from television) and municipal sources of visual traffic data (e.g., traffic cameras) can be used to generate simulated environments containing rare events.
We plan to use a crowdsourced, human-in-the-loop approach to guide computer vision algorithms in extracting measurement information (including vehicle speed, object orientation, etc.) from large video corpora, allowing us to create simulations of scene dynamics for training and testing. The proposed work creates a new, improved pipeline for scene reconstruction and deploys it on real data to generate useful training and simulation datasets.
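As a minimal illustration of the kind of measurement extraction the pipeline targets, the sketch below estimates vehicle speed from two human-provided position annotations in a traffic video. The annotation schema, function names, and the assumption that pixel coordinates have already been mapped to ground-plane meters (e.g., via a known camera homography) are all hypothetical, not part of the proposed system.

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    """A crowdsourced vehicle position at one video frame (hypothetical schema).

    Positions are assumed to already be in ground-plane meters, e.g. after
    applying a known camera-to-road homography.
    """
    frame_index: int
    x_m: float
    y_m: float


def estimate_speed_mps(a: Annotation, b: Annotation, fps: float) -> float:
    """Estimate speed (m/s) from two annotations of the same vehicle.

    Speed is the ground-plane distance between the annotated positions
    divided by the elapsed time implied by the frame gap and frame rate.
    """
    dt = (b.frame_index - a.frame_index) / fps
    if dt <= 0:
        raise ValueError("annotations must be in temporal order")
    dist = ((b.x_m - a.x_m) ** 2 + (b.y_m - a.y_m) ** 2) ** 0.5
    return dist / dt


# Example: a vehicle annotated 15 m apart, 30 frames apart, at 30 fps -> 15 m/s
speed = estimate_speed_mps(Annotation(0, 0.0, 0.0), Annotation(30, 15.0, 0.0), fps=30.0)
```

In the envisioned pipeline, human annotations like these would seed and correct automated tracking, and the derived quantities (speed, orientation, trajectories) would parameterize the simulated scenes.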