Building and Reasoning about Fully 3D Representations

Project Abstract/Statement of Work:

Humans have a deep understanding of the physical environment around them that they use to move through and interact with the world. Their knowledge is fully three-dimensional: upon entering an unfamiliar building, they know that the floor continues behind furniture even though it is hidden from view, and they can make sensible inferences about the layout of nearby parts of the building from limited observations. The main goal of this project is to enable computers to learn to extract such a 3D representation from ordinary images and to connect this ability with tasks and settings that are relevant to autonomous systems, such as service robots indoors and autonomous vehicles outdoors.
The goal of this project is to give computers the ability to infer a full 3D understanding of the world from a conventional image and to demonstrate how this understanding can be applied to a variety of tasks. This proposal aims to connect these efforts more closely with robotics by working with natural images and by targeting tasks and scenarios relevant to robotic systems. We intend to explore projects in three main directions toward this goal: better handling natural data from ordinary cameras (rather than synthetic data), integrating robotic sensors with learned systems, and reasoning about robotic tasks on top of the predicted 3D representation.

PI and Co-PIs: