Location: https://princeton.zoom.us/my/robotics
Title: Reimagining Robot Autonomy with Neural Environment Representations
New developments in computer vision and deep learning have led to the rise of neural environment representations: 3D maps stored as deep networks that spatially register occupancy, color, texture, and other physical properties. These environment models can generate photo-realistic synthetic images from unseen viewpoints and can store 3D information in exquisite detail. In this talk, I investigate the question: how can robots use neural environment representations for perception, motion planning, manipulation, and simulation? I will show recent work from my lab in which we build a robot navigation pipeline using a Neural Radiance Field (NeRF) map of an environment. We develop a trajectory optimization algorithm that interfaces with the NeRF model to find dynamically feasible, collision-free trajectories for a robot moving through a NeRF world. We also develop an optimization-based state estimator that uses the NeRF model to recover a robot's full dynamic state from onboard images alone. I will also discuss preliminary results on using NeRF models for grasp planning and for tracking the poses of multiple 3D objects in a scene. Finally, I will discuss a new differentiable robot physics simulator, called Dojo, that can use NeRFs as a geometry description for objects, leading to physically interpretable motion prediction from NeRF models. I will conclude with future opportunities and challenges in integrating neural environment representations into the robot autonomy stack.
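To make the planning idea concrete, here is a minimal sketch of trajectory optimization against a queryable density field. The `density` and `density_grad` functions below are hypothetical stand-ins (Gaussian blobs) for a trained NeRF's opacity query, and the optimizer is plain gradient descent on a collision-plus-smoothness cost; the pipeline described in the talk additionally enforces dynamic feasibility, which this sketch omits.

```python
import numpy as np

# Stand-in for querying a trained NeRF's density sigma(x): a sum of
# Gaussian blobs, each acting as a soft obstacle. Hypothetical, for
# illustration only.
def density(x, centers, scale=0.5):
    d = x - centers                                  # (K, 3) offsets
    return np.exp(-np.sum(d**2, axis=1) / (2 * scale**2)).sum()

def density_grad(x, centers, scale=0.5):
    d = x - centers
    w = np.exp(-np.sum(d**2, axis=1) / (2 * scale**2))
    return -(w[:, None] * d).sum(axis=0) / scale**2

def optimize_trajectory(start, goal, centers, n=30, iters=800,
                        lr=0.02, w_col=5.0):
    """Gradient descent on a collision + smoothness cost."""
    traj = np.linspace(start, goal, n)               # straight-line init
    for _ in range(iters):
        grad = np.zeros_like(traj)
        for t in range(1, n - 1):
            # Collision term: push waypoints out of high-density regions.
            grad[t] += w_col * density_grad(traj[t], centers)
            # Smoothness term: gradient of sum_t ||x_{t+1} - x_t||^2.
            grad[t] += 2 * (2 * traj[t] - traj[t - 1] - traj[t + 1])
        traj[1:-1] -= lr * grad[1:-1]                # endpoints stay fixed
    return traj

obstacles = np.array([[0.5, 0.5, 0.5], [1.5, 1.0, 0.5]])
path = optimize_trajectory(np.zeros(3), np.array([2.0, 2.0, 1.0]), obstacles)
print(f"peak density along path: {max(density(x, obstacles) for x in path):.3f}")
```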
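In the same spirit, the state estimator can be caricatured as pose optimization against a rendering residual: find the pose whose rendered view best matches what the camera observed. The `render` function below is a hypothetical stand-in (it maps a planar pose to a vector of point measurements rather than volume-rendering a NeRF), but the structure of the loss, minimizing a photometric-style residual over pose, reflects the approach, which the talk's estimator extends to full dynamic state.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for NeRF rendering at a camera pose (x, y, yaw): express fixed
# scene points in the camera frame and flatten them into a "measurement"
# vector. Hypothetical; a real estimator would volume-render the NeRF.
def render(pose, pts):
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, s], [-s, c]])                # world -> camera
    return ((pts - np.array([x, y])) @ rot.T).ravel()

def rendering_residual(pose, observed, pts):
    # Photometric-style loss: squared error between the view rendered at
    # the pose hypothesis and the actually observed measurements.
    return np.sum((render(pose, pts) - observed) ** 2)

rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(40, 2))           # fixed scene points
true_pose = np.array([1.0, 2.0, 0.3])
observed = render(true_pose, pts)                    # simulated observation

result = minimize(rendering_residual, x0=np.zeros(3),
                  args=(observed, pts), method="Nelder-Mead")
print("estimated pose:", np.round(result.x, 3))      # ~ [1.0, 2.0, 0.3]
```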