Acquiring Metric and Semantic Information using Autonomous Robots

Event Date/Time

Location

Andlinger Center
Maeder Hall

Series/Event Type

MAE Departmental Seminars

Recent years have seen impressive progress in robot control and perception, including adept manipulation, aggressive quadrotor maneuvers, dense metric map reconstruction, and real-time object recognition. The grand challenge in robotics today is to capitalize on these advances in order to enable autonomy at a higher level of intelligence. It is compelling to envision teams of autonomous robots carrying out environmental monitoring, precision agriculture, construction and structure inspection, security and surveillance, and search and rescue.


In this talk, I will emphasize that many such applications can be addressed by thinking about how to coordinate robots in order to extract useful information about the environment. More precisely, I will formulate a general active estimation problem that captures the common characteristics of the aforementioned scenarios. I will show how to manage the complexity of this problem over metric information spaces as the planning horizon lengthens and the robot team grows. These results lead to computationally scalable, non-myopic algorithms with quantified performance for problems such as distributed source seeking and active simultaneous localization and mapping (SLAM).
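As a rough illustration (a generic formulation of my own, not necessarily the exact one presented in the talk), an active estimation problem of this kind can be posed as choosing control inputs u_{1:T} for the robot team so that the resulting measurements z_{1:T} are maximally informative about the environment state y, subject to the robot dynamics and sensing models:

\[
\max_{u_{1:T}} \; I\!\left(y;\, z_{1:T} \mid u_{1:T}\right)
\quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t), \qquad z_t \sim p(\,\cdot \mid x_t, y\,),
\]

where x_t denotes the robot states, I(·;·) is mutual information, and T is the planning horizon.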


I will then focus on acquiring information using both metric and semantic observations (e.g., object recognition). In this context, there are several new challenges, such as missed detections, false alarms, and unknown data association. To address them, I will model semantic observations via random sets and will discuss filtering with such models. A major contribution of our approach is proving that the resulting filtering problem is computationally equivalent to computing the permanent of a suitable matrix. This enables us to develop and experimentally validate algorithms for semantic localization, mapping, and planning on mobile robots, Google's Project Tango phone, and the KITTI visual odometry dataset.
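For context, the permanent of an n x n matrix A = [a_{ij}] is defined like the determinant but without the alternating signs,

\[
\operatorname{per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\]

where the sum runs over all permutations of {1, ..., n}. Exact computation of the permanent is #P-complete in general, but it can be evaluated exactly in O(2^n n) time via Ryser's formula and approximated efficiently for nonnegative matrices; in this setting, the matrix in question presumably encodes the likelihoods of the possible data associations between detections and objects.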

Speaker Bio

Nikolay A. Atanasov is a postdoctoral researcher in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, Philadelphia, PA. He received a B.S. in Electrical Engineering from Trinity College, Hartford, CT, in 2008, and M.S. and Ph.D. degrees in Electrical and Systems Engineering from the University of Pennsylvania in 2012 and 2015, respectively. His research focuses on robotics, control theory, and computer vision, in particular on controlling teams of robots to collect metric and semantic information in applications such as environmental monitoring, security and surveillance, localization and mapping, search and rescue, and object recognition. His contributions were recognized with the award for the best Ph.D. dissertation in Electrical and Systems Engineering at the University of Pennsylvania in 2015.