Location
J223
Traditionally, planning, control, and decision-making algorithms have been designed based on a priori knowledge about the system and its environment, including models of the system dynamics and maps of the environment. This approach has enabled successful system operation in predictable situations, where the models are a good approximation of the real system behavior. However, when detailed models are unavailable, control systems are typically designed to be conservative against the unknown, which can cause drastic performance losses. To achieve safe and efficient system behavior in the presence of uncertainties and unknown disturbances, we aim to enable systems to learn during operation and adapt their behavior accordingly. Yet classical learning methods, while often successful, do not provide formal guarantees for system safety and performance.

In this talk, I will present approaches for safety-guaranteed learning, which combine learning methods with formal results from control theory to produce provably safe approaches to real-time system control. Our work is motivated by applications in robotics, such as mobile manipulators and self-flying and self-driving vehicles. In contrast to their early industrial counterparts, these robots are envisioned to operate in increasingly complex and uncertain environments, alongside humans, and over long periods of time. We use Gaussian Processes (GPs) as a tool to model uncertainties and gradually learn unknown effects from data. We investigate how GPs can be combined with robust, nonlinear, and predictive control approaches to achieve guaranteed safe, high-performance system behavior. Examples include automatic, safe controller tuning for aerial vehicles and experience-based speed improvement for self-driving vehicles.

Finally, I will describe a number of promising future research directions within the framework of safety-guaranteed learning, including experience recommendation for long-term operation in changing conditions, safe reinforcement learning, resource-constrained learning and control, and transfer learning between robots.
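To make the GP-based idea concrete, the sketch below shows the generic pattern the abstract alludes to: a GP learns an unknown disturbance from noisy samples, and the posterior mean plus a confidence bound is used to choose a cautious control correction. This is a minimal illustration only, not the speaker's actual method; the toy 1-D disturbance, the function names (rbf_kernel, gp_posterior), the kernel hyperparameters, and the 2-sigma worst-case heuristic are all assumptions made for clarity.

```python
# Minimal sketch (illustrative, not the speaker's implementation): a GP with
# an RBF kernel learns an unknown disturbance d(x) from noisy samples, and
# its posterior mean +/- a confidence bound drives a cautious correction.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel k(a, b) = s^2 * exp(-(a - b)^2 / (2 l^2))."""
    sq = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise_var=0.01):
    """Exact GP regression: posterior mean and standard deviation at X_test."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(X_test, X_test).diagonal() - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Toy disturbance the GP must learn (hidden from the "controller").
true_disturbance = lambda x: 0.4 * np.sin(3.0 * x)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=15)
y_train = true_disturbance(X_train) + 0.1 * rng.standard_normal(15)

x_query = np.array([0.7])  # current state
mu, sigma = gp_posterior(X_train, y_train, x_query)

# Cautious compensation: counteract the worst-case learned disturbance
# within a 2-sigma confidence interval (a common robust-control heuristic).
beta = 2.0
u_correction = -(mu[0] + np.sign(mu[0]) * beta * sigma[0])
print(f"predicted disturbance: {mu[0]:.3f} +/- {sigma[0]:.3f}")
print(f"conservative control correction: {u_correction:.3f}")
```

In the safety-guaranteed learning setting described above, posterior confidence bounds like these are typically what a robust or predictive controller treats as a bounded uncertainty set; the precise formulation used in the talk may differ.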