Guaranteeing safety is crucial for the effective deployment of robots. Control theory research has established techniques with theoretical safety and stability guarantees based on model predictive control, reference governor design, Hamilton-Jacobi reachability, control Lyapunov and barrier functions, and contraction theory. Similarly, formal methods techniques based on SMT solvers and hybrid system verification have been used to guarantee system safety. Existing techniques, however, predominantly assume that the robot motion dynamics and safety constraints are precisely known in advance. This assumption rarely holds in unstructured and dynamic real-world conditions. For example, an aerial vehicle aiding in disaster response must operate in an unpredictable environment subject to extreme disturbances. Similarly, a walking robot providing last-mile delivery has to traverse changing terrain while negotiating pedestrian traffic.
With recent progress in machine learning, we can learn robot dynamics or environment models from sensory data. Gaussian process regression and Koopman operator theory have shown promise in estimating robot dynamics models, and deep neural networks have enabled impressive results in 3D reconstruction from visual data. Although empirically impressive, these machine learning techniques do not provide safety guarantees.
This workshop seeks to bring together experts from multiple communities – robotics, control theory, and machine learning – and highlight cutting-edge research at their intersection. We will feature talks from all of these fields, with an emphasis on safe robot control in uncertain environments.
We invite submissions on a broad range of topics that investigate the formal safety of robots under the uncertainty introduced when robot dynamics models are learned or the environment state is estimated. We provide a non-exhaustive list of topics that might be of interest to the target audience of this workshop:
Priority will be given to papers that bridge the gap between the two areas to provide safety and stability guarantees for systems with learned motion and environment dynamics. The review committee will judge the contributions based on the following questions:
Authors of accepted papers will be required to submit a spotlight video that provides a demo of the proposed approach and answers the four key questions related to our workshop. In the demo part of the video, authors are encouraged to demonstrate the operation of their system (real or simulated) in a safety-critical scenario. The spotlight videos will be presented during the time allocated for the poster session.
Time (PDT, GMT-07) | Time (EDT, GMT-04) | Topic |
06:45-07:00 AM | 09:45-10:00 AM | Opening remarks |
07:00-07:30 AM | 10:00-10:30 AM | Invited talk: Dimitra Panagou on “Control synthesis for safety- and time-critical systems under uncertainty” |
07:30-08:00 AM | 10:30-11:00 AM | Invited talk: Kelsey Allen on “Safety through structured learning in physical problem-solving environments” |
08:00-08:30 AM | 11:00-11:30 AM | Spotlight talks: 1) L1-RL: Robustifying Reinforcement Learning Policies with L1 Adaptive Control 2) No-Regret Safe Learning for Online Nonlinear Control with Control Barrier Functions 3) Negotiation-Aware Reachability-Based Safety Verification for Autonomous Driving in Interactive Scenarios 4) Sampling-based Motion Planning for Uncertain Nonlinear Systems via Funnel Control |
08:30-09:00 AM | 11:30 AM-12:00 PM | Coffee break |
09:00-09:30 AM | 12:00-12:30 PM | Spotlight talks: 5) Learning Vehicle Models for Agile Flight 6) Probabilistic Safe Adaptive Merging Control for Autonomous Vehicles under motion uncertainty 7) Learning-based Robust Motion Planning with Guaranteed Nonlinear Stability: A Contraction Theory Approach |
09:30-10:00 AM | 12:30-01:00 PM | Invited talk: Sergey Levine on “Safety in Numbers: How Large Prior Datasets Can Put RL into the Real World” |
10:00-10:30 AM | 01:00-01:30 PM | Invited talk: Marco Pavone on “Safe Learning-based Control for Robot Autonomy” |
10:30 AM-12:00 PM | 01:30-03:00 PM | Lunch break |
12:00-12:30 PM | 03:00-03:30 PM | Invited talk: Qi (Rose) Yu on “Incorporating Symmetry for Improved Deep Dynamics Learning” |
12:30-01:00 PM | 03:30-04:00 PM | Invited talk: Melanie N. Zeilinger on “Safe learning-based control using a Model Predictive Control framework” |
01:00-01:30 PM | 04:00-04:30 PM | Coffee break and Interactive session for accepted papers 1) L1-RL: Robustifying Reinforcement Learning Policies with L1 Adaptive Control 2) No-Regret Safe Learning for Online Nonlinear Control with Control Barrier Functions 3) Negotiation-Aware Reachability-Based Safety Verification for Autonomous Driving in Interactive Scenarios 4) Sampling-based Motion Planning for Uncertain Nonlinear Systems via Funnel Control |
01:30-02:00 PM | 04:30-05:00 PM | Interactive session for accepted papers 5) Learning Vehicle Models for Agile Flight 6) Probabilistic Safe Adaptive Merging Control for Autonomous Vehicles under motion uncertainty 7) Learning-based Robust Motion Planning with Guaranteed Nonlinear Stability: A Contraction Theory Approach |
02:00-02:30 PM | 05:00-05:30 PM | Invited talk: Claire J. Tomlin on “Safe Learning in Robotics” |
02:30-03:00 PM | 05:30-06:00 PM | Invited talk: Aaron Ames on “Learning for Safety-Critical Control” |
03:00-03:30 PM | 06:00-06:30 PM | Discussion and closing remarks |
Title: Control synthesis for safety- and time-critical systems under uncertainty
Abstract: Fully autonomous operation of robots in unstructured and unknown environments has been an ongoing area of research. Despite significant progress over the years, open challenges remain due to constraints (e.g., safety and time specifications, computational power), malicious or faulty information, and environmental uncertainty. This talk will present some of our recent results and ongoing work on spatiotemporal control under constraints and uncertainty. The proposed framework, based on Control Barrier Functions and Fixed-Time Control Lyapunov Functions, aims to develop and integrate estimation, learning, and control methods toward provably correct and computationally efficient mission synthesis for constrained and uncertain multi-agent systems.
Title: Safety through structured learning in physical problem-solving environments
Abstract: The world is structured in countless ways. When cognitive and machine models respect these structures, by factorizing their modules and parameters, they often behave in ways we can better understand and more safely interact with. In this talk, I will discuss our work investigating the factorizations of objects, relations and constraints to create safer, more interpretable artificial agents. Focusing on the domain of physical problem-solving (including construction and tool use), I will show how to harness object and relational structure in the form of graph networks to improve agent generalization, and how to use and learn constraints to enable a simulated robot to solve cognitively inspired tool use problems. By taking better advantage of problem structure, and combining it with general-purpose methods for statistical learning, we can develop more robust and interpretable machine agents that better reflect human expectations.
Title: Safety in Numbers: How Large Prior Datasets Can Put RL into the Real World
Abstract: Reinforcement learning provides a general and powerful toolkit for learning-based control. However, conventionally, RL algorithms require training through trial and error, which leaves us with two unenviable choices in the real world: either rely on simulation to provide training, or else risk deploying untrained or partly trained policies, which can fail catastrophically before they improve enough to be reliable. In this talk, I will discuss how reformulating the reinforcement learning problem to focus on data rather than online experience can help to alleviate this challenge: by leveraging the framework of offline RL, we can train policies using previously collected data, and then only deploy policies that have already reached some desired level of performance. I will discuss how we can derive principled offline RL algorithms that learn lower bounds on the true return for the task, and then talk about how such tools can be leveraged to train conservative safety critics, which can further provide constraints on the online training process.
Title: Safe Learning-based Control for Robot Autonomy
Abstract: In this talk I will provide an overview of recent efforts from my group on infusing safety assurances in robot autonomy stacks equipped with learning-based components, with an emphasis on settings where an autonomous robot needs to interact with a human or operate in complex, possibly previously unseen environments. The underlying approach entails adding structure within robot learning via control-theoretical methods. The discussion will be grounded in a number of practical applications, from self-driving cars to space robots.
Title: Incorporating Symmetry for Improved Deep Dynamics Learning
Abstract: While deep learning has been used for dynamics learning, limited physical accuracy and an inability to generalize under distributional shift limit its applicability to real-world robots. In this talk, I will demonstrate how to incorporate symmetries into deep neural networks and significantly improve the physical consistency, sample efficiency, and generalization in learning dynamics. I will showcase the applications of these models to challenging problems such as turbulence forecasting and trajectory prediction for autonomous vehicles.
Title: Safe learning-based control using a Model Predictive Control framework
Abstract: Model predictive control (MPC) is an established technique for addressing constraint satisfaction, with demonstrated success in various industries. However, it requires a sufficiently descriptive system model as well as a suitable formulation of the control objective to provide the desired guarantees and solve the problem via numerical optimization. In this talk, I will present different options for how learning and MPC can be combined to overcome some of the individual difficulties of both MPC and available reinforcement learning methods. In particular, I will discuss learning for inferring a parametric model of the system dynamics or for designing the MPC objective, as well as the use of MPC as a safety filter, providing a modular approach for augmenting high-performance learning-based controllers with constraint satisfaction properties.
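To make the safety-filter idea concrete, here is a minimal one-step sketch in that spirit: a (possibly learned) nominal action is projected onto the set of actions that keep a braking-based safety certificate nonnegative. The double-integrator model, the grid-search projection, and all constants are illustrative assumptions, not the method presented in the talk.

```python
import numpy as np

# One-step safety-filter sketch on a 1-D double integrator driving
# toward a wall at position P_MAX.
DT, U_MAX, P_MAX = 0.1, 1.0, 5.0

def h(p, v):
    """Safety margin: distance to the wall minus the worst-case stopping
    distance, plus one step of reaction delay for the discretization."""
    v_fwd = max(v, 0.0)
    return P_MAX - p - v_fwd**2 / (2 * U_MAX) - DT * v_fwd

def step(p, v, u):
    """Euler step of the double integrator p_dot = v, v_dot = u."""
    return p + DT * v, v + DT * u

def safety_filter(p, v, u_nom):
    """Among a grid of admissible inputs, return the one closest to u_nom
    whose one-step prediction keeps the certificate nonnegative."""
    feasible = [u for u in np.linspace(-U_MAX, U_MAX, 201)
                if h(*step(p, v, u)) >= 0.0]
    if not feasible:            # fallback: brake as hard as possible
        return -U_MAX
    return min(feasible, key=lambda u: abs(u - u_nom))

# Nominal "policy" that blindly accelerates toward the wall at p = P_MAX.
p, v = 0.0, 0.0
for _ in range(200):
    p, v = step(p, v, safety_filter(p, v, u_nom=U_MAX))
# The filtered system approaches the wall but never crosses it.
```

The filter is modular in exactly the sense the abstract describes: the nominal input can come from any learning-based controller, and only the certificate and the prediction model need to be trusted.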
Title: Safe Learning in Robotics
Abstract: In many applications of autonomy in robotics, guarantees that constraints are satisfied throughout the learning process are paramount. We present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. Then, we present methods for combining reachability with learning-based methods, to enable performance improvement while maintaining safety and to move towards safe robot control with learned models of the dynamics and the environment. We will illustrate these “safe learning” methods on robotic platforms at Berkeley, including demonstrations of motion planning around people, and navigating in a priori unknown environments.
Title: Learning for Safety-Critical Control
Abstract: Safety is critical on a wide variety of dynamic robotic systems. Yet, when deploying controllers with safety guarantees on these systems, uncertainties in the model and environment can violate those guarantees in practice. This talk will approach safety-critical control from the perspective of control barrier functions (CBFs), describing the basic theory and existing applications of this optimization-based control methodology. To enable guarantees on CBF controllers realized in practice, we will present an approach that fuses learning with CBFs. Experimental results on robotic systems will be used to illustrate the approach. We will also highlight the application of learning to legged robots and robotic assistive devices aimed at restoring mobility.
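For readers unfamiliar with the optimization-based methodology mentioned above, the following sketch shows the standard CBF quadratic program for a scalar input, where the projection has a closed-form solution. The toy system and gains are illustrative assumptions, not content from the talk.

```python
# CBF-QP sketch: for a control-affine system x_dot = f(x) + g(x)u with a
# scalar input, the safety filter
#     min (u - u_nom)^2   s.t.   Lfh(x) + Lgh(x) u >= -alpha * h(x)
# reduces to a closed-form projection, so no QP solver is needed.

def cbf_qp(u_nom, h, Lfh, Lgh, alpha=1.0):
    """Project u_nom onto the half-space allowed by the CBF condition."""
    if Lgh == 0:
        return u_nom  # input does not enter the constraint
    bound = (-alpha * h - Lfh) / Lgh
    # Lgh > 0 gives a lower bound on u; Lgh < 0 gives an upper bound.
    return max(u_nom, bound) if Lgh > 0 else min(u_nom, bound)

# Toy example: x_dot = u (f = 0, g = 1), safe set {x <= 2}, h(x) = 2 - x,
# so Lfh = 0 and Lgh = -1. The nominal input pushes toward the boundary.
x, dt = 0.0, 0.1
for _ in range(100):
    u = cbf_qp(u_nom=1.0, h=2.0 - x, Lfh=0.0, Lgh=-1.0)
    x += dt * u
# x approaches the boundary at 2 from below without crossing it.
```

The learning-augmented variants discussed in the talk keep this same filter structure while correcting the terms of the constraint from data.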
Should you have any questions, please do not hesitate to contact the organizers Vikas Dhiman (vdhiman@ucsd.edu) or Shumon Koga (skoga@ucsd.edu). Please include “ICRA 2021 Workshop” in the subject of the email.