Robots that Learn to Safely Influence via
Prediction-Informed Reach-Avoid Dynamic Games

Carnegie Mellon University

Left: Naively applying a safe controller or an influence-aware model in isolation can result in incomplete (i.e., not live) or unsafe behavior. Right: With our method (SLIDE), the robot can safely influence the human and reach its own object.

Abstract

Robots can influence people to accomplish their tasks more efficiently: autonomous cars can inch forward at an intersection to pass through, and tabletop manipulators can reach for an object on the table first. However, naively exercising this influence can compromise the safety of nearby people. In this work, we pose and solve a novel robust reach-avoid dynamic game that enables robots to be maximally influential, but only when a safety backup control exists. On the human side, we model the human's behavior as goal-driven but conditioned on the robot's plan, enabling us to capture influence. On the robot side, we solve the dynamic game in the joint physical and belief space, enabling the robot to reason about how its uncertainty in human behavior will evolve over time. We instantiate our method, called SLIDE (Safely Leveraging Influence in Dynamic Environments), in a high-dimensional (39-D) simulated human-robot collaborative manipulation task solved via offline game-theoretic reinforcement learning. We compare our approach to a robust baseline that treats the human as a worst-case adversary, a safety controller that does not explicitly reason about influence, and an energy-function-based safety shield. We find that SLIDE consistently enables the robot to leverage the influence it has on the human when it is safe to do so, ultimately allowing the robot to be less conservative while still ensuring a high safety rate during task execution.
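To make the reach-avoid game concrete, below is a minimal sketch of the Bellman backup that underlies robust reach-avoid value iteration, on a toy tabular problem. It assumes the common convention that the target margin l(x) is positive inside the target set and the safety margin g(x) is positive outside the failure set, so V(x) > 0 means the robot can reach the target without ever entering the failure set. The names (step, l, g) and the tabular setting are our illustrative assumptions; the paper instead learns a policy and value over the joint physical and belief space via offline game-theoretic RL.

```python
import numpy as np

def reach_avoid_backup(V, states, robot_controls, human_controls, step, l, g):
    """One sweep of robust reach-avoid value iteration.

    `step(x, u, d)` returns the index (into `states`) of the next joint state.
    Initialize with V[i] = min(g(x_i), l(x_i)) and iterate to convergence.
    """
    V_new = np.empty_like(V)
    for i, x in enumerate(states):
        # Robot maximizes; the simulated human adversary minimizes within
        # its (prediction-informed) control set.
        best_over_robot = max(
            min(V[step(x, u, d)] for d in human_controls)
            for u in robot_controls
        )
        # Avoid the failure set now (g) while either reaching the target now
        # (l) or preserving the ability to do so later (best_over_robot).
        V_new[i] = min(g(x), max(l(x), best_over_robot))
    return V_new
```

The backup keeps the robot maximally task-driven (the max over its controls) while guaranteeing that a safety backup exists against any adversary action inside the modeled human control set.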


Method Overview

(left) Before solving the reach-avoid game, we specify the target set (goal locations), the failure set (collisions), and a conditional behavior prediction (CBP) model that predicts the human's future trajectory conditioned on the robot's future plan. (center) During simulated gameplay, the SLIDE robot policy π* is trained against a simulated human adversary whose control bounds are informed by the CBP model. (right) Online, the robot deploys its robust SLIDE policy to safely influence any human it encounters.
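A sketch of how a CBP prediction could be turned into a control bound for the simulated human adversary during gameplay, under assumptions of ours: the CBP model exposes a `predict(human_state, robot_plan)` call returning a predicted human trajectory, nominal human controls are recovered by finite differences, and the adversary may deviate from them by at most `radius`. None of these names are the paper's exact interface.

```python
import numpy as np

def cbp_informed_bound(cbp_model, human_state, robot_plan, dt, radius):
    """Nominal human controls plus the allowed adversarial deviation."""
    pred_traj = cbp_model.predict(human_state, robot_plan)  # (T, dim)
    nominal_controls = np.diff(pred_traj, axis=0) / dt       # (T-1, dim)
    return nominal_controls, radius

def adversary_control(nominal_u, radius, rng):
    """Simulated human adversary: any control within `radius` of the nominal."""
    return nominal_u + rng.uniform(-radius, radius, size=nominal_u.shape)
```

Because the nominal controls are conditioned on the robot's plan, the adversary is only allowed to deviate around the influenced prediction rather than around arbitrary worst-case behavior.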



Baselines

Marginal-RA has a similar structure to the SLIDE policy, but does not consider the influence that the robot's future plan has on the human's future trajectory (i.e. it uses a marginal prediction model).

Robust-RA treats the human as a worst-case adversary and does not consider any prediction model of the human.
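To make the distinction concrete, here is a hypothetical sketch of the two prediction interfaces (class and method names are ours, not the paper's). Robust-RA uses neither.

```python
from abc import ABC, abstractmethod
import numpy as np

class MarginalPredictor(ABC):
    """Marginal-RA's predictor: the human forecast ignores the robot's plan."""

    @abstractmethod
    def predict(self, human_state: np.ndarray) -> np.ndarray:
        """Return a predicted human trajectory of shape (T, dim)."""

class ConditionalBehaviorPredictor(ABC):
    """SLIDE's CBP model: the forecast is conditioned on the robot's plan."""

    @abstractmethod
    def predict(self, human_state: np.ndarray,
                robot_plan: np.ndarray) -> np.ndarray:
        """Return a predicted human trajectory of shape (T, dim) that depends
        on the robot's candidate future plan, capturing influence."""
```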


Policy Comparison

SLIDE Policy

Marginal-RA Policy

Robust-RA Policy


Closed-Loop Simulations: SLIDE, Marginal-RA, and Robust-RA policies starting from the same initial condition. SLIDE anticipates that the human will be influenced to move out of its way, so it goes for the blue bottle and reaches its goal the fastest (the human changes its mind from the blue bottle to the yellow mug at t=1.2s). Marginal-RA waits until the human is out of its way and then goes for the yellow mug. Robust-RA stays cautious even as the human moves toward a different goal and finishes last.

Effect of Conditional Behavior Prediction (CBP) Model

Most-likely mode of SLIDE's CBP model given different future robot plans. Each robot plan has a corresponding human prediction shown in the same color. The predictions depend strongly on the robot's plan and capture the idea that the human will switch to a goal of a different semantic class.
The table shows the ADE (FDE) of the CBP model and of the marginal prediction model used for the Marginal-RA baseline. While both models achieve similar ADE, the CBP model lowers the FDE, particularly on interactive states (i.e., states where the human changes goals).
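For reference, ADE and FDE are the standard trajectory-prediction errors; a minimal sketch with predicted and ground-truth trajectories as (T, dim) arrays:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean pointwise Euclidean distance."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error: Euclidean distance at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))
```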
We measure the size of the inferred control bound for the human model used in offline simulated gameplay. On the full dataset, the CBP model results in a smaller control bound on average. This implies that SLIDE's downstream policy (which uses the CBP model) will be able to exploit its influence on the human and thus choose less conservative actions.
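One plausible way to quantify the size of the inferred control bound, assuming it is measured as the deviation the human model must be allowed around its prediction; the residual definition and the 95th-percentile coverage level are our illustrative choices, not necessarily the paper's:

```python
import numpy as np

def control_bound_size(predicted_controls, observed_controls, quantile=0.95):
    """Bound radius needed to cover most residuals between the model's
    predicted human controls and the controls observed in the data."""
    residuals = np.linalg.norm(observed_controls - predicted_controls, axis=-1)
    return float(np.quantile(residuals, quantile))
```

A smaller bound for the CBP model means the simulated adversary has less freedom to deviate from the influenced prediction, which is what lets the downstream SLIDE policy act less conservatively.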