From autonomous cars in cities to mobile manipulators at home, our lab aims to design robots that reliably interact with people.
What makes this hard is that human behavior---especially when interacting with other agents---is vastly complex, varying across individuals, environments, and time.
Thus, robots rely on data and machine learning throughout the design process and during deployment to build and refine models of humans.
However, by blindly trusting their data-driven human models, today's robots confidently plan unsafe behaviors around people. For example, autonomous drones collide head-on with pedestrians whose behavior they could not predict (left), and robots that learn from physical human corrections can consistently misinterpret feedback during a pick-and-place task: instead of learning to move coffee mugs close to the table, the robot erroneously learns to move them at an angle, resulting in spilled coffee and miscoordination (right).
Our goal is to develop robots that interact safely and intelligently despite imperfect human models: autonomous vehicles that automatically slow down around erratic pedestrians; assistive robots that only learn from human feedback they can understand; mobile manipulators that continually refine their representations of an end-user's interaction preferences.