Human-Motion to Robot-Motion Remapping
Many situations could benefit from mapping human arm motions to robot arm motions. For example, a person could control a robot in real time by having the robot mimic the operator's arm motions, or a person could teach a robot by demonstrating a task with their own hand in the robot's workspace. While such a mapping has many potential applications, a robot's vastly different motion capabilities, stemming from differences in scale, speed, geometry, and even number of degrees of freedom, make a direct mapping infeasible.
In this line of work, we propose methods that bridge the gap between human motion and robot motion as much as possible for control and teaching applications. For example, in one project we introduce a novel teleoperation interface that allows novice users to effectively and intuitively control robot manipulators. The premise of our method is that an interface that lets a user direct a robot through the natural 6-DOF space of their hand would afford effective direct control of a robot arm. While a direct mapping is infeasible (for the reasons mentioned above), our key idea is that by relaxing the requirement of a direct mapping between the hand's position and orientation and the end-effector configuration, a system can give the user the feeling of direct control while still meeting the practical requirements of telemanipulation, such as motion smoothness and singularity avoidance. We present a motion synthesis method, called RelaxedIK, that achieves this relaxed control using constrained optimization, and describe systems that use it to provide real-time control of a robot arm. We demonstrate the effectiveness of our approach in a user study showing that novice users can complete a range of tasks more efficiently and enjoyably with our relaxed-mimicry-based interface than with standard interfaces.
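To make the relaxed-objective idea concrete, the sketch below poses one control-loop step as a weighted optimization over a toy three-link planar arm: end-effector pose matching is traded off against joint-motion smoothness rather than enforced exactly. The link lengths, objective weights, and smoothness terms are illustrative assumptions and this is not the actual RelaxedIK formulation.

```python
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.4, 0.3, 0.2])  # assumed link lengths (m) for a toy planar arm

def fk(q):
    """Planar forward kinematics: joint angles -> (end-effector position, heading)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))]), angles[-1]

def objective(q, goal_pos, goal_heading, q_prev, q_prev2,
              w_pos=50.0, w_rot=10.0, w_vel=5.0, w_acc=1.0):
    """Weighted sum of pose-matching error and joint-motion smoothness terms."""
    pos, heading = fk(q)
    pos_err = np.sum((pos - goal_pos) ** 2)
    rot_err = np.arctan2(np.sin(heading - goal_heading),
                         np.cos(heading - goal_heading)) ** 2
    vel = q - q_prev                   # discrete joint velocity
    acc = q - 2.0 * q_prev + q_prev2   # discrete joint acceleration
    return (w_pos * pos_err + w_rot * rot_err
            + w_vel * np.dot(vel, vel) + w_acc * np.dot(acc, acc))

def solve_step(goal_pos, goal_heading, q_prev, q_prev2):
    """One control-loop step, warm-started from the previous solution."""
    res = minimize(objective, q_prev,
                   args=(goal_pos, goal_heading, q_prev, q_prev2),
                   method="L-BFGS-B")
    return res.x

# Track a hand target that drifts a little each frame.
q2 = q1 = np.zeros(3)
for t in np.linspace(0.0, 1.0, 20):
    goal = np.array([0.6 + 0.1 * t, 0.2 * t])
    q = solve_step(goal, 0.0, q1, q2)
    q2, q1 = q1, q
```

Warm-starting each solve from the previous joint configuration and penalizing velocity and acceleration is what keeps the synthesized motion smooth even when the operator's hand moves toward poses the arm cannot match exactly.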
Another project in this space has looked at how to effectively teach a robot using natural human-arm motion demonstrations. In this work, users provide demonstrations with instrumented tongs that accurately capture position, orientation, force, and torque information throughout the demonstration. The demonstration data are then used to compute a feasible execution trace for the robot, so that the robot robustly performs the task exhibited during the demonstration. We have presented work on how to create instrumented tongs for robot demonstrations as well as constraint detection and playback methods for robot learning, as illustrated in the sketch below.
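As one concrete illustration of working with such demonstration data, the sketch below segments a recorded force trace into free-space and in-contact phases with a simple threshold and a minimum-duration filter. The record layout, threshold value, and filtering are assumptions made for illustration, not the constraint-detection method described in the papers.

```python
import numpy as np

def segment_contact_phases(times, forces, force_threshold=2.0, min_samples=5):
    """Return (start_time, end_time) pairs where the force magnitude stays above threshold.

    times  : (N,) array of sample timestamps (s)
    forces : (N, 3) array of measured contact forces (N)
    """
    in_contact = np.linalg.norm(forces, axis=1) > force_threshold
    segments, start = [], None
    for i, flag in enumerate(in_contact):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:          # ignore brief force spikes
                segments.append((times[start], times[i - 1]))
            start = None
    if start is not None and len(times) - start >= min_samples:
        segments.append((times[start], times[-1]))
    return segments

# Example: a synthetic trace with roughly one second of contact in the middle.
t = np.linspace(0.0, 3.0, 300)
f = np.zeros((300, 3))
f[100:200, 2] = 5.0                               # 5 N normal force while touching
print(segment_contact_phases(t, f))               # -> approximately [(1.0, 2.0)]
```

Segments like these can then be treated differently during playback, for example tracking recorded forces while in contact and tracking recorded poses while moving through free space.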

The Team
- Daniel Rakita, Assistant Professor, Yale University
- Guru Subramani, Systems Analyst, Intuitive
- Pragathi Praveena, PhD Student
- Mike Hagenow, PhD Student
- Bilge Mutlu, Sheldon B. and Marianne S. Lubar Professor, Computer Science
- Michael Gleicher, Professor, Computer Sciences
Sponsors
- National Science Foundation under award 1208632
- University of Wisconsin–Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation
The Robot
- Sawyer