Natural and Intuitive Telemanipulation Interfaces
Telemanipulation systems, in which people leverage robot platforms to project their manipulation abilities into remote, dangerous, or high-precision settings, are valuable in scenarios where automation is impractical, where human judgment is essential, or where keeping the user engaged in the task is desirable. Although many facets of telemanipulation have been developed and studied since the late 1950s, the interfaces behind such systems still exhibit significant usability issues. They are tedious to use, especially for complex tasks, because operators must break tasks down into sequences of atomic actions. They often require mode switching, such as between translation and rotation modes or between end-effector-space and joint-space modes, which reduces fluency in completing tasks. And because they are primarily designed for expert users such as engineers or military personnel, they are difficult for novice users and have a pronounced learning curve.
In this area of research, we have developed numerous telemanipulation interfaces designed to be easy to learn and operate while remaining robust and general purpose. The central idea of our control paradigm is to map human-arm motion to robot-arm motion in real time so that users can effectively and efficiently specify how a robot should perform a task on the fly. We posit that enabling users to work in the "natural" space of their own arms allows them to draw on their inherent kinesthetic sense and ability to perform tasks when controlling a robot. That is, mapping the movement of the user's arm(s) to the movement of the robot can enable intuitive and effective control without significant training.
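To make this mapping concrete, the sketch below shows one simple way such a mapping could be realized: the robot's end-effector goal tracks the displacement of the user's hand from where the interaction started, rather than the hand's absolute pose. The class and parameter names are illustrative assumptions, not code from our systems, and orientation would be handled analogously.

```python
# Minimal sketch of a relative "mimicry" mapping (illustrative names only).
# The robot's end-effector goal is its starting pose offset by however far
# the user's hand has moved since the interaction began.
import numpy as np

class MimicryMapper:
    def __init__(self, hand_pos_init, ee_pos_init, scale=1.0):
        self.hand_pos_init = np.asarray(hand_pos_init)  # user's hand at start (m)
        self.ee_pos_init = np.asarray(ee_pos_init)      # robot end effector at start (m)
        self.scale = scale                              # optional workspace scaling

    def ee_goal(self, hand_pos):
        # Goal = robot start + scaled displacement of the user's hand.
        displacement = np.asarray(hand_pos) - self.hand_pos_init
        return self.ee_pos_init + self.scale * displacement

mapper = MimicryMapper(hand_pos_init=[0.0, 0.0, 1.0], ee_pos_init=[0.4, 0.0, 0.5])
print(mapper.ee_goal([0.1, 0.0, 1.05]))  # -> [0.5, 0.0, 0.55]
```

Because only relative motion matters, the user can work comfortably in their own workspace even when the robot's reach and placement differ.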
The premise of our method is that, although a direct mapping between the user's hand and the robot's end effector is impractical, because the robot's kinematic and speed capabilities differ from those of the human arm, we can relax the constraint that hand position and orientation map directly to end-effector configuration. The system can thus provide the user with the feel of direct control while still meeting the practical requirements of telemanipulation, such as motion smoothness and singularity avoidance. We presented methods for implementing a motion-retargeting solution that achieves this relaxed control through a constrained optimization called RelaxedIK, and we described a system that uses it to provide real-time control of a robot arm. We demonstrated the effectiveness of our approach in a user study showing that novice users can complete a range of tasks more efficiently and enjoyably with our relaxed-mimicry-based interface than with standard interfaces.
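The following minimal sketch conveys the "relaxed" formulation: end-effector matching becomes a weighted objective traded off against other terms (here, joint-space smoothness) rather than a hard constraint. The toy planar arm, weights, and objective terms are assumptions for illustration; this is not the actual RelaxedIK formulation or code.

```python
# Hedged sketch of relaxed IK: pose matching is one weighted objective among
# several, so the solver may deviate slightly from the goal to keep the
# motion smooth and feasible.
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.4, 0.3, 0.2])  # toy 3-link planar arm link lengths (m)

def fk(q):
    """Forward kinematics of the planar arm: joint angles (rad) -> (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def relaxed_objective(q, goal, q_prev, w_pos=50.0, w_smooth=1.0):
    # End-effector matching is traded off against joint-space smoothness
    # (distance from the previous solution) via illustrative weights.
    pos_err = np.sum((fk(q) - goal) ** 2)
    smooth = np.sum((q - q_prev) ** 2)
    return w_pos * pos_err + w_smooth * smooth

q_prev = np.array([0.3, 0.4, 0.2])           # previous frame's solution
goal = fk(q_prev) + np.array([0.02, 0.01])   # small per-frame goal update
res = minimize(relaxed_objective, q_prev, args=(goal, q_prev))
print(res.x, fk(res.x))
```

Solving this per frame, warm-started from the previous solution, yields a stream of joint configurations that follows the user's hand while damping abrupt jumps.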
In follow-up work, we presented a method that improves the ability of remote users to operate a robot arm by continuously providing them with an effective viewpoint using a second, camera-in-hand robot arm. The user controls the manipulation robot using the motion-remapping method discussed above, and the camera-in-hand robot automatically servos to provide the view of the remote environment that is estimated to best support effective manipulation. Our method avoids occlusions with the manipulation arm to improve visibility, provides both contextual and detailed views of the environment by varying the camera-target distance, uses motion prediction to cover the space of the user's next manipulation actions, and actively corrects views to avoid disorienting the user as the camera moves. Through multiple user studies, we have shown that our method improves remote telemanipulation performance over alternative methods of providing visual support for telemanipulation.
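As a rough illustration of the underlying idea, candidate camera placements can be scored by a weighted cost reflecting the criteria above, with the camera arm servoing toward the best-scoring candidate. The penalty terms, weights, and sampling scheme below are illustrative assumptions, not the published method's actual cost function.

```python
# Hedged sketch of viewpoint scoring: penalize occlusion by the manipulation
# arm, deviation from a preferred viewing distance, and large camera moves.
import numpy as np

def occlusion_penalty(cam_pos, target, arm_pos):
    """Penalize sight lines that pass close to the manipulation arm."""
    ray = target - cam_pos
    t = np.clip(np.dot(arm_pos - cam_pos, ray) / np.dot(ray, ray), 0.0, 1.0)
    closest = cam_pos + t * ray               # closest point on the sight line
    return 1.0 / (1e-3 + np.linalg.norm(arm_pos - closest))

def viewpoint_cost(cam_pos, cam_prev, target, arm_pos, d_pref=0.6,
                   w_occ=1.0, w_dist=2.0, w_move=4.0):
    dist = abs(np.linalg.norm(cam_pos - target) - d_pref)  # context vs. detail
    move = np.linalg.norm(cam_pos - cam_prev)              # avoid disorienting jumps
    return (w_occ * occlusion_penalty(cam_pos, target, arm_pos)
            + w_dist * dist + w_move * move)

# Pick the lowest-cost candidate from a sampled set around the target.
rng = np.random.default_rng(0)
target = np.array([0.5, 0.0, 0.3])     # estimated manipulation target
arm_pos = np.array([0.4, 0.1, 0.4])    # proxy point on the manipulation arm
cam_prev = np.array([0.2, -0.5, 0.6])  # camera position from the last frame
candidates = target + rng.normal(scale=0.5, size=(200, 3))
best = min(candidates, key=lambda c: viewpoint_cost(c, cam_prev, target, arm_pos))
print(best)
```

In this framing, motion prediction would simply shift the target toward where the user's hand is expected to go next, so the camera leads rather than trails the action.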
More recently, we (along with collaborators at the Naval Research Laboratory in Washington, D.C.) have enabled robots to perform bimanual tasks more effectively by introducing a bimanual shared-control method. The method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. It uses a bimanual action vocabulary, constructed by analyzing how people perform two-handed manipulations, as the core abstraction for reasoning about how to assist in bimanual shared autonomy. The method infers which action from the vocabulary is occurring using a sequence-to-sequence recurrent neural network and activates a corresponding assistance mode: control signals introduced into the shared-control loop that make performing that bimanual action easier or more efficient. We demonstrated the effectiveness of our method through two user studies showing that novice users can control a robot to complete a range of complex manipulation tasks more successfully with our method than with alternative approaches.
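As a rough illustration of the inference step, the sketch below classifies a window of two-hand motion features into an action vocabulary with a recurrent network, simplifying the paper's sequence-to-sequence architecture to per-window classification. The feature layout, vocabulary size, and architecture details are assumptions for illustration.

```python
# Hedged sketch of bimanual action inference with a recurrent network
# (PyTorch). Feature count, hidden size, and vocabulary size are
# illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class BimanualActionClassifier(nn.Module):
    def __init__(self, n_features=14, hidden=64, n_actions=7):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)  # one logit per vocabulary action

    def forward(self, motion_window):
        # motion_window: (batch, time, features), e.g. both wrists' poses per frame.
        _, (h_n, _) = self.rnn(motion_window)
        return self.head(h_n[-1])                 # logits over the action vocabulary

model = BimanualActionClassifier()
window = torch.randn(1, 30, 14)                   # 30 frames of two-hand features
action_id = model(window).argmax(dim=-1)          # index into the action vocabulary
# A shared controller would then enable the assistance mode mapped to action_id.
```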
The Team
- Daniel Rakita, Assistant Professor, Yale University
- Bilge Mutlu, Sheldon B. and Marianne S. Lubar Professor, Computer Science
- Michael Gleicher, Professor, Computer Sciences
Sponsors
- National Science Foundation under award 1208632
- University of Wisconsin–Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation
The Robot
- UR5
- Skills: Safe | High repeatability | Robust | Payload: 5 kg
Publications
Rakita, D., Mutlu, B., and Gleicher, M. 2019. Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments. Robotics: Science and Systems (RSS).
Rakita, D., Mutlu, B., Gleicher, M., and Hiatt, L. 2019. Shared-Control-Based Bimanual Robot Manipulation. Science Robotics.
Rakita, D., Mutlu, B., and Gleicher, M. 2018. RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion. Robotics: Science and Systems (RSS).
Rakita, D., Mutlu, B., and Gleicher, M. 2018. An Autonomous Dynamic Camera Method for Effective Remote Teleoperation. International Conference on Human-Robot Interaction (HRI). ACM/IEEE. Best Paper Award Winner.
Rakita, D., Mutlu, B., Gleicher, M., and Hiatt, L. 2018. Shared Dynamic Curves: A Shared-Control Telemanipulation Method for Motor Task Training. International Conference on Human-Robot Interaction (HRI). ACM/IEEE.
Rakita, D., Mutlu, B., and Gleicher, M. 2017. A Motion Retargeting Method for Effective Mimicry-based Teleoperation of Robot Arms. International Conference on Human-Robot Interaction (HRI). ACM/IEEE.