Tucker Hermans portrait
  • Adjunct Assistant Professor, Mechanical Engineering
  • Assistant Professor, School of Computing
+1-801-581-8122

Publications

  • Balakumar Sundaralingam & Tucker Hermans (2021). In-Hand Object-Dynamics Inference using Tactile Fingertips. IEEE Transactions on Robotics. Published, 01/2021.
  • Visak C. V. Kumar, Tucker Hermans, Dieter Fox, Stan Birchfield & Jonathan Tremblay (2020). Contextual Reinforcement Learning of Visuo-tactile Multi-fingered Grasping Policies. NeurIPS Workshop on Robot Learning. Published, 12/2020.
  • Klevis Aliaj, Gentry Feeney, Balakumar Sundaralingam, Tucker Hermans, K. Bo Foreman, Kent N. Bachus & Heath B. Henninger (2020). Replicating Dynamic Humerus Motion using an Industrial Robot. PLOS ONE. Published, 11/2020.
    https://journals.plos.org/plosone/article?id=10.13...
  • Qingkai Lu, Mark Van der Merwe & Tucker Hermans (2020). Multi-Fingered Active Grasp Learning. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Published, 10/2020.
  • Matthew N. Goodell, Takara E. Truong, Stephanie R. Marston, Brett J. Smiley, Elliot R. Befus, Alex Bingham, Kent Allen, Joseph R. Bourne, Yi Wei, Kate E. Magargal, Vellachi Ganesan, Daniel L. Mendoza, Anil C. Seth, Stacy A. Harwood, Marc Bodson, Tucker Hermans & Kam K. Leang (2020). Autonomous Light Assessment Drone for Dark Skies Studies. ASME Dynamic Systems and Control Conference. Published, 10/2020.
    https://robot-learning.cs.utah.edu/_media/project/...
  • Roya Sabbagh Novin, Ellen Taylor, Tucker Hermans & Andrew Merryweather (2020). Development of a Novel Computational Model for Evaluating Fall Risk in Patient Room Design. HERD: Health Environments Research & Design Journal. Published, 09/24/2020.
  • Katharin Jensen-Nau, Tucker Hermans & Kam K. Leang (2020). Near-Optimal Area-Coverage Path Planning of Energy Constrained Aerial Robots with Application in Autonomous Environmental Monitoring. IEEE Transactions on Automation Science and Engineering. Published, 08/31/2020.
  • Mark Van der Merwe, Qingkai Lu, Balakumar Sundaralingam, Martin Matak & Tucker Hermans (2020). Learning Continuous 3D Reconstructions for Geometrically Aware Grasping. IEEE International Conference on Robotics and Automation. Published, 05/2020.
    https://sites.google.com/view/reconstruction-grasp...
  • Qingkai Lu, Mark Van der Merwe, Balakumar Sundaralingam & Tucker Hermans (2020). Multi-Fingered Grasp Planning via Inference in Deep Neural Networks. IEEE Robotics and Automation Magazine. Published, 02/2020.
  • James D Carrico, Tucker Hermans, K. J. Kim & Kam K. Leang (2019). 3D-Printing and Machine Learning Control of Soft Ionic Polymer-Metal Composite Actuators. Scientific Reports (Special Collection: Soft Sensors and Actuators). Published, 11/2019.
  • Matthew Wilson & Tucker Hermans (2019). Learning to Manipulate Object Collections Using Grounded State Representations. Conference on Robot Learning. Published, 11/2019.
  • Seth Payne, C. Fletcher Garrison IV, Steven E. Markhan, Tucker Hermans & Kam K. Leang (2019). Assembly Planning using a Multi-arm System for Polygonal Furniture. ASME Dynamic Systems and Control Conference (DSCC). Published, 10/2019.
  • Adam Conkey & Tucker Hermans (2019). Learning Task Constraints from Demonstration for Hybrid Force/Position Control. IEEE-RAS International Conference on Humanoid Robotics (Humanoids). Published, 10/2019.
  • Adam Conkey & Tucker Hermans (2019). Active Learning of Probabilistic Movement Primitives. IEEE-RAS International Conference on Humanoid Robotics (Humanoids). Published, 10/2019.
  • James R. Watson & Tucker Hermans (2019). Assembly Planning by Subassembly Decomposition Using Blocking Reduction. Robotics and Automation Letters. Vol. 4. Published, 10/2019.
  • Balakumar Sundaralingam, Alexander Lambert, Ankur Handa, Byron Boots, Tucker Hermans, Stan Birchfield, Nathan Ratliff & Dieter Fox (2019). Robust Learning of Tactile Force Estimation through Robot Interaction. IEEE International Conference on Robotics and Automation (ICRA). Published, 06/2019.
  • Qingkai Lu & Tucker Hermans (2019). Modeling Grasp Type Improves Learning-Based Grasp Planning. IEEE Robotics and Automation Letters. Published, 01/2019.
  • Roya Sabbagh Novin, Amir Yazdani, Tucker Hermans & Andrew Merryweather (2018). Dynamics Model Learning and Manipulation Planning for Objects in Hospitals using a Patient Assistant Mobile (PAM) Robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Published, 10/2018.
  • Filipe Veiga, Jan Peters & Tucker Hermans (2018). Grip Stabilization of Novel Objects using Slip Prediction. IEEE Transactions on Haptics. Published, 06/2018.
  • Balakumar Sundaralingam & Tucker Hermans (2018). Relaxed-Rigidity Constraints: Kinematic Trajectory Optimization and Collision Avoidance for In-Grasp Manipulation. Autonomous Robots. Published, 06/2018.
  • Balakumar Sundaralingam & Tucker Hermans (2018). Geometric In-Hand Regrasp Planning: Alternating Optimization of Finger Gaits and In-Grasp Manipulation. IEEE International Conference on Robotics and Automation. Published, 01/12/2018.
  • Qingkai Lu, Kautilya Chenna, Balakumar Sundaralingam & Tucker Hermans (2017). Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network. International Symposium on Robotics Research. Published, 12/11/2017.
  • Balakumar Sundaralingam & Tucker Hermans (2017). Relaxed-Rigidity Constraints: In-Grasp Manipulation using Purely Kinematic Trajectory Optimization. Robotics: Science and Systems (RSS). Published, 07/12/2017.
    http://www.roboticsproceedings.org/rss13/p15.html
  • Katie M. Popek, Tucker Hermans & Jake J. Abbott (2017). First Demonstration of Simultaneous Localization and Propulsion of a Magnetic Capsule in a Lumen using a Single Rotating Magnet. IEEE International Conference on Robotics and Automation (ICRA). Published, 01/15/2017.
  • Zhengkun Yi, Roberto Calandra, Herke van Hoof, Filipe Veiga, Tucker Hermans, Yilei Zhang & Jan Peters (2016). Active Tactile Object Exploration with Gaussian Processes. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Published, 07/01/2016.

Research Statement

In order for robots to leave sterile factory floors and precisely calibrated laboratory settings,
they must be endowed with sophisticated manipulation capabilities that are robust to the
uncertainty of the real world. My research centers on developing the algorithms and systems
required to exploit rich 3D visual and tactile sensor data to improve robot manipulation in
the real world. My work to date has shown how robots can autonomously discover objects,
improve manipulation skills, and generalize these skills to previously unseen objects: crucial
abilities missing in currently deployed robots. A coordinated interplay of perception and
action fundamentally enables these methods’ success. Perception not only guides the robot
during manipulation, but also provides a learning signal by analyzing the results of the
robot’s actions. Further, action provides the means to improve perception: a robot can
change its environment to reduce uncertainty and remove ambiguity in the scene. By jointly
considering perception and action in developing manipulation skills, I am able to design
straightforward, efficient algorithms that produce robust and reliable results. I divide my
research contributions to autonomous manipulation into three broad areas: (1) autonomous,
self-supervised learning; (2) in-hand manipulation; and (3) manipulation in clutter.
 
(1) Autonomous Self-Supervised Learning
Robots operating in open environments such as homes and offices will regularly be required
to manipulate novel objects. My research [4, 6] has shown that visual feedback control can
provide an effective means for robots to push novel objects in a predictable manner. However,
the performance of these pushing controllers depends heavily on the initial location the robot
chooses to push on the object. As such, I examined whether a robot can autonomously learn
where on an object it should push in order to move the object in a stable manner. Importantly,
we wish for the robot to autonomously collect and label its own data, an approach we call
“self-supervised learning.” Prior to interaction, shape is the feature most indicative of how
an object can be manipulated. Not only does shape directly describe the object’s geometry,
which is important for characterizing expected contact with a manipulator, it also indirectly provides
information as to an object’s mass and friction distributions. When a robot encounters
an object for the first time, the robot does not have access to the full physical description
of the object necessary to simulate how the object will behave. In our approach [2, 3],
the robot extracts a set of 3D visual features centered at the location it chooses to push,
performs the push, and computes a push stability score by analyzing the observed object
trajectory. My research has shown that a robot can learn to predict these push stability
scores from the shape proxy features. By effectively predicting push stability scores, the
robot can reliably transport an object seen for the first time to a desired goal pose. The
shape proxy provides two important capabilities. First it encodes a concise description of
the object allowing learning to take place from relatively few example interactions. Second,
it provides a generalized notion of shape allowing the learned predictor to generalize its push
stability knowledge to previously unseen objects.
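As an illustrative sketch only (not the published system of [2, 3]), the self-supervised loop above can be mimicked in a few dozen lines: extract local shape features at a candidate push point, score the observed trajectory, and fit a stability predictor. The feature extractor, scoring rule, synthetic data, and ridge-regression predictor below are all simplified, hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_shape_features(points, push_loc, k=30):
    """Hypothetical shape proxy: summary statistics of the k points
    nearest to the chosen push location on the object's point cloud."""
    d = np.linalg.norm(points - push_loc, axis=1)
    local = points[np.argsort(d)[:k]]
    centered = local - local.mean(axis=0)
    # Local covariance eigenvalues give a crude curvature/flatness descriptor.
    eigvals = np.linalg.eigvalsh(centered.T @ centered / k)
    return np.concatenate([local.mean(axis=0), eigvals])

def push_stability_score(trajectory):
    """Score a push by how straight the observed object trajectory was
    (lower lateral deviation means a more stable push)."""
    start, end = trajectory[0], trajectory[-1]
    direction = (end - start) / (np.linalg.norm(end - start) + 1e-9)
    lateral = trajectory - start - ((trajectory - start) @ direction)[:, None] * direction
    return float(-np.abs(lateral).mean())

# Self-supervised data collection on synthetic "objects": the robot pushes,
# observes the trajectory, and labels the trial itself.
X, y = [], []
for _ in range(200):
    cloud = rng.normal(size=(200, 3))
    loc = cloud[rng.integers(len(cloud))]
    # Synthetic outcome: trajectory wobble grows with distance from the
    # centroid, standing in for the real observed object motion.
    wobble = np.linalg.norm(loc - cloud.mean(axis=0))
    traj = np.linspace([0, 0], [1, 0], 20) + rng.normal(scale=0.02 * wobble, size=(20, 2))
    X.append(extract_shape_features(cloud, loc))
    y.append(push_stability_score(traj))
X, y = np.array(X), np.array(y)

# Ridge regression stands in for the learned stability predictor.
A = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y)
predict = lambda f: np.append(f, 1.0) @ w

# At test time, score candidate push locations on a novel object, pick the best.
novel = rng.normal(size=(200, 3))
candidates = novel[rng.integers(len(novel), size=20)]
best = max(candidates, key=lambda c: predict(extract_shape_features(novel, c)))
print("chosen push location:", np.round(best, 2))
```

The key property mirrored here is that the label comes from the robot's own observation of the push outcome, so no human annotation enters the loop.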
 
We have extended this autonomous learning paradigm to tactile sensing in order to provide
robots with an additional signal beyond vision for learning and assessing manipulation
actions. We examined the problem of a robot automatically detecting and predicting slip
between its finger and an object [11]. Here again, the aim is for the robot to generalize
its prediction capabilities to objects with which it has no previous experience. The robot
records trials of several different objects slipping and learns to predict slip on novel objects
from the tactile sensor signals. This slip detector was then embedded in a feedback controller
to allow the robot to stabilize its grip. The strongly coupled perception and manipulation
approach fundamentally enables the controller’s generalization to novel objects.
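A minimal sketch of such a slip-prediction feedback loop follows; the synthetic tactile windows, the logistic-regression predictor, and the `grip_controller` law are hypothetical stand-ins for the learned components of [11], not the published method:

```python
import numpy as np

rng = np.random.default_rng(1)

def tactile_features(window):
    """Hypothetical features from a short window of tactile readings:
    mean level plus high-frequency energy (slip shows up as vibration)."""
    return np.array([window.mean(), np.abs(np.diff(window)).mean()])

def make_window(slipping):
    """Synthetic tactile window; slipping adds an oscillatory component."""
    base = rng.normal(1.0, 0.05, size=32)
    if slipping:
        base += 0.3 * np.sin(np.linspace(0, 40, 32)) + rng.normal(0, 0.1, 32)
    return base

# Self-labeled training data: windows recorded while objects were held
# stably (label 0) or observed slipping (label 1).
X = np.array([tactile_features(make_window(s)) for s in [0, 1] * 100])
y = np.array([0, 1] * 100, dtype=float)

# Logistic-regression slip predictor trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

def grip_controller(window, force, gain=0.5, f_max=10.0):
    """Feedback law: when slip is predicted, increase grip force in
    proportion to the predicted slip probability."""
    p_slip = 1.0 / (1.0 + np.exp(-(tactile_features(window) @ w + b)))
    return min(f_max, force + gain * p_slip) if p_slip > 0.5 else force

force = 2.0
for t in range(20):
    slipping = t >= 10            # the object starts to slip halfway through
    force = grip_controller(make_window(slipping), force)
print("final grip force:", round(force, 2))
```

The controller never needs an object model: it only maps the tactile signal to a slip probability and reacts, which is what lets it transfer to novel objects.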
 
(2) In-Hand Manipulation
In-hand manipulation comprises a major thrust of my recent and future research. A grasp
capable of lifting a screwdriver differs greatly from the grasp needed in using the driver to
tighten or loosen bolts. Effective, versatile tool use by robots will require such an ability to
lift and regrasp objects in hand. In [10] we examine the task of rolling an object between two
fingers in an under-actuated hand with embedded tactile sensing. We can exploit compliance
and tactile feedback to adapt to unknown objects in grasping; however, compliant hands
and tactile sensors add complexity and are themselves difficult to model. Hence, we propose
acquiring in-hand manipulation skills through model-free reinforcement learning, which does
not require analytic dynamics or kinematics models. We show that this approach successfully
acquires a tactile manipulation skill using a passively compliant hand. Additionally, we show
that the learned tactile skill generalizes to novel objects.
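As a toy illustration of model-free skill acquisition (the work in [10] uses its own policy-search method on a real tactile hand; everything below is a hypothetical stand-in), the following sketch learns a linear policy over a noisy tactile observation by random-perturbation policy search, consulting no dynamics model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the tactile rolling task: the state is the object's roll
# angle; the policy sees only a noisy "tactile" estimate of it; the reward is
# negative distance to a target angle. The agent never sees the dynamics.
TARGET = 0.8

def rollout(w):
    """Run one episode with the linear policy a = w[0] * tactile + w[1]."""
    angle, tactile, ret = 0.0, 0.0, 0.0
    for _ in range(20):
        action = w[0] * tactile + w[1]
        angle += 0.1 * action                   # dynamics, unknown to the agent
        tactile = angle + 0.05 * rng.normal()   # noisy tactile observation
        ret -= abs(angle - TARGET)
    return ret

# Model-free policy search by random perturbation (hill climbing): propose a
# perturbed policy, keep it whenever the observed episode return improves.
w, best = np.zeros(2), rollout(np.zeros(2))
for _ in range(300):
    cand = w + 0.1 * rng.normal(size=2)
    ret = rollout(cand)
    if ret > best:
        w, best = cand, ret

print("learned policy:", np.round(w, 2), "return:", round(best, 1))
```

The point of the sketch is the interface, not the particular optimizer: all learning signal comes from executed rollouts and their returns, exactly the setting in which no analytic kinematics or dynamics model is required.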
 
While this reinforcement learning technique provided good results, it required large
amounts of data to work. In order to alleviate this reliance, we have examined using
trajectory optimization with simplified kinematic models to perform an in-hand manipulation
task [9]. Our approach requires no knowledge of the dynamic properties of the object or the
robot. We examine the in-grasp manipulation problem: moving an object with reference
to the palm from an initial pose to a goal pose without breaking or making contacts. We
validate our method in experiments with 10 different objects moving to several goal poses
for a total of 500 trials. Our method produces relatively low position error without ever
dropping the object, a feat not achieved by the alternative approaches tested.
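The relaxed-rigidity idea can be sketched on a planar toy problem: optimize fingertip waypoints so the grasp centroid reaches a goal while softly penalizing changes in the inter-fingertip distance, using kinematics only. The weights, the two-fingertip setup, and the plain gradient-descent solver are illustrative assumptions, not the formulation of [9]:

```python
import numpy as np

# Toy in-grasp problem: two fingertips hold an object in the plane; the
# object pose is summarized by the fingertip centroid. We optimize fingertip
# waypoints to reach a goal centroid while softly penalizing any change in
# the fingertip-to-fingertip distance ("relaxed rigidity"). No dynamics.
start = np.array([[0.0, 0.0], [1.0, 0.0]])   # initial fingertip positions
goal_centroid = np.array([0.5, 0.5])
T = 5                                        # number of waypoints

def cost(x):
    traj = x.reshape(T, 2, 2)
    c = 100.0 * np.sum((traj[-1].mean(axis=0) - goal_centroid) ** 2)  # reach goal
    d0 = np.linalg.norm(start[0] - start[1])
    for t in range(T):
        d = np.linalg.norm(traj[t, 0] - traj[t, 1])
        c += 10.0 * (d - d0) ** 2                                     # relaxed rigidity
    for t in range(1, T):
        c += np.sum((traj[t] - traj[t - 1]) ** 2)                     # smoothness
    c += 100.0 * np.sum((traj[0] - start) ** 2)                       # start constraint
    return c

def num_grad(f, x, eps=1e-5):
    """Central finite-difference gradient (keeps the sketch dependency-free)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Plain gradient descent over all waypoints at once.
x = np.tile(start, (T, 1, 1)).ravel().astype(float)
for _ in range(2000):
    x -= 0.002 * num_grad(cost, x)

final = x.reshape(T, 2, 2)[-1]
print("final centroid:", np.round(final.mean(axis=0), 2))
```

Treating rigidity as a penalty rather than a hard constraint is the design choice being illustrated: the fingertips may flex slightly relative to one another, which keeps the optimization smooth and purely kinematic.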
 
(3) Manipulation in Clutter: Object Discovery, Modeling, and Manipulation
Robot manipulation traditionally focuses on isolated interaction with a single object. Assistive
robots in homes and hospitals will need to interact with objects situated in clutter.
When many objects are amassed in a cluttered environment, determining the extent of a
single object in the scene (that is, which parts of an image or point cloud belong to the
object) may not be possible. Interaction provides the robot with a means of verifying a given
object’s extent by moving the object independently of all other elements in the scene. I have
examined this problem by having a robot perform pushing actions to discover and segment
objects in the environment [5]. My work shows that by using visual cues to hypothesize
where object boundaries may lie, a robot can efficiently separate objects in contact with
one another, which traditional segmentation methods interpret as a single object. This
separation allows the robot to know and verify what rigid bodies can be manipulated as
individual objects. Once singulated, the robot may construct models of the objects in order
to identify them in the future. While much work has been done in building visual models for
identification, my research has shown how robots can use tactile sensing to reliably identify
objects based on their material [7] and construct 3D models in a data-efficient manner [12].
In addition to discovering objects in cluttered environments, I have explored planning how
to rearrange objects in cluttered settings [1]. Importantly, our planning approach allowed
both manipulation of single objects, as well as pushing of multiple objects at once.
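The interactive verification idea can be sketched on synthetic 2D point clusters; the scene generator, the simulated push outcome, and the displacement threshold are hypothetical stand-ins for the perception system of [5]:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy interactive-singulation check: static segmentation cannot tell whether
# two touching point clusters are one rigid object or two. Push at a
# hypothesized boundary and compare the motion of the two halves: if they
# move differently, they must be separate objects.
def make_scene():
    """Two adjacent 2D point clusters that a static segmenter sees as one blob."""
    a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
    b = rng.normal([0.3, 0.0], 0.1, size=(50, 2))
    return a, b

def apply_push(a, b, two_objects, push=np.array([0.2, 0.0])):
    """Simulated outcome of pushing cluster `a`: if the clusters form one
    rigid body both move together; otherwise only `a` moves."""
    if two_objects:
        return a + push, b
    return a + push, b + push

def singulated(a0, b0, a1, b1, tol=0.05):
    """Declare two objects when the observed mean displacements differ."""
    da = (a1 - a0).mean(axis=0)
    db = (b1 - b0).mean(axis=0)
    return np.linalg.norm(da - db) > tol

for truth in (True, False):
    a0, b0 = make_scene()
    a1, b1 = apply_push(a0, b0, truth)
    print("two objects?", singulated(a0, b0, a1, b1))
```

The action itself generates the evidence: no amount of further looking at the static scene could distinguish the two hypotheses, but a single push does.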
 
References
[1] Akansel Cosgun, Tucker Hermans, Victor Emeli, and Mike Stilman. Push Planning for Object Placement
on Cluttered Table Surfaces. In IEEE/RSJ International Conference on Intelligent Robots and Systems,
September 2011.
[2] Tucker Hermans, Fuxin Li, James M. Rehg, and Aaron F. Bobick. Learning Contact Locations for Push-
ing and Orienting Unknown Objects. In IEEE-RAS International Conference on Humanoid Robotics,
October 2013.
[3] Tucker Hermans, Fuxin Li, James M. Rehg, and Aaron F. Bobick. Learning Stable Pushing Locations. In
IEEE International Conference on Developmental Learning and Epigenetic Robotics (ICDL-EPIROB),
August 2013.
[4] Tucker Hermans, James M. Rehg, and Aaron Bobick. Affordance Prediction via Learned Object At-
tributes. In ICRA Workshop on Semantic Perception, Mapping, and Exploration, May 2011.
[5] Tucker Hermans, James M. Rehg, and Aaron Bobick. Guided Pushing for Object Singulation. In
IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2012.
[6] Tucker Hermans, James M. Rehg, and Aaron F. Bobick. Decoupling Behavior, Perception, and Control
for Autonomous Learning of Affordances. In IEEE International Conference on Robotics and Automa-
tion, May 2013.
[7] Janine Hoelscher, Jan Peters, and Tucker Hermans. Evaluation of Tactile Feature Extraction for Inter-
active Object Recognition. In IEEE-RAS International Conference on Humanoid Robotics, 2015.
[8] Qingkai Lu, Kautilya Chenna, Balakumar Sundaralingam, and Tucker Hermans. Planning Multi-
Fingered Grasps as Probabilistic Inference in a Learned Deep Network. In International Symposium on
Robotics Research, 2017.
[9] Balakumar Sundaralingam and Tucker Hermans. Relaxed-Rigidity Constraints: In-Grasp Manipulation
using Purely Kinematic Trajectory Optimization. In Robotics: Science and Systems (RSS), 2017.
[10] Herke van Hoof, Tucker Hermans, Gerhard Neumann, and Jan Peters. Learning Robot In-Hand Ma-
nipulation with Tactile Features. In IEEE-RAS International Conference on Humanoid Robotics, 2015.
[11] Filipe Veiga, Herke van Hoof, Jan Peters, and Tucker Hermans. Stabilizing Novel Objects by Learning
to Predict Tactile Slip. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2015.
[12] Zhengkun Yi, Roberto Calandra, Filipe Veiga, Herke van Hoof, Tucker Hermans, Yilei Zhang, and
Jan Peters. Active Tactile Object Exploration with Gaussian Processes. In IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2016.

 

Research Keywords

  • Robotics
  • Robotic Perception
  • Robotic Manipulation
  • Robot Learning
  • Reinforcement Learning
  • Motion Planning
  • Machine Learning / Artificial Intelligence
  • Learning in Physical Systems
  • Computer Vision
  • Autonomous Systems

Presentations

  • iCub Robotics Seminar, Italian Institute of Technology. Invited Talk/Keynote, Presented, 11/2020.
  • CogSci Workshop on the Origins of Commonsense. Invited Talk/Keynote, Presented, 07/2020.
    https://ocs2020.github.io/
  • Robotics Seminar, University of California, San Diego. Invited Talk/Keynote, Presented, 05/28/2020.
  • Robotics Seminar, University of Michigan. Invited Talk/Keynote, Presented, 01/24/2020.
    https://events.umich.edu/event/71850
  • Robotics Institute Seminar, Carnegie Mellon University. Invited Talk/Keynote, Presented, 09/27/2019.
    https://www.ri.cmu.edu/event/ri-seminar-tucker-her...
  • Institute for Robotics and Intelligent Machines Seminar, Georgia Tech. Invited Talk/Keynote, Presented, 09/11/2019.
    https://robotics.gatech.edu/hg/item/623488
  • Robotics Colloquium, University of Washington: "Learning and Planning for Autonomous, Multi-fingered Robot Manipulation". Invited Talk/Keynote, Presented, 10/27/2017.
    https://www.cs.washington.edu/research/robotics/pr...
  • "Within-Hand Manipulation: Object Reposing Benchmark” at Workshop on Development of Benchmarking Protocols for Robot Manipulation. Invited Talk/Keynote, Presented, 09/24/2017.
    http://ycbbenchmarks.org/IROS2017workshop.html
  • "Visual and Tactile Learning for Manipulation" - Invited Talk at BYU Mechanical Engineering Seminar. Invited Talk/Keynote, Presented, 01/11/2016.

Languages

  • German, fluent.

Software Titles

  • Trajectory optimization for geometric finger gaiting. Software in support of our publication "Geometric In-Hand Regrasp Planning: Alternating Optimization of Finger Gaits and In-Grasp Manipulation". Release Date: 01/2018.
  • Deep Neural Network Probabilistic Grasp Inference. Software in support of our publication "Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network". Release Date: 12/2017.
  • Relaxed-Rigidity Trajectory Optimization. Software in support of our publication "Relaxed-Rigidity Constraints: In-Grasp Manipulation using Purely Kinematic Trajectory Optimization". Release Date: 07/2017.