The Helping Hands Lab @ Northeastern


Research (old: to be updated soon!)

Grasp pose detection

The goal of this work is to develop the capability for a robot to enter a novel environment and grasp whatever objects are found there. There are two key elements to this problem. First, the robot must be able to grasp novel objects about which no information is available. Second, the grasp algorithm must be able to operate robustly in dense clutter (a worst-case scenario for robot grasping). Above all, the robot must be robust to whatever environmental circumstances are encountered.

(a) Cluttered scenario
(b) Detected grasp poses
(c) Grasp

There are two parts to grasp pose detection: 1) grasp hypothesis generation; 2) classification of hypotheses as grasps or not. We use geometry in both of these processes. First, we generate a large set of high quality grasp hypotheses by checking for geometrically necessary conditions for a grasp. Second, we train a grasp pose classifier using geometry to generate a large amount of automatically labelled training data. Overall, our method achieves an average grasp success rate of 88% when grasping novel objects presented in isolation and an average success rate of 73% when grasping novel objects presented in dense clutter on Baxter, such as the scene shown in the figures above.
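The two-stage structure described above can be sketched in a few lines. This is a toy illustration, not the authors' actual implementation: the function names, the box-shaped closing region, and the point-count "classifier" stand-in are all assumptions made for the sketch.

```python
# Toy two-stage grasp pose detection sketch (illustrative names only).
# Stage 1 keeps hand poses satisfying a necessary geometric condition
# (some cloud points lie between the fingers); stage 2 classifies the
# surviving hypotheses. A trained classifier would score geometric
# features here; we use a trivial point-count proxy instead.

def in_box(p, center, half):
    """True if point p lies inside an axis-aligned box at `center`."""
    return all(abs(p[i] - center[i]) <= half[i] for i in range(3))

def generate_hypotheses(cloud, samples, finger_gap=0.06, hand_depth=0.04):
    """Stage 1: keep sampled hand centers whose closing region
    contains at least one cloud point."""
    hypotheses = []
    for center in samples:
        half = (finger_gap / 2, hand_depth, hand_depth)
        closing = [p for p in cloud if in_box(p, center, half)]
        if closing:  # necessary condition: something between the fingers
            hypotheses.append({"center": center, "points": closing})
    return hypotheses

def classify(h):
    """Stage 2 stand-in for the learned grasp/no-grasp classifier."""
    return len(h["points"]) >= 3

cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.01), (-0.01, 0.01, 0.0),
         (0.5, 0.5, 0.5)]
samples = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
grasps = [h for h in generate_hypotheses(cloud, samples) if classify(h)]
```

In this sketch only the first sampled center survives both stages: it encloses three cloud points, while the second encloses a single stray point and is rejected by the classifier.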

Grasping novel objects in clutter.

  • Ten Pas, A., Platt, R., Using Geometry to Detect Grasps in 3D Point Clouds, Int'l Symposium on Robotics Research (ISRR), 2015.

    ROS package and documentation available here.

    Handle detection

    This algorithm localizes handle-like geometries in a 3-D point cloud by searching for cylindrical shell geometries. The presence of a cylindrical shell that contains cloud points between its inner and outer cylinders, while the volume inside the inner cylinder is empty, is a sufficient condition for the presence of an enveloping grasp. We develop a method for efficiently locating cylindrical shells.
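The shell condition itself is simple to state in code. The following is a minimal sketch, assuming a vertical shell axis and known radii; the function name and the exhaustive distance check are illustrative (the paper's contribution is an efficient search over candidate shells, which this sketch does not reproduce).

```python
import math

def is_handle_candidate(points, cx, cy, r_in, r_out):
    """Hypothetical shell test for a vertical axis through (cx, cy):
    the annulus [r_in, r_out] must contain points while the inner
    cylinder (distance < r_in from the axis) must be empty."""
    dists = [math.hypot(x - cx, y - cy) for x, y, _z in points]
    inner_empty = all(d >= r_in for d in dists)
    annulus_occupied = any(r_in <= d <= r_out for d in dists)
    return inner_empty and annulus_occupied

# A ring of points at radius 0.03 m around the origin: a handle candidate.
ring = [(0.03 * math.cos(t), 0.03 * math.sin(t), 0.0)
        for t in (0.0, 1.0, 2.0, 3.0)]
found = is_handle_candidate(ring, 0.0, 0.0, 0.02, 0.04)
```

Adding any point inside the inner cylinder (e.g. at the axis) falsifies the condition, since the gripper could no longer envelop the shell.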

    Ten Pas, A., Platt, R., Localizing Handle-like Grasp Affordances in 3D Point Clouds, Int'l Symposium on Experimental Robotics (ISER), 2014

    ROS package and documentation are available here.

    All handles found in the scene are shown in cyan. Red shows the selected grasp target.

    Localizing grasped objects using tactile sensing

    Localizing a grasped object accurately is a prerequisite for many manipulation tasks such as insertions. In this work, we achieve this using GelSight tactile sensing. Our approach is to apply standard SIFT/SURF feature matching methods to the problem of localizing the pose of the robot finger on the surface of the object. In particular, we use image mosaicing to create a tactile map of what the object surface "feels" like to the robot. Then, during localization, we match the feature pattern perceived by the robot to the map, thereby localizing the pose of the part with respect to the hand. We have shown that this method can be used to robustly insert a USB cable into a socket.
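The match-to-map step can be illustrated with a toy feature matcher. This is a stand-in for SIFT/SURF (which operate on real tactile images): here a "feature" is just a (position, descriptor) pair, matching is nearest-neighbor in descriptor space, and the pose estimate is reduced to a 2-D translation. All names are hypothetical.

```python
def locate_on_map(observed, tactile_map):
    """Toy stand-in for SIFT/SURF localization: match each observed
    feature (pos, descriptor) to the nearest-descriptor map feature,
    then estimate the sensor's translation on the tactile map as the
    mean displacement of the matched pairs."""
    pairs = []
    for pos, desc in observed:
        best = min(tactile_map,
                   key=lambda f: sum((a - b) ** 2 for a, b in zip(desc, f[1])))
        pairs.append((pos, best[0]))
    n = len(pairs)
    dx = sum(mp[0] - op[0] for op, mp in pairs) / n
    dy = sum(mp[1] - op[1] for op, mp in pairs) / n
    return dx, dy

# Map features on the object surface, and the same pattern as the
# sensor currently perceives it, offset by (4, 4).
tactile_map = [((5.0, 5.0), (1.0, 0.0)), ((8.0, 2.0), (0.0, 1.0))]
observed = [((1.0, 1.0), (1.0, 0.0)), ((4.0, -2.0), (0.0, 1.0))]
dx, dy = locate_on_map(observed, tactile_map)
```

A real pipeline would also estimate rotation and reject outlier matches (e.g. with RANSAC), but the map-then-match structure is the same.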

    Localization and insertion of a USB cable using tactile sensing

    Li, R., Platt R., Yuan, W., ten Pas, A., Roscup, N., Srinivasan, M., Adelson, E., Localization and Manipulation of Small Parts Using GelSight Tactile Sensing, IEEE Int'l Conf. on Intelligent Robot Systems (IROS), 2014.

    The Baxter Easyhand

    We have designed a new 3-D printed hand derived from the Yale T42 hand, but designed specifically to be mounted on the Baxter robot from Rethink Robotics. Because this hand targets Baxter specifically, we are able to make some important design simplifications that make it cheaper, lighter, and easier to build and interface with than other robot hands available for Baxter. The hand costs $150 US in parts and can be built and assembled by a typical undergraduate student by following the instructions below.

    Assembly instructions for the Baxter Easyhand.

    Franchi, G., ten Pas, A., Platt, R., Panzieri, S., The Baxter Easyhand: A Robot Hand That Costs $150 US in Parts, IEEE Int'l Conf. on Intelligent Robot Systems (IROS), 2015 (submitted).

    Planning Under Uncertainty in Manipulation

    One of the main research challenges in robot manipulation is developing reliable grasping capabilities. It is difficult to guarantee accurate perception of the objects to be grasped or manipulated. In these contexts, it is insufficient just to identify the most likely state of the world; the system must be aware of what it knows and what is not known. Furthermore, when the state of the world is uncertain, the system must be capable of acting in order to gain information. We are developing new approaches to planning under uncertainty that can do this kind of reasoning in robot manipulation scenarios. For example, we are using planning under uncertainty to reason about how to position a camera or range sensor in order to more accurately localize relevant objects to be grasped. We are also using planning under uncertainty to reason about how to push objects in order to better localize them.
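The core idea of acting to gain information can be shown with a tiny discrete belief update. This sketch is not the authors' planner: it simply contrasts an informative action (like the separating push in the figure below) with an uninformative one, using a Bayes update over two world hypotheses and entropy as the measure of uncertainty. The likelihood numbers are invented for illustration.

```python
import math

def bayes_update(belief, likelihoods):
    """Discrete Bayes update: prior belief over hypotheses and the
    likelihood of the received observation under each hypothesis."""
    post = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

def entropy(belief):
    """Shannon entropy (nats); lower means more certain."""
    return -sum(p * math.log(p) for p in belief if p > 0)

# Prior: two equally likely configurations of the boxes.
prior = [0.5, 0.5]
# After pushing the boxes apart, observations discriminate well
# between the hypotheses; without the push, they barely do.
post_push = bayes_update(prior, [0.9, 0.1])
post_no_push = bayes_update(prior, [0.6, 0.4])
```

A belief-space planner chooses actions by predicting exactly this effect: the push is preferred because the expected posterior entropy is lower.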

    The planner pushes the left box away from the right so that it will better perceive both boxes.

    Platt, R., Kaelbling, L., Lozano-Perez, T., Tedrake, R. Non-Gaussian Belief Space Planning: Correctness and Complexity, IEEE Int'l Conf. on Robotics and Automation, 2012. (The final version of the paper posted here fixes some errors that were present in the proofs in the submitted version.)

    Platt, R., Kaelbling, L., Lozano-Perez, T., Tedrake, R., Efficient planning in non-Gaussian belief spaces and its application to robot grasping, Proceedings of the International Symposium on Robotics Research, 2011. (Extended version available in CSAIL Tech Report MIT-CSAIL-TR-2011-039.)

    Platt, R., Tedrake, R., Kaelbling, L., Lozano-Perez, T., Belief space planning assuming maximum likelihood observations, Proceedings of Robotics: Science and Systems (RSS), Zaragoza, Spain, June 27, 2010.


    Optimal sampling-based motion planning

    The robot motion planning problem is very important and arises in several types of robotics problems, including manipulation planning, mobile robot motion planning, and aerial vehicle motion planning. RRT* is a recently developed motion planning algorithm that extends the Rapidly-exploring Random Tree (RRT). It is significant because, unlike RRT, it is guaranteed (with probability one) to converge to an optimal motion planning solution. Recently, we have extended the RRT* algorithm to handle systems with linear dynamics and quadratic costs by incorporating ideas from linear quadratic regulation (LQR).
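The loop structure of RRT* can be sketched compactly for a 2-D point robot with Euclidean cost and no obstacles. LQR-RRT* keeps this exact loop but replaces the Euclidean distance and straight-line steering with an LQR cost-to-go and an LQR policy; everything below (the function name, parameters, and obstacle-free world) is a simplifying assumption for the sketch, and costs of rewired subtrees are not propagated.

```python
import math
import random

def rrt_star(start, goal, iters=300, step=0.2, radius=0.5, seed=0):
    """Minimal RRT* sketch: sample, extend toward the sample, choose
    the lowest-cost parent in a neighborhood, then rewire neighbors
    through the new node when that lowers their cost."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    cost = {0: 0.0}

    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(iters):
        q = goal if rng.random() < 0.1 else (rng.uniform(0, 1), rng.uniform(0, 1))
        near = min(range(len(nodes)), key=lambda i: d(nodes[i], q))
        p = nodes[near]
        dist = d(p, q)
        if dist < 1e-9:
            continue
        s = min(1.0, step / dist)
        new = (p[0] + s * (q[0] - p[0]), p[1] + s * (q[1] - p[1]))
        # Choose best parent among neighbors (RRT would just use `near`).
        nbrs = [i for i in range(len(nodes)) if d(nodes[i], new) <= radius]
        best = min(nbrs, key=lambda i: cost[i] + d(nodes[i], new))
        idx = len(nodes)
        nodes.append(new)
        parent[idx] = best
        cost[idx] = cost[best] + d(nodes[best], new)
        # Rewire: route neighbors through the new node if cheaper.
        for i in nbrs:
            c = cost[idx] + d(new, nodes[i])
            if c < cost[i]:
                parent[i] = idx
                cost[i] = c

    goal_idx = min(range(len(nodes)), key=lambda i: d(nodes[i], goal))
    return nodes, cost, goal_idx

nodes, cost, gi = rrt_star((0.0, 0.0), (1.0, 1.0))
```

The choose-parent and rewire steps are what distinguish RRT* from RRT and give it its asymptotic optimality; swapping `d` for an LQR cost-to-go is the essence of the LQR-RRT* extension.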

    The path found by our algorithm for a double-integrator system.

    Goretkin, G., Perez, A., Platt, R., Konidaris, G., Optimal Sampling-Based Planning for Linear-Quadratic Kinodynamic Systems, IEEE Int'l Conf. on Robotics and Automation (ICRA), 2013.

    Perez, A., Platt, R., Konidaris, G., Kaelbling, L., Lozano-Perez, T., LQR-RRT*: Optimal Sampling-Based Motion Planning with Automatically Derived Extension Heuristics, IEEE Int'l Conf. on Robotics and Automation, 2012. (This paper has been updated since our original submission.)