Research Scientist, MIT CSAIL
Learning Cognitive Models from Machine Vision and Natural Language
11am Thursday, April 10, 2014, 166 WVH
Whether they are exploring the depths of the ocean or the surface of Mars, or responding to a disaster, robots have proven tremendously effective as our surrogates, performing tasks that are too difficult, dangerous, or dull for humans. The next generation of intelligent systems will cooperate with people in our homes and workplaces, providing personalized care, assisting the disabled, and carrying out advanced manufacturing. To be effective partners, robots must reason about their environment and their actions in the same way that humans do. However, robots currently use representations that are either hard-coded or require significant supervision by a domain expert. I seek to enable robots to efficiently learn shared cognitive models of their surroundings and available actions from their interactions with humans.
This talk highlights my recent advances in semantic perception that enable robots to acquire shared cognitive models of objects and of their environment from limited supervision provided by human partners. First, I will describe a visual appearance-based algorithm that efficiently learns a robust representation of objects from a single, user-provided segmentation cue. Second, I will present a probabilistic framework that allows robots to formulate human-centric models of their environment from natural language descriptions. I will then demonstrate how these learned representations allow people to command and interact with robots using free-form speech. I will end with my vision for how robots will formulate hierarchical cognitive models of their environments, the objects they contain, and the rich space of actions available to them.
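As a concrete (and deliberately simplified) illustration of learning an object model from a single segmentation cue, the sketch below builds a color-histogram appearance model from one user-provided mask and scores candidate regions against it. The histogram representation, the OpenCV pipeline, and all function names are illustrative assumptions, not the algorithm presented in the talk.

```python
import cv2
import numpy as np

def learn_appearance_model(image_bgr, mask):
    """Build a hue-saturation histogram from the pixels selected by a
    single user-provided mask (a stand-in representation; the talk's
    actual features are not specified here)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def match_score(model_hist, image_bgr, candidate_mask):
    """Score how well a candidate region matches the learned model."""
    cand = learn_appearance_model(image_bgr, candidate_mask)
    return cv2.compareHist(model_hist, cand, cv2.HISTCMP_CORREL)

# Tiny synthetic example: learn from one mask, score the same region.
img = np.zeros((40, 40, 3), np.uint8)
img[5:20, 5:20] = (0, 0, 255)            # a red "object"
seed = np.zeros((40, 40), np.uint8)
seed[5:20, 5:20] = 255                   # the user's segmentation cue
model = learn_appearance_model(img, seed)
print(match_score(model, img, seed))     # ~1.0 for the matching region
```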
University of Amsterdam
Bayesian RL for Multiagent Systems under State Uncertainty
11am Thursday, April 3, 2014, 166 WVH
Sequential decision making in multiagent systems (MASs) is a challenging problem, especially when the agents have uncertainty about what the true state of the environment is. The problem gets even more complex when the agents do not have an accurate model of the environment and/or the other agents they are interacting with. In such cases, the agents will need to learn during execution. While the field of multiagent reinforcement learning (MARL) focuses on learning in MASs, few approaches address settings with state uncertainty, and even fewer consider principled methods for balancing the exploitation of learned knowledge and exploratory actions to gain new information.
In this talk I will cover two approaches to Bayesian MARL for settings with state uncertainty that aim to fill this gap by transforming the learning problem into a planning problem. The solution of this planning problem specifies behavior that optimally trades off exploration and exploitation. I will discuss a novel, scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. I will also briefly talk about a model from the perspective of a single agent that has uncertainty about both the environment and the behavior of the agents it needs to interact with. Our results show that we can provide high-quality solutions to these realistic problems even with a large amount of initial model uncertainty.
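To make the "learning as planning" idea concrete, here is a minimal single-agent sketch: model uncertainty is maintained as Dirichlet pseudo-counts, and actions are chosen by sampling complete transition models from that posterior and rolling them forward, so the planner itself trades off exploration and exploitation. The goal reward, rollout policy, and problem size are assumptions; the talk's factored, multiagent machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_action(counts, state, goal, horizon=10, n_samples=50):
    """Choose an action by turning learning into planning: sample
    complete transition models from the Dirichlet posterior encoded
    in counts[s, a], roll each forward under a random rollout policy,
    and pick the first action with the best average return."""
    n_states, n_actions, _ = counts.shape
    values = np.zeros(n_actions)
    for _ in range(n_samples):
        # One posterior sample: a full transition model P(s' | s, a).
        model = np.array([[rng.dirichlet(counts[s, a])
                           for a in range(n_actions)]
                          for s in range(n_states)])
        for a0 in range(n_actions):
            s, a, ret = state, a0, 0.0
            for _ in range(horizon):
                s = rng.choice(n_states, p=model[s, a])
                ret += 1.0 if s == goal else 0.0  # assumed goal reward
                a = rng.integers(n_actions)       # random rollout policy
            values[a0] += ret / n_samples
    return int(np.argmax(values))

# counts[s, a, s'] holds Dirichlet pseudo-counts; after each observed
# transition (s, a, s') one would update: counts[s, a, s'] += 1
counts = np.ones((5, 2, 5))   # uniform prior over a 5-state toy world
print(plan_action(counts, state=0, goal=4))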
Assistant Professor, WPI
The Frontiers of Intelligent Robot Motion and Manipulation
3pm Thursday, November 21, 2013, 366 WVH
Robot motion and manipulation have been studied for many years, with much of the work focusing on rigid environments with known obstacles, like a factory work cell. We now seek to bring robots into our homes, offices, and operating rooms and to have them collaborate with people. These new applications break the fundamental assumptions behind most of our methods and have created new frontiers in motion planning and manipulation. I will discuss our work pursuing these frontiers, namely in human-robot collaboration, manipulation, and motion planning for deformable objects.
Dmitry Berenson received a BS in Electrical Engineering from Cornell University and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011. He completed a post-doc at UC Berkeley in the Department of Electrical Engineering and Computer Sciences and started as an Assistant Professor in Robotics Engineering and Computer Science at WPI in 2012. He founded and directs the Autonomous Robotic Collaboration (ARC) Lab at WPI, which focuses on motion planning, manipulation, and human-robot collaboration.
Postdoctoral Researcher at the Distributed Robotics Group at MIT CSAIL
Physics-Based Robotic Manipulation in Human Environments
11am Thursday, October 24, 2013, 166 WVH
The list of physics-based actions that we humans use to push, pull, throw, tumble, and play with the objects around us is nearly endless. My research strives to develop robots with similar capabilities by incorporating physical predictions into manipulation planning. Most existing manipulation planners do not use physical predictions and therefore are limited to pick-and-place actions. I develop manipulation algorithms which enable robots to move beyond pick-and-place.
In this talk I will focus on using physics-based pushing actions. I will describe how a robot can plan pushing actions that are robust to high degrees of uncertainty in the environment. I will show that pushing manipulation leads to very efficient plans in cluttered environments, whereas pick-and-place manipulation treats clutter like a game of chess in which each piece must be moved one by one. Finally, I will talk about how contact sensor feedback can be used during physics-based actions to reduce uncertainty and to account for errors in the robot's physical predictions.
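A toy illustration of planning pushes that are robust to uncertainty: sample the object's pose from its uncertainty distribution, predict each candidate push's outcome with a (here, deliberately crude) physics stand-in, and choose the push whose outcomes have the smallest spread, i.e., the push that funnels the uncertainty away. All dynamics and parameters below are assumptions, not the talk's models.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_push(obj_x, push_start, push_len=0.3):
    """Crude 1-D physics stand-in: a flat pusher sweeping from
    push_start by push_len 'funnels' any object it contacts to the
    pusher face; a real planner would call a quasi-static pushing
    model here."""
    pusher_end = push_start + push_len
    if push_start <= obj_x <= pusher_end:
        return pusher_end   # object ends up against the pusher face
    return obj_x            # pusher never reached the object

def robust_push(pose_samples, candidate_starts):
    """Pick the push whose predicted outcomes, over samples drawn from
    the object's pose uncertainty, have the smallest spread: the push
    itself reduces the uncertainty."""
    return min(candidate_starts,
               key=lambda s: np.var([simulate_push(x, s)
                                     for x in pose_samples]))

pose_samples = rng.normal(0.5, 0.1, size=200)   # uncertain object pose
print(robust_push(pose_samples, np.linspace(0.0, 0.5, 11)))
```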
Mehmet Dogar is a Postdoctoral Researcher at the Distributed Robotics Group at MIT CSAIL, supervised by Prof. Daniela Rus. His research focuses on using physics-based predictions in robotic manipulation, enabling robots to accomplish useful tasks in dynamic and cluttered human environments. He received his PhD in August 2013 from the Robotics Institute at Carnegie Mellon University. He received M.S. and B.S. degrees in Computer Engineering from the Middle East Technical University, Turkey.
Ph.D. Students, MIT Interactive Robotics Group
Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy
2pm Tuesday, April 23, 2013, 366 WVH
We design and evaluate human-robot cross-training, a strategy widely used and validated for effective human team training. Cross-training is an interactive planning method in which a human and a robot iteratively switch roles to learn a shared plan for a collaborative task. We computationally formulate human-robot cross-training and compare it to standard interactive reinforcement learning algorithms through a large-scale experiment with 36 subjects. We show that cross-training improves team fluency metrics, including an increase of 71% in concurrent motion (p = 0.02) and a decrease of 41% in human idle time (p = 0.04), during the human-robot task execution phase that follows the training phase. Additionally, cross-training reduces the robot's uncertainty about human behavior before task execution (p = 0.04). Finally, a post-experimental survey shows statistically significant differences in perceived robot performance and trust in the robot (p < 0.01). These results provide the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices.
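A schematic of the rotation-phase update described above, in which the human demonstrates the robot's role and the robot adapts accordingly: this toy version simply tallies the demonstrated action at each task step, whereas the study uses a richer MDP formulation. All names and data below are illustrative.

```python
import collections

def cross_train(rotation_demos, n_steps):
    """Schematic rotation-phase update: the human performs the robot's
    role, the robot tallies the demonstrated action at each task step,
    and then adopts the most frequently demonstrated action as its
    policy for that step."""
    counts = [collections.Counter() for _ in range(n_steps)]
    for demo in rotation_demos:          # one demo per rotation phase
        for step, action in enumerate(demo):
            counts[step][action] += 1
    return [c.most_common(1)[0][0] for c in counts]

# Two rotation-phase demonstrations of a three-step assembly task:
print(cross_train([["fetch", "hold", "screw"],
                   ["fetch", "hold", "screw"]], n_steps=3))
```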
Stefanos was born in Athens, Greece, where he studied Electrical and Computer Engineering at the National Technical University of Athens (Dipl. Eng., 2006). He then moved to Japan to do research in robotics at the University of Tokyo (M.Eng., 2009). He later worked at the video-game company Square Enix in Tokyo for two years before joining IRG@MIT as a PhD student. His research is in human-robot team coordination for efficient task execution under time pressure.
Research Fellow at Children's Hospital Boston/Harvard Medical School
Magnetic Microrobots for Minimally Invasive Interventions
3pm Thursday, March 1, 2012, 366 WVH
Surgeons are increasingly adopting tools and techniques that enable them to operate with minimal invasiveness, reduced trauma, and higher success rates. However, there is an inherent limit to their capabilities, due to the dexterity required to manipulate sensitive human tissue and the inaccessibility of a variety of body regions. Examples include retinal surgery, where small surgeon errors may lead to blindness, and targeted drug delivery to inaccessible cancerous tissue. These problems may be addressed with a tool that is gaining increasing attention: magnetically actuated microrobots.
The expertise of the Institute of Robotics and Intelligent Systems at ETH Zurich lies in the development of electromagnetic steering systems for microrobots. In this talk, we will focus on the development of a system for ophthalmic surgery. We will present drug-loaded microrobots that can be moved wirelessly in five degrees of freedom (5-DOF) to precisely deliver drugs to the retina. We will cover aspects of tracking, control, and servoing of these microrobots. Additionally, we will briefly introduce the concept of swimming microrobots, which can be propelled through the human vasculature in order to reach remote areas.
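For intuition about electromagnetic steering, the sketch below shows a standard least-squares current allocation: the magnetic field and field gradient at the microrobot are approximately linear in the coil currents, so a desired field/gradient vector maps to coil currents through a pseudoinverse. The coil count, matrix values, and current limits are assumptions for illustration, not the ETH system's parameters.

```python
import numpy as np

def allocate_currents(actuation_matrix, desired_field_grad, i_max=10.0):
    """Least-squares current allocation: since field and gradient are
    (approximately) linear in the coil currents, solve for the
    currents that best realize the desired field/gradient. Clipping
    is a crude stand-in for a properly constrained solve."""
    currents, *_ = np.linalg.lstsq(actuation_matrix, desired_field_grad,
                                   rcond=None)
    return np.clip(currents, -i_max, i_max)

# Hypothetical 6x8 actuation matrix (3 field rows + 3 gradient rows,
# 8 coils), evaluated at the microrobot's current position:
A = np.random.default_rng(2).normal(size=(6, 8))
target = np.array([0.0, 0.0, 0.02, 1e-3, 0.0, 0.0])  # desired B and grad
print(allocate_currents(A, target))
```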
Christos is a Research Fellow at Children's Hospital Boston/Harvard Medical School, where he is investigating MRI-actuated robotics for minimally invasive surgery. Previously, he was at ETH Zurich where he worked on magnetic microrobots for retinal surgery. He was involved in tracking, control, servoing, and ex vivo testing of the developed technology. He received the Ph.D. degree (mech. eng.) from ETH Zurich, Switzerland, in 2011, and the M.Sc. degree (electr. comp. eng.) from the National Technical University of Athens, Greece, in 2006.
Research Scientist, MIT CSAIL
Planning under uncertainty in robotics
3pm Wednesday, November 16, 2011, 366 WVH
One of the key challenges in robotics is designing robots that can function robustly even when the state of the world is uncertain. This is a fundamental problem in many application areas, including manufacturing, space exploration, and domestic assistance. In this talk, I frame this as a planning-under-uncertainty problem. Planning under uncertainty is a general framework in which the objective is to move from an initial “belief state”, where the system is very uncertain about the state of the world, to one where the system is certain that it has achieved the task objectives. One of the hallmarks of planning under uncertainty is the ability to generate information-gathering strategies automatically as needed to solve a problem. Because finding optimal solutions to this problem is PSPACE-hard in general, I will propose a class of approximate algorithms based on solving “determinized” sub-problems that is well-suited to continuous-space, long-time-horizon robotics domains. Although these algorithms are sub-optimal, they are correct in the sense that they can be guaranteed to eventually achieve the task objectives. In addition, it turns out that it is possible to leverage convex sub-structure to solve these determinized planning problems efficiently. I will demonstrate the approach in the context of a difficult robot manipulation problem in which the robot must simultaneously localize and grasp an occluded object whose position is initially unknown.
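The following 1-D sketch conveys the determinize-and-replan pattern: plan as if every action has its nominal outcome, execute the first action in the stochastic world, update the belief, and replan; replanning is what preserves eventual goal achievement despite the deterministic approximation. The dynamics, noise levels, and belief update below are all assumptions, not the talk's algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)

def plan_determinized(samples, goal, step_fn, horizon=20):
    """Plan in the 'determinized' sub-problem: assume each action has
    its nominal outcome and greedily drive the belief mean to the
    goal. step_fn gives the nominal (deterministic) dynamics."""
    mean, plan = samples.mean(), []
    for _ in range(horizon):
        action = np.sign(goal - mean)       # move toward the goal
        mean = step_fn(mean, action)
        plan.append(action)
        if abs(mean - goal) < 1e-2:
            break
    return plan

def execute_with_replanning(samples, x_true, goal, noise=0.05,
                            max_iters=50):
    """Outer loop: execute the plan's first action in the stochastic
    world, update the belief from a noisy observation, and replan."""
    for _ in range(max_iters):
        plan = plan_determinized(samples, goal, lambda x, a: x + 0.1 * a)
        u = plan[0]
        x_true += 0.1 * u + rng.normal(0, noise)  # noisy real world
        samples = samples + 0.1 * u               # nominal prediction
        z = x_true + rng.normal(0, noise)         # noisy observation
        samples = 0.7 * samples + 0.3 * z         # crude belief update
        if abs(samples.mean() - goal) < 0.05:
            return True
    return False

samples = rng.normal(0.0, 0.3, size=500)   # initial belief over position
print(execute_with_replanning(samples, x_true=0.0, goal=1.0))
```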
Robert Platt will start as an Assistant Professor at SUNY Buffalo in the Spring of 2012. Currently, he is a research scientist in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Prior to coming to MIT, he was the team lead for control and autonomy for Robonaut 2, a NASA robot that flew to the international space station aboard the space shuttle Discovery in February 2011. He holds a Ph.D. from the University of Massachusetts, Amherst.
Manager, Planning and Execution Systems Section, NASA/JPL
This Is Mission Control
3pm Thursday, April 21, 2011, 366 WVH
The NASA Jet Propulsion Laboratory is responsible for a fleet of spacecraft in operation throughout the solar system that are constantly extending humanity's reach and expanding our understanding of the universe. In this presentation, Dr. Norris will describe some of these spacecraft and the software and processes that control them. The presentation will also cover some of the latest research in robot commanding using the Xbox Kinect and the ways we might control robotic space explorers in the future. Dr. Norris will conclude with a brief overview of the Jet Propulsion Laboratory and some of the opportunities for internships and full-time positions for students in a broad range of engineering and science majors.
Dr. Jeff Norris manages the Planning and Execution Systems Section at the NASA Jet Propulsion Laboratory. He is also the founder and lead of the JPL Ops Lab and is an active human-system interaction researcher. Previously, he was a mission operator for the Spirit and Opportunity Mars Rover mission and has led the development of numerous software systems for the command and control of robots and spacecraft. Dr. Norris received the 2009 Lew Allen Award for Excellence and the 2004 NASA Software of the Year Award. He received Bachelor's and Master's degrees in Computer Science from MIT and a Ph.D. in Computer Science from the University of Southern California.
Staff Scientist in Bioinspired Robotics
Wyss Institute for Biologically Inspired Engineering at Harvard University
Designing Emergence in Swarm Robotics
3pm Thursday, January 27, 2011, 366 WVH
Emergent systems exhibit complex behavior arising from the interactions of many simple components. This talk will give an overview of three research projects in which the activities of many simple autonomous robots need to be coordinated so as to give desired high-level behavior. First, in a collective construction task inspired by termites, robots build structures according to user-specified designs. Next, in a collective locomotion task inspired by cellular slime molds, a robot comprising an unknown number of independent modules with unknown connectivity needs to achieve effective movement. Finally, performance considerations under severe constraints inform the design of micro aerial vehicles inspired by honeybees. The talk will highlight challenges and progress in these areas, and discuss general principles that can be used in other systems.
Justin Werfel is a research scientist in bioinspired robotics at Harvard University's Wyss Institute for Biologically Inspired Engineering. His research interests are in the understanding and design of complex and emergent systems. He received his Ph.D. in computer science from MIT in 2006.
Research Scientist, Universite Libre de Bruxelles
Fast Reconfiguration Planning Algorithms for Modular Robots
3pm Wednesday, December 15, 2010, 366 WVH
A modular robot consists of several identical individual components (modules) attached together. Groups of modules may co-operate to produce local or global reconfigurations of the robot shape. In this talk, I will discuss algorithmic issues in the reconfiguration of Crystalline robots, whose components are shaped as cubes.
The number of moves sufficient for an arbitrary reconfiguration depends on the level of “physical realism” that is chosen to be modeled (e.g., bounds on physical strength or maximum velocity). Some of the bounds obtained recently are not yet tight, and thus present themselves as interesting open problems.
Ph.D. Candidate, University of California, Santa Barbara
On Distributed Coordination in Robotic Networks: Gossip Coverage and Frontier-based Pursuit
4pm Thursday, February 4, 2010, 366 WVH
In this talk, I present a general model for distributed robotic networks and describe our implementation of two particular algorithms to run on hardware. The first example is a discrete coverage control algorithm for gossiping robots, where the goal is to deploy a team of robots to provide coverage of a discretized environment represented by a graph. The algorithm provably converges to a centroidal Voronoi partition while requiring only pairwise “gossip” communication. My second example is a distributed pursuit-evasion algorithm inspired by frontier-based exploration methods. In this algorithm, a team of robots with limited range sensors collaborate to clear an environment of any evaders through distributed storage and updating of the global frontier between cleared and contaminated areas. I conclude with a preview of our ongoing work in both of these directions.
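A 1-D toy version of the pairwise update just described: when two robots "gossip", they pool their territories and re-split the cells by distance to each robot's territory centroid; iterating this drives the partition toward a centroidal configuration. The line environment, tie-breaking, and robot count below are assumptions; the actual algorithm operates on a discretized graph environment.

```python
import random

random.seed(4)

def centroid(cells):
    return sum(cells) / len(cells)

def gossip_step(owner, robots):
    """One pairwise 'gossip' exchange: two robots pool their cells and
    re-split them by distance to each robot's territory centroid
    (ties go to the first robot)."""
    i, j = random.sample(robots, 2)
    pooled = [c for c, o in owner.items() if o in (i, j)]
    mine = [c for c in pooled if owner[c] == i]
    theirs = [c for c in pooled if owner[c] == j]
    if not mine or not theirs:
        return
    ci, cj = centroid(mine), centroid(theirs)
    for c in pooled:
        owner[c] = i if abs(c - ci) <= abs(c - cj) else j

# A 20-cell line environment, initially split arbitrarily among 3 robots:
owner = {c: random.choice([0, 1, 2]) for c in range(20)}
for _ in range(200):
    gossip_step(owner, [0, 1, 2])
print(sorted(owner.items()))   # territories settle into contiguous blocks
```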
Joseph Durham is currently a Ph.D. candidate developing distributed algorithms for teams of robots in the Mechanical Engineering Department at the University of California, Santa Barbara. His research interests are in efficient and scalable robotic algorithms that can be implemented in hardware. He received a B.A. in physics from Carleton College in 2004 and an M.S. from the University of California, Santa Barbara in 2007.
Assistant Professor, NEU CCIS
Robotics: Algorithms in Action in the Physical World
11:45am Wednesday, March 24, 2010, 108 WVH
Talk given in the nuACM lecture series at NEU CCIS.
Robotics is a multi-disciplinary field studied by many different researchers for different reasons. In this talk I will describe my perspective as a computer scientist: a robot is a physical machine whose state encodes information. This information evolves according to the interaction of (1) the geometry and mobility of the machine (kinematics), (2) the laws of physics (dynamics), and (3) motion planning and control algorithms.
Such algorithms detect the robot's state via sensors and affect it via actuators, and thus give the robot some authority over its own future. But sensing and actuation typically offer only limited access to the state, which can make motion planning and control particularly challenging. To be successful, these algorithms usually need to work in harmony with the physical kinematics and dynamics. The overall task can even extend to designing a robot with intrinsically tractable physical properties.
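A minimal sketch of that sense-act loop under limited access to state: the controller sees only a noisy reading of the state, filters it, and issues imperfect actuation commands, yet still steers the system toward a goal. All dynamics, gains, and noise levels here are assumed for illustration.

```python
import random

random.seed(5)

def control_loop(x_true, x_goal, steps=50):
    """Toy sense-act loop: noisy sensing (limited access to state),
    a simple filter, proportional control, and imperfect actuation."""
    x_est = 0.0
    for _ in range(steps):
        z = x_true + random.gauss(0, 0.05)   # limited, noisy sensing
        x_est = 0.8 * x_est + 0.2 * z        # simple state estimate
        u = 0.5 * (x_goal - x_est)           # proportional control
        x_true += u + random.gauss(0, 0.02)  # imperfect actuation
    return x_true

print(control_loop(x_true=0.0, x_goal=1.0))  # ends near the goal
```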
I will describe past and present work along these lines in several different areas, including:
self-reconfiguring robots, where the goal is to build systems that can automatically and arbitrarily change shape;
user interface software and hardware for remote operation of robots, in particular for space exploration; and
compliant climbing and walking robots that achieve high reliability by using control algorithms that exploit their particular kinematics and dynamics.
Marsette Vona is an experimental roboticist who examines how computation interacts with the physical world. His recent work has involved algorithms for the simulation and control of articulated, and sometimes self-reconfiguring, robots; mechanisms and algorithms that exploit compliant actuation for reliable climbing and walking; and hardware and software for remote operation of Lunar robots. He received his MS and PhD degrees in electrical engineering and computer science from the Massachusetts Institute of Technology. While pursuing his PhD, he spent two years at NASA's Jet Propulsion Laboratory in California, where he helped create the science operations software for the Mars Exploration Rover mission (Spirit and Opportunity), which earned the NASA Software of the Year Award in 2004.