Topics of Discussion

In this workshop, we will discuss the critical role of open-ended learning in object perception and grasp affordance. We have invited a number of renowned experts in the field to highlight current successes and future challenges for autonomous robots. In particular, we will consider the following questions about the challenges and opportunities of open-ended approaches:

  1. What is the role of open-ended learning in human cognition?
  2. What lessons can we learn from human cognition to develop open-ended learning approaches for autonomous robots?
  3. What challenges and opportunities do open-ended approaches present for incremental robot learning and learning from observation?
  4. What should be the target for open-ended approaches? To what extent must open-ended approaches generalise?
  5. What are the impacts of open-ended learning in human-robot collaborative domains?
  6. What are the limitations of deep learning approaches when used in an open-ended manner?
  7. How resistant should a learning algorithm be to catastrophic forgetting (i.e., acquiring new knowledge should not destroy old knowledge; see the sketch after this list)?
  8. How should the performance of open-ended learning approaches be evaluated? What are the right metrics to do so?
  9. Which benchmarks and datasets would help evaluate approaches and compare progress in this field?
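
To make question 7 concrete, here is a minimal, self-contained sketch (our own illustration, not tied to any speaker's work) of catastrophic forgetting: a small network is trained on one task, then fine-tuned on a second task with plain SGD, and its error on the first task is measured before and after. All function and variable names, tasks, and hyperparameters are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(n_in=1, n_hidden=32):
    # A tiny two-layer network with tanh hidden units
    return {"W1": rng.normal(0, 0.5, (n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (1, n_hidden)),
            "b2": np.zeros(1)}

def forward(p, x):                      # x: (N, 1)
    h = np.tanh(x @ p["W1"].T + p["b1"])
    return h @ p["W2"].T + p["b2"], h

def sgd_epoch(p, x, y, lr=0.05):
    yhat, h = forward(p, x)
    err = yhat - y                      # gradient of 0.5 * squared error
    dW2 = err.T @ h / len(x)
    db2 = err.mean(0)
    dh = (err @ p["W2"]) * (1 - h**2)   # backprop through tanh
    dW1 = dh.T @ x / len(x)
    db1 = dh.mean(0)
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        p[k] -= lr * g
    return p

def mse(p, x, y):
    return float(np.mean((forward(p, x)[0] - y) ** 2))

xa = rng.uniform(-3, 0, (200, 1)); ya = np.sin(xa)      # task A
xb = rng.uniform(0, 3, (200, 1));  yb = -np.sin(xb)     # task B

p = mlp_init()
for _ in range(2000):
    p = sgd_epoch(p, xa, ya)
print("task A error after learning A:", mse(p, xa, ya))
for _ in range(2000):
    p = sgd_epoch(p, xb, yb)            # fine-tuning on B only
print("task A error after learning B:", mse(p, xb, yb))  # typically much worse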

Topics of Interest

Topics of interest include, but are not limited to, the following:

  • Architectures for open-ended learning
  • Transfer learning from one type of robot hand to another
  • Open-ended grasping of deformable objects
  • Lifelong learning and adaptation for autonomous robots
  • Cognitive robotics
  • Deep learning for task-informed grasping
  • Deep transfer learning for object perception
  • Knowledge transfer and avoidance of catastrophic forgetting
  • Affordance learning and task-informed grasping
  • Challenges of Human-Robot collaborative manipulation
  • Grasping and object manipulation
  • 3D object category learning and recognition
  • Active perception and scene interpretation
  • Coupling between object perception and manipulation
  • Learning from demonstrations

Great line-up of speakers

Tamim Asfour (KIT)

Tamim Asfour is a full Professor at the Institute for Anthropomatics and Robotics at the Karlsruhe Institute of Technology (KIT), where he holds the chair of Humanoid Robotics Systems and heads the High Performance Humanoid Technologies Lab (H2T). His current research interest is high-performance 24/7 humanoid robotics.

*Topic: Human-Robot Collaborative Manipulation in Real World Scenarios.

Serena Ivaldi (INRIA)

Serena is a tenured research scientist at INRIA Nancy Grand-Est (France), working in the project-team LARSEN. Her current focus is on robots collaborating with humans, and she is interested in combining machine learning with control to improve the prediction and interaction skills of robots.

*Topic: Learning objects by autonomous exploration and human interaction.

Hao Su (UC San Diego)

Hao Su has been an Assistant Professor of Computer Science and Engineering at UC San Diego since July 2017. He is affiliated with the Contextual Robotics Institute and the Center for Visual Computing. He has served on the program committees of multiple conferences and workshops on computer vision, computer graphics, and machine learning, and is an Area Chair of CVPR 2019.

*Topic: Learning to Model the Environment for Interaction.

Luis Seabra Lopes (Uni. of Aveiro)

Luís Seabra Lopes is an Associate Professor of Informatics in the Department of Electronics, Telecommunications and Informatics of the University of Aveiro, Portugal. He received a PhD in Robotics and Integrated Manufacturing from the New University of Lisbon, Portugal, in 1998. He has long-standing interests in robot learning, cognitive robotic architectures, and human-robot interaction.

*Topic: Open-ended robot learning about objects and activities.

Yukie Nagai (Uni. of Tokyo)

Yukie Nagai investigates the neural mechanisms underlying social cognitive development by means of computational approaches. She designs neural network models that enable robots to acquire cognitive functions such as self-other cognition, estimation of others' intentions and emotions, and altruism, based on her theory of predictive learning.

*Topic: Predictive Coding as a Computational Theory for Open-Ended Cognitive Development.

Luca Carlone (MIT)

Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems.

*Topic: It kinda works! Challenges and Opportunities for Robot Perception in the Deep Learning Era.

Julian Ibarz (Google AI)

Julian Ibarz is a Staff Software Engineer at Google AI. He is a technical lead within the Google Brain Robotics team and works on making robots smarter with deep reinforcement learning techniques. Prior to that, he spent 5 years on the Google Maps team, helping automate mapping using deep learning.

*Topic: Challenges of Self-Supervision via Interaction in Robotics.

Carlos Celemin Paez (Delft Uni.)

Carlos is a postdoctoral researcher in the Cognitive Robotics Department at Delft University of Technology, in the Learning and Autonomous Control group. His research focuses on machine learning for robot control, combining reinforcement learning and human feedback to obtain data-efficient methods that make it feasible to learn directly on real systems.

*Topic: Teaching Robots Interactively from Few Corrections: Learning Policies and Objectives.

Program

Time | Speaker | Topic | Presentation
9:00-9:05 | Hamidreza Kasaei | Introduction to the workshop and its goals | download
9:05-9:45 | Tamim Asfour | Human-Robot Collaborative Manipulation in Real World Scenarios | download
9:45-10:25 | Julian Ibarz and Vincent Vanhoucke | Challenges of Self-Supervision via Interaction in Robotics | download
10:25-10:45 | Paper presentations | 20 minutes of paper presentations | --
10:45-11:15 | Coffee break | -- | --
11:15-11:50 | Luis Seabra Lopes | Open-Ended Robot Learning about Objects and Activities | download
11:50-12:30 | Serena Ivaldi | Learning Objects by Autonomous Exploration and Human Interaction | download
12:30-14:00 | Lunch | -- | --
14:00-14:40 | Yukie Nagai | Predictive Coding as a Computational Theory for Open-Ended Cognitive Development | download
14:40-15:15 | Hao Su | Learning to Model the Environment for Interaction | download
15:15-15:45 | Coffee break | -- | --
15:45-16:20 | Luca Carlone | It Kinda Works! Challenges and Opportunities for Robot Perception in the Deep Learning Era | download
16:20-16:55 | Carlos Celemin Paez and Jens Kober | Teaching Robots Interactively from Few Corrections: Learning Policies and Objectives | download
16:55-17:55 | Panel Discussion | -- | --
17:55-18:00 | End | -- | --

Title: Human-Robot Collaborative Manipulation in Real World Scenarios

A robot that is to literally provide a second pair of hands to humans should be able to understand situations and reason about possible helping actions. The talk will address the grasping and manipulation skills of ARMAR-6, a new humanoid robot developed to help technicians in industrial maintenance tasks. I will showcase the robot's capabilities and its performance in a challenging industrial maintenance scenario that requires human-robot collaboration, in which the robot autonomously recognizes the human's need for help and provides that help proactively. The talk will conclude with a discussion of challenges, lessons learned, and the potential transfer of the results to other domains.



Title: Learning objects by autonomous exploration and human interaction

In this talk I will present how we approached the problem of learning the visual appearance of different objects with the iCub robot. We exploited human interaction to teach the robot the visual appearance of objects, combining intrinsically motivated self-exploration of objects with supervised interaction by human experts. We also used human experts to teach the iCub how to assemble a two-part object. I will conclude by presenting how this research is being used in the new HEAP project, where human expertise is used to help the robot grasp irregular objects.




Title: Open-ended robot learning about objects and activities

If robots are to adapt to new users, new tasks and new environments, they will need to conduct a long-term learning process to gradually acquire the knowledge needed for that adaptation. One of the key features of this learning process is that it is open-ended. The Intelligent Robotics and Systems group of the University of Aveiro has been carrying out research on open-ended learning in robotics for more than a decade. Different learning techniques were developed for object recognition, grasping and task planning. These techniques build upon well-established machine learning techniques, ranging from instance-based learning and Bayesian learning to abstraction and deductive generalization. Our approach includes the human user as a mediator in the learning process. Key features of open-ended learning will be discussed, and new experimental protocols and metrics designed for open-ended learning will also be presented.
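
As a concrete (and deliberately simplified) illustration of the kind of open-ended, teacher-mediated protocol mentioned above, the following sketch shows instance-based category learning in which a simulated teacher teaches, asks, and corrects, so the set of known categories grows over time rather than being fixed in advance. The scenario, features, and names are our own assumptions, not the group's actual system.

```python
import numpy as np

rng = np.random.default_rng(4)
TRUE_CATEGORIES = {c: rng.normal(0, 5, 8) for c in ["mug", "bottle", "plate", "fork"]}

def observe(cat):
    # A noisy feature vector for one object view
    return TRUE_CATEGORIES[cat] + rng.normal(0, 0.5, 8)

memory = {}                            # category -> list of stored instances

def classify(x):
    if not memory:
        return None
    # Nearest-neighbour over all stored instances of all known categories
    return min(memory, key=lambda c: min(np.linalg.norm(x - m) for m in memory[c]))

correct = total = 0
for _ in range(300):
    cat = rng.choice(list(TRUE_CATEGORIES))
    x = observe(cat)
    guess = classify(x)                # teacher asks: "what is this?"
    if guess == cat:
        correct += 1
    else:
        # Teach / correct: store the instance under the true label,
        # introducing the category if it was previously unknown
        memory.setdefault(cat, []).append(x)
    total += 1

print(f"known categories: {len(memory)}, accuracy: {correct/total:.2f}")
```

Evaluation protocols for open-ended learning typically track exactly these quantities: how many categories the learner acquires and how its accuracy evolves as new categories keep arriving.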


Title: Learning to Model the Environment for Interaction

Having a compact yet informative representation of the environment is vital for learning interaction policies in complex scenes. In this talk, I will go over three recent papers from my group aimed at bridging the gap between environment modeling and interaction: 1) a CoRL 2019 paper that studies how to learn 6-DoF grasping poses in a cluttered scene captured by a commodity depth sensor from a single viewpoint; 2) a NeurIPS 2019 work on model-based reinforcement learning that maps the state space using automatically discovered landmarks; and 3) a CVPR 2019 paper on building a large-scale 3D dataset with fine-grained part segmentation and mobility information.





Title: Predictive Coding as a Computational Theory for Open-Ended Cognitive Development

My talk presents computational models for robots to acquire cognitive abilities as human infants do. A theoretical framework called predictive coding suggests that the human brain works as a predictive machine: the brain tries to minimize prediction errors by updating its internal model and/or by acting on the environment. Inspired by this theory, I have suggested that predictive learning of sensorimotor signals leads to open-ended cognitive development (Nagai, PhilTransB 2019). Neural networks based on predictive coding have enabled our robots to learn to generate goal-directed actions, estimate the goals of other agents, and assist others trying to achieve a goal. This demonstrates how both non-social and social behaviors emerge from a shared mechanism of predictive learning. I will discuss the potential of predictive coding theory to account for the underlying mechanism of open-ended cognitive development.
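
The core loop of predictive learning described above can be illustrated with a deliberately minimal sketch (our own illustration, not Prof. Nagai's models): an internal forward model predicts the next sensory input, and the prediction error drives a gradient update of the model. The second route of predictive coding, acting on the environment to reduce error, is omitted for brevity, and all dynamics and parameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, 2)                # internal model: s_pred = w[0]*s + w[1]*a

def world(s, a):
    # The (unknown to the agent) environment dynamics
    return 0.8 * s + 0.5 * a

s = 0.0
for t in range(500):
    a = rng.uniform(-1, 1)               # motor command (random exploration here)
    s_pred = w[0] * s + w[1] * a         # prediction from the internal model
    s_next = world(s, a)                 # actual sensory outcome
    error = s_pred - s_next              # prediction error
    # Minimize squared prediction error by gradient descent on the model
    w -= 0.1 * error * np.array([s, a])
    s = s_next

print("learned model:", w)               # approaches the true dynamics (0.8, 0.5)
```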


Title: It kinda works! Challenges and Opportunities for Robot Perception in the Deep Learning Era

Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects, localize them, and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness: I present recent advances in the design of certifiably robust spatial perception algorithms that tolerate extreme amounts of outliers and afford performance guarantees. These algorithms are "hard to break" and are able to work in regimes where all related techniques fail. The second effort targets metric-semantic understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present recent work on real-time metric-semantic understanding, which combines robust estimation with deep learning. I discuss these efforts and their applications to a variety of perception problems, including mesh registration, image-based object localization, and SLAM.
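
For readers unfamiliar with outlier-robust estimation, the following sketch illustrates the general idea with classic RANSAC line fitting. This is a generic textbook technique, not the certifiably robust algorithms from the talk (which build on tools such as truncated least squares and convex relaxations), and all numbers are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 30 inliers on y = 2x + 1, plus 70 gross outliers (outliers dominate)
x_in = rng.uniform(0, 10, 30); y_in = 2 * x_in + 1 + rng.normal(0, 0.1, 30)
x_out = rng.uniform(0, 10, 70); y_out = rng.uniform(-20, 40, 70)
x = np.concatenate([x_in, x_out]); y = np.concatenate([y_in, y_out])

best_model, best_inliers = None, 0
for _ in range(500):
    i, j = rng.choice(len(x), 2, replace=False)   # minimal 2-point sample
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])             # candidate slope
    b = y[i] - a * x[i]                           # candidate intercept
    inliers = np.abs(y - (a * x + b)) < 0.5       # points consistent with it
    if inliers.sum() > best_inliers:
        best_model, best_inliers = (a, b), int(inliers.sum())

print("estimated line:", best_model, "with", best_inliers, "inliers")
```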


Title: Challenges of Self-Supervision via Interaction in Robotics

Deep learning has been revolutionizing robotics over the last few years. Leveraging data to learn robotic skills is critical to gaining generalization capabilities for certain robotic tasks, such as grasping new objects. We will discuss challenges we encountered when learning on real robots, and the progress we've made in tackling them. Learning robotic tasks in the real world requires our algorithms to be data efficient, or our data collection efforts to scale. Real-world tasks have much more diversity and visual complexity than most simulated domains, and some challenges are not captured in existing simulated domains at all, such as the latency caused by sensors and real-time computation, which may impact the performance of certain reinforcement learning algorithms. In this talk, we will discuss how we can learn complex closed-loop self-supervised grasping behaviors on real robots using deep reinforcement learning. We will discuss how we can leverage sim-to-real techniques to gain orders of magnitude in data efficiency, as well as new reinforcement learning algorithms that can scale to visual robotic tasks and generalize to new objects. We will then take a peek at the challenges that lie ahead of us, beyond grasping objects within a bin.
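
The phrase "self-supervision via interaction" can be conveyed with a toy bandit-style sketch (our illustration; real systems like those discussed in the talk operate at vastly larger scale, with images and deep networks): the robot's own measured grasp outcomes label the data used to improve the policy, so no human annotation is needed.

```python
import numpy as np

rng = np.random.default_rng(5)

def try_grasp(angle):
    """Simulated robot + sensor: grasps near the (unknown) best angle succeed."""
    return rng.random() < np.exp(-((angle - 1.2) ** 2))

angles = np.linspace(0, np.pi, 16)       # discretised grasp angles
successes = np.ones(16)                  # optimistic counts (Laplace-style prior)
attempts = np.full(16, 2.0)

for _ in range(2000):
    if rng.random() < 0.1:               # epsilon-greedy exploration
        i = int(rng.integers(16))
    else:
        i = int(np.argmax(successes / attempts))
    outcome = try_grasp(angles[i])       # the interaction produces its own label
    attempts[i] += 1
    successes[i] += outcome

best = int(np.argmax(successes / attempts))
print(f"best grasp angle ~ {angles[best]:.2f} rad "
      f"(success rate {successes[best] / attempts[best]:.2f})")
```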


Title: Teaching Robots Interactively from Few Corrections: Learning Policies and Objectives

Learning manipulation and other complex robotic tasks directly on real systems seems a simple and straightforward strategy. However, for challenging tasks it remains infeasible, due to the prohibitive amounts of physical experience required in the autonomous (reinforcement) learning case, and the lack of proper expert demonstrations in the imitation learning case. Methods that demand less detailed information from the user and are more robust to mistaken feedback are required for end-users who are non-experts but still need flexible and adaptable robots. This talk focuses on learning methods that allow non-expert users to train robots to perform complex tasks with few and vague interactions. Based on occasional relative corrections and users' preferences, it is possible to learn both policies and the objective functions of the task the user wants to teach, all within a few iterations, ensuring data efficiency and making the approach feasible on real systems.
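
The flavor of learning from occasional relative corrections can be conveyed with a toy sketch (our illustration, in the spirit of COACH-style methods rather than the speakers' exact algorithm): a simulated teacher occasionally signals only "more" or "less" for the executed action, and the policy parameters move in that direction. The teacher model, step sizes, and names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.zeros(2)                          # linear policy: a = theta @ [s, 1]

def teacher_correction(s, a):
    """Simulated non-expert teacher: gives only the sign of the desired change."""
    a_desired = 1.5 * s - 0.5                # the teacher's implicit target policy
    return np.sign(a_desired - a)            # +1: increase action, -1: decrease

e = 0.3                                      # assumed magnitude of a human correction
for step in range(300):
    s = rng.uniform(-1, 1)                   # observed state
    a = theta @ np.array([s, 1.0])           # executed action
    if rng.random() < 0.5:                   # the teacher only corrects occasionally
        h = teacher_correction(s, a)
        # Shift the action at this state by e in the corrected direction
        theta += 0.5 * (h * e) * np.array([s, 1.0])

print("learned policy parameters:", theta)   # approaches (1.5, -0.5)
```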


Call for Papers

Submissions

We encourage contributions of either a contributed paper (IEEE conference format, 6 pages excluding references) or an extended abstract of a published work (IEEE conference format, 2 pages maximum).
All papers are reviewed using a blind review process. Authors of selected contributed papers may be asked to submit extended versions of their papers for an RA-L special issue. All papers must be written in English and submitted electronically, in PDF format, by email to oel.workshop@gmail.com.

Important Dates

  • Submission Deadline: 22 October 2019
  • Notification: 25 October 2019
  • Workshop Date: 8 November 2019






Accepted Papers

Authors | Title
Quentin Delfosse, Svenja Stark, Daniel Tanneberg, Vieri Giuliano Santucci, Jan Peters | Open-Ended Learning of Grasp Strategies using Intrinsically Motivated Self-Supervision
Andrew Melnik, Luca Lach, Matthias Plappert, Timo Korthals, Robert Haschke, and Helge Ritter | Tactile Sensing and Deep Reinforcement Learning for In-Hand Manipulation Tasks
Mario Ríos Muñoz, Lambert Schomaker, S. Hamidreza Kasaei | Extending GG-CNN through Automated Model Space Exploration using Knowledge Transfer
J. Kim, N. Cauli, P. Vicente, B. Damas, A. Bernardino, J. Santos-Victor, and F. Cavallo | Cleaning Tasks Knowledge Transfer between Heterogeneous Robots: A Deep Learning Approach
D. Kamale, S. Mghames, T. Pardi, A. Srinivasany, Gerhard Neumann, and Amir Ghalamzan E. | Haptic-guiding to Avoid Collision during Teleoperation
T. Pardi, V. Ortenzi, C. Fairbairn, T. Pipe, A. M. Ghalamzan E., and R. Stolkin | Toward a Kinetically-aware Motion Planning for Robotic Cutting on Arbitrary Surfaces

Photos of the workshop

Organizers

Dr. Hamidreza Kasaei, University of Groningen, the Netherlands

Hamidreza joined the Department of Artificial Intelligence of the University of Groningen, the Netherlands, as a Faculty of Science and Engineering (FSE) Fellow in 2018. Prior to joining the University of Groningen, he completed his Ph.D. on open-ended learning approaches for concurrently recognising multiple objects and their grasp affordances, as part of an FP7 project named RACE: Robustness by Autonomous Competence Enhancement. His main research interests lie in the areas of 3D object perception, grasp affordance, and object manipulation.

Dr. Amir Ghalamzan Esfahani, University of Lincoln, UK

Amir is a Senior Lecturer at the University of Lincoln (UoL), UK, and a member of the Lincoln Centre for Autonomous Systems (LCAS) at UoL. Prior to joining UoL, he was a Research Fellow at the University of Birmingham, UK, conducting research on robotic grasping and manipulation.

Mohammadreza Kasaei, University of Aveiro, Portugal

Mohammadreza is a PhD student at the University of Aveiro (MAP-i programme), Portugal. His Ph.D. aims to propose a hybrid walking framework that couples a model-based walk engine with deep reinforcement learning (DRL) algorithms to combine the potential of both approaches. This hybrid framework aims at generating robust, versatile, and agile omnidirectional walking gaits by exploring the full potential of the robot, taking advantage of the analytical solution's consistency and the flexibility of residual learning.