Sensorimotor Learning with Stability Guarantees via Autonomous Neural Dynamic Policies

1Inria, CNRS, Loria and Université de Lorraine, France.
2Computational Intelligence Laboratory (CILab), Department of Mathematics, University of Patras and Laboratory of Automation and Robotics (LAR), Department of Electrical and Computer Engineering, University of Patras, Greece.
3Robot Perception and Learning Lab (RPL Lab), Department of Computer Science, University College London (UCL), United Kingdom.
4Archimedes/Athena RC, Greece.

*Indicates Equal Contribution

Submitted to IEEE Robotics and Automation Letters (RA-L) 2024

ANDPs Pipeline Overview.

Abstract

State-of-the-art sensorimotor learning algorithms, whether in the context of reinforcement learning or imitation learning, often produce policies that can exhibit unstable behaviors, damaging the robot and/or the environment. Moreover, it is very difficult to interpret the optimized controller and analyze its behavior and/or performance. Traditional robot learning, on the contrary, relies on dynamical system-based policies that can be analyzed for stability/safety. Such policies, however, are neither flexible nor generic, and usually work only with proprioceptive sensor states. In this work, we bridge the gap between generic neural network policies and dynamical system-based policies, and we introduce Autonomous Neural Dynamic Policies (ANDPs) that: (a) are based on autonomous dynamical systems, (b) always produce asymptotically stable behaviors, and (c) are more flexible than traditional stable dynamical system-based policies. ANDPs are fully differentiable, flexible, generic policies that can be used in both imitation learning and reinforcement learning setups, while ensuring asymptotic stability. Through several experiments, we explore the flexibility and capacity of ANDPs across a range of imitation learning tasks, including experiments with image observations. The results show that ANDPs combine the benefits of both neural network-based and dynamical system-based methods.
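
To make the core idea concrete, below is a minimal sketch (in PyTorch) of the kind of policy ANDPs build on: a convex, state-dependent mixture of linear dynamical systems, each globally asymptotically stable at an attractor x* because the symmetric part of every A_i is positive definite. The class name StableMixturePolicy, the network sizes, and the parametrization details are our illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class StableMixturePolicy(nn.Module):
    """Convex mixture of stable linear systems, mixed by a neural network."""

    def __init__(self, dim, n_systems=4, obs_dim=None):
        super().__init__()
        obs_dim = obs_dim or dim
        # Unconstrained parameters from which each A_i is built so that
        # A_i + A_i^T is positive definite by construction.
        self.B = nn.Parameter(0.1 * torch.randn(n_systems, dim, dim))
        self.C = nn.Parameter(0.1 * torch.randn(n_systems, dim, dim))
        self.target = nn.Parameter(torch.zeros(dim))   # attractor x*
        self.weights = nn.Sequential(                  # state-dependent weights
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, n_systems), nn.Softmax(dim=-1),
        )

    def A(self):
        # Symmetric positive-definite part plus a skew-symmetric part:
        # A_i + A_i^T = 2 (B_i B_i^T + eps I), which is positive definite.
        sym = self.B @ self.B.transpose(-1, -2)
        skew = self.C - self.C.transpose(-1, -2)
        eye = torch.eye(self.B.shape[-1])
        return sym + 1e-3 * eye + skew

    def forward(self, x, obs=None):
        # x: (batch, dim) controllable state; obs: any observation
        # (e.g. image features); defaults to x itself.
        obs = x if obs is None else obs
        w = self.weights(obs)                                # (batch, K)
        f = torch.einsum('kij,bj->bki', self.A(), self.target - x)
        return (w.unsqueeze(-1) * f).sum(dim=1)              # desired velocity

Because the softmax weights are nonnegative and sum to one, the Lyapunov function V(x) = ||x - x*||^2 decreases along the mixed vector field, so asymptotic stability holds no matter what the weighting network has learned.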

Video Presentation

Experiment 1: Imitating 2D Trajectories

Experiment 1 Pipeline overview

Experiment 2: Imitating Robotic Behaviors

Experiment 2 Pipeline overview

Experiment 3: Fixed Base Robot - Pouring Task

Experiment 3 Pipeline overview

Experiment 4: Floating Base Robot - Follow the Line

Experiment 4 Pipeline overview

Supplementary Experiment: Learning a spiral motion in joint space

In order to showcase that ANDPs can work with arbitrary state spaces¹, we learn the spiral motion again, but this time in joint space (7 degrees of freedom); that is, x_c = [θ_0, θ_1, …, θ_6] and the output is the desired velocity profile ẋ_c = [θ̇_0, θ̇_1, …, θ̇_6] that the joints should follow. The data collection process for this experiment is similar to the one described above, the only difference being that at every timestep we collect the angle of every joint of the robot, as well as the corresponding angular velocities. The results showcase that we are able to learn the task even in the higher-dimensional joint space.


1 Formally, we should require that the controllable part of the state forms a Euclidean space, which holds for the reduced-coordinates system (joint space) with joint limits.
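
Continuing the sketch above (same assumed names), fitting such a policy to the joint-space demonstrations reduces to plain behavior cloning on the recorded (θ, θ̇) pairs; the stability guarantee is preserved by construction, no matter how well the fit converges. The data below are placeholders and the hyperparameters are illustrative.

import torch

policy = StableMixturePolicy(dim=7)                  # 7-DoF joint space
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Placeholders for the recorded demonstrations: joint angles and the
# corresponding angular velocities, one row per timestep.
theta = torch.randn(1024, 7)
theta_dot = torch.randn(1024, 7)

for _ in range(200):                                 # simple MSE behavior cloning
    opt.zero_grad()
    loss = ((policy(theta) - theta_dot) ** 2).mean()
    loss.backward()
    opt.step()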

Part of the paper was presented at the 2023 ICRA Life-Long Learning with Human Help (L3H2) Workshop (non-archival). You can find the poster here.

BibTeX


@article{dionis2024andps,
  title={{Sensorimotor Learning with Stability Guarantees via Autonomous Neural Dynamic Policies}},
  author={Totsila, Dionis and Chatzilygeroudis, Konstantinos and Modugno, Valerio and Hadjivelichkov, Denis and Kanoulas, Dimitrios},
  year={2024},
  journal={{Preprint}}
}