State-of-the-art sensorimotor learning algorithms, whether in the context of reinforcement learning or imitation learning, produce policies that can often yield unstable behaviors, damaging the robot and/or the environment. Moreover, it is very difficult to interpret the optimized controller and analyze its behavior and/or performance. Traditional robot learning, by contrast, relies on dynamical-system-based policies that can be analyzed for stability and safety. Such policies, however, are neither flexible nor generic, and usually work only with proprioceptive sensor states. In this work, we bridge the gap between generic neural network policies and dynamical-system-based policies, and we introduce Autonomous Neural Dynamic Policies (ANDPs), which: (a) are based on autonomous dynamical systems, (b) always produce asymptotically stable behaviors, and (c) are more flexible than traditional stable dynamical-system-based policies. ANDPs are fully differentiable, flexible, generic policies that can be used in both imitation learning and reinforcement learning setups, while ensuring asymptotic stability. Through a range of experiments, we explore the flexibility and capacity of ANDPs on several imitation learning tasks, including experiments with image observations. The results show that ANDPs combine the benefits of both neural network-based and dynamical-system-based methods.
To showcase that ANDPs can work with arbitrary state spaces 1, we learn the spiral motion again, this time in joint space (7 degrees of freedom). This means that x ≡ xc = [θ0, θ1, ..., θ6] and the output is the desired velocity profile ẋc that the joints should follow. The data collection process is similar to the one described above, with the only difference that at every timestep we collect the angle of every joint of the robot, as well as the corresponding angular velocities. The results show that we are able to learn the task even in this higher-dimensional joint space.
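As a minimal numerical sketch of the joint-space setup, the snippet below simulates a policy of the mixture-of-stable-linear-systems form ẋ = Σᵢ wᵢ(x) Aᵢ (x* − x), in the spirit of ANDPs: because each Aᵢ is positive definite and the weights wᵢ(x) are a convex combination, the joint state x converges asymptotically to the attractor x*. Note this is an illustration under stated assumptions, not the trained policy from the experiments: the number of mixed systems K, the linear weight head W (a stand-in for the neural network that produces the mixing weights), and the attractor x_star are all hypothetical placeholders.

```python
import numpy as np

def make_spd(rng, dim):
    # Random symmetric positive definite matrix: M M^T plus a small diagonal
    # shift, so every mixed system is strictly stable.
    M = rng.standard_normal((dim, dim))
    return M @ M.T + 0.1 * np.eye(dim)

rng = np.random.default_rng(0)
dim = 7                                  # 7 joint angles, as in the experiment
K = 3                                    # number of mixed linear systems (assumption)
As = [make_spd(rng, dim) for _ in range(K)]
W = rng.standard_normal((K, dim))        # stand-in for the network's weight head
x_star = rng.standard_normal(dim)        # attractor, e.g. the final joint configuration

def weights(x):
    # Softmax mixing weights; in ANDPs these come from a neural network,
    # here a fixed linear map is used purely for illustration.
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def velocity(x):
    # Convex combination of stable linear systems driving x toward x_star;
    # the mixture matrix inherits positive definiteness from the A_i.
    w = weights(x)
    return sum(wi * Ai @ (x_star - x) for wi, Ai in zip(w, As))

# Euler integration of the closed-loop joint-space dynamics.
x = rng.standard_normal(dim)
dt = 0.01
for _ in range(10_000):
    x = x + dt * velocity(x)

print(np.linalg.norm(x - x_star))  # distance to the attractor after integration
```

Whatever mixing weights the (stand-in) network outputs, the combined matrix stays positive definite, so the rollout contracts toward x_star; this is the structural property that gives ANDPs their stability guarantee independently of the learned parameters.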
1 Formally, we should require that the controllable part of the state forms a Euclidean space, which holds for the reduced-coordinates system (joint space) with joint limits.
@article{dionis2024andps,
title={{Sensorimotor Learning with Stability Guarantees via Autonomous Neural Dynamic Policies}},
author={Totsila, Dionis and Chatzilygeroudis, Konstantinos and Modugno, Valerio and Hadjivelichkov, Denis and Kanoulas, Dimitrios},
year={2024},
journal={{Preprint}}
}