Original Research Article

Front. Neurorobot., 06 January 2014 | doi: 10.3389/fnbot.2013.00025

Curiosity driven reinforcement learning for motion planning on humanoids

Mikhail Frank1,2,3*, Jürgen Leitner1,2,3, Marijn Stollenga1,2,3, Alexander Förster1,2,3 and Jürgen Schmidhuber1,2,3
  • 1Dalle Molle Institute for Artificial Intelligence, Lugano, Switzerland
  • 2Facoltà di Scienze Informatiche, Università della Svizzera Italiana, Lugano, Switzerland
  • 3Dipartimento Tecnologie Innovative, Scuola Universitaria Professionale della Svizzera Italiana, Manno, Switzerland

Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are typically limited to toy scenarios, such as navigation in a simulated maze or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space by maximizing information gain, learning a world model from experience while controlling the actual iCub hardware in real time. To the best of our knowledge, this is the first embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.

Keywords: artificial curiosity, intrinsic motivation, reinforcement learning, humanoid, iCub, embodied AI
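The abstract describes a curious agent that learns a Markov world model of the robot's state-action space from experience and selects actions that promise high information gain. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of that idea under simplifying assumptions: a toy one-dimensional discretized joint (ToyJointSpace) stands in for the iCub's configuration space, and a count-based novelty bonus is used as a common surrogate for expected information gain. All class and function names here are hypothetical.

```python
"""Minimal sketch of curiosity-driven exploration over a learned
discrete Markov model (illustrative only, not the paper's code)."""
import math
import random


class ToyJointSpace:
    """A 1-D discretized joint with hard limits, standing in for a
    (much larger) humanoid configuration space."""
    def __init__(self, n_states=10):
        self.n_states = n_states
        self.state = n_states // 2

    def step(self, action):
        # action is one of -1, 0, +1; joint limits clamp the motion
        self.state = min(self.n_states - 1, max(0, self.state + action))
        return self.state


class CuriousAgent:
    """Tabular agent that greedily maximizes a count-based novelty
    bonus, a simple surrogate for expected information gain."""
    ACTIONS = (-1, 0, +1)

    def __init__(self, n_states):
        # Transition counts N[(s, a)][s'] define the learned Markov world model.
        self.counts = {(s, a): [0] * n_states
                       for s in range(n_states) for a in self.ACTIONS}

    def intrinsic_reward(self, s, a):
        # Rarely tried state-action pairs look "interesting".
        n = sum(self.counts[(s, a)])
        return 1.0 / math.sqrt(n + 1)

    def act(self, s):
        rewards = [self.intrinsic_reward(s, a) for a in self.ACTIONS]
        best = max(rewards)
        return random.choice([a for a, r in zip(self.ACTIONS, rewards) if r == best])

    def update(self, s, a, s_next):
        self.counts[(s, a)][s_next] += 1


if __name__ == "__main__":
    env = ToyJointSpace()
    agent = CuriousAgent(env.n_states)
    s = env.state
    for _ in range(200):
        a = agent.act(s)
        s_next = env.step(a)
        agent.update(s, a, s_next)
        s = s_next
    # After exploration, the counts form a compact Markov model of the joint's dynamics.
    tried = sum(1 for c in agent.counts.values() if sum(c) > 0)
    print(f"state-action pairs tried: {tried}/{len(agent.counts)}")
```

In the paper's setting, the same principle operates over a far larger state-action space and drives the real iCub hardware through the low-level reactive control layer; this toy version only conveys how curiosity (here, a novelty bonus) biases exploration toward poorly modeled regions, such as joint limits and other constraints.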

Citation: Frank M, Leitner J, Stollenga M, Förster A and Schmidhuber J (2014) Curiosity driven reinforcement learning for motion planning on humanoids. Front. Neurorobot. 7:25. doi: 10.3389/fnbot.2013.00025

Received: 05 July 2013; Accepted: 04 December 2013;
Published online: 06 January 2014.

Edited by:

Gianluca Baldassarre, Italian National Research Council, Italy

Reviewed by:

Anthony F. Morse, University of Skövde, Sweden
Hsin Chen, National Tsing-Hua University, Taiwan
Alberto Finzi, Università di Napoli Federico II, Italy

Copyright © 2014 Frank, Leitner, Stollenga, Förster and Schmidhuber. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mikhail Frank, Dalle Molle Institute for Artificial Intelligence, Galleria 2, CH-6928 Manno-Lugano, Switzerland e-mail: kail@idsia.ch
