%A Neumann,Gerhard
%A Daniel,Christian
%A Paraschos,Alexandros
%A Kupcsik,Andras
%A Peters,Jan
%D 2014
%J Frontiers in Computational Neuroscience
%G English
%K Robotics,Policy Search,modularity,movement primitives,motor control,Hierarchical Reinforcement Learning
%R 10.3389/fncom.2014.00062
%8 2014-June-11
%9 Review
%+ Gerhard Neumann,Department of Computer Science, Intelligent Autonomous Systems, Technische Universität Darmstadt,Darmstadt, Germany,neumann@ias.tu-darmstadt.de
%! Learning Modular Policies for Robotics
%T Learning modular policies for robotics
%U https://www.frontiersin.org/articles/10.3389/fncom.2014.00062
%V 8
%0 JOURNAL ARTICLE
%@ 1662-5188
%X A promising idea for scaling robot learning to more complex tasks is to use elemental behaviors as building blocks to compose more complex behavior. Ideally, such building blocks are used in combination with a learning algorithm that is able to learn to select, adapt, sequence, and co-activate the building blocks. While there has been a lot of work on approaches that support one of these requirements, no learning algorithm exists that unifies all these properties in one framework. In this paper we present our work on a unified approach for learning such a modular control architecture. We introduce new policy search algorithms that are based on information-theoretic principles and are able to learn to select, adapt, and sequence the building blocks. Furthermore, we developed a new representation for the individual building blocks that supports co-activation and principled ways of adapting the movement. Finally, we summarize our experiments on learning modular control architectures in simulation and with real robots.