Dynamical Perceptual-Motor Primitives for Better Deep Reinforcement Learning Agents

EasyChair Preprint 5983 · 12 pages · Date: July 1, 2021

Abstract

Recent innovations in Deep Reinforcement Learning (DRL) and Artificial Intelligence (AI) techniques have allowed for the development of artificial agents that can outperform human counterparts. But when it comes to multiagent task contexts, the behavioral patterning of AI agents is just as important as their performance. Indeed, successful multiagent interaction requires that co-actors behave reciprocally, anticipate each other's behaviors, and readily perceive each other's behavioral intentions. Thus, developing AI agents that can produce behaviors compatible with those of human co-actors is of vital importance. Of particular relevance here, research exploring the dynamics of human behavior has demonstrated that many human behaviors and actions can be modeled using a small set of dynamical perceptual-motor primitives (DPMPs) and, moreover, that these primitives can also capture the complex behavior of humans in multiagent scenarios. Motivated by this understanding, the current paper proposes methodologies that use DPMPs to augment the training and action dynamics of DRL agents, ensuring that the agents inherit the essential patterning of human behavior while still allowing for optimal exploration of the task solution space during training. The feasibility of these methodologies is demonstrated by creating hybrid DPMP-DRL agents for a multiagent herding task. Overall, this approach leads to faster training of DRL agents that also exhibit the behavioral characteristics of expert human actors.

Keyphrases: Deep Reinforcement Learning, Dynamical Motor Primitives, Multiagent Coordination
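To make the hybrid idea concrete, the following is a minimal sketch of one common way DPMPs are formalized in the movement-dynamics literature: a damped mass-spring system that converts a (here hypothetical) DRL-policy-selected target into smooth, human-like approach trajectories. The function names, parameter values, and the one-dimensional setting are illustrative assumptions, not the paper's actual implementation; in practice the damping and stiffness parameters would be fit to human movement data.

```python
def dpmp_step(x, v, target, b=2.0, k=4.0, dt=0.01):
    """One Euler step of a damped mass-spring DPMP.

    x, v   -- current position and velocity of the agent
    target -- goal position (in a hybrid agent, chosen by the DRL policy)
    b, k   -- damping and stiffness (illustrative values, not fitted)
    """
    a = -b * v - k * (x - target)   # mass-spring acceleration toward target
    v = v + a * dt
    x = x + v * dt
    return x, v

def rollout(target, steps=2000):
    """Integrate the DPMP until the agent settles near the target."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x, v = dpmp_step(x, v, target)
    return x
```

The design point is the division of labor: the policy only has to learn *where* to go (the discrete target), while the DPMP dynamics determine *how* the agent gets there, which both shrinks the exploration space during training and constrains the resulting trajectories to the smooth patterning characteristic of human action.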