Img2Motion: Learning to Drive 3D Avatars using Videos
EasyChair Preprint 1633, 4 pages • Date: October 11, 2019

Abstract
This paper presents a novel neural-network motion retargeting system that drives rigged 3D digital human avatars using videos. We study the problem of building a motion mapping between 2D video and 3D skeletons, in which the source characters can drive target subjects with varying skeleton structures. In particular, the target 3D avatars may have different kinematic characteristics, e.g. bone lengths, skeleton scales, and skeleton topologies. Traditional motion retargeting operates between pairs of like characters, i.e. 2D characters to 2D characters or 3D characters to 3D characters, leaving a gap when 2D character animations must drive rigged 3D characters. These traditional techniques may not be capable of retargeting 2D motions to 3D digital human avatars from sparse skeleton motion data. Motivated by these limitations, we present a pipeline for building a neural-network motion retargeting system that retargets motion from 2D videos to rigged 3D digital human avatars. The system can be used in games and virtual reality applications, and can also generate a more comprehensive dataset with a larger variety of human poses by animating existing rigged human models.

Keyphrases: 3D pose estimation, Digital Human, motion retargeting