The creation of improvised dance choreography is an important research field in cross-modal analysis. A key challenge of this task is how to effectively create and correlate music and dance through a probabilistic one-to-many mapping, which is essential for creating realistic dances of various genres. To address this issue, we propose a GAN-based cross-modal association framework, DeepDance, which correlates two different modalities (dance motion and music), aiming to create the desired dance sequence for the input music. Its generator predictively produces the dance movements that best fit the current music piece by learning from examples. Its discriminator, on the other hand, acts as an external evaluator from the audience's perspective and judges the whole performance. The generated dance movements and the corresponding input music are considered well-matched if the discriminator cannot distinguish the generated movements from the training samples according to the estimated probability. By adding motion-consistency constraints to our loss function, the proposed framework is able to create long, realistic dance sequences. To alleviate the problem of expensive and inefficient data collection, we propose an effective approach to create a large-scale dataset, YouTube-Dance3D, from open data sources. Extensive experiments on currently available music-dance datasets and our YouTube-Dance3D dataset demonstrate that our approach effectively captures the correlation between music and dance and can be used to choreograph appropriate dance sequences.
@ARTICLE{9042236,
  author  = {G. Sun and Y. Wong and Z. Cheng and M. S. Kankanhalli and W. Geng and X. Li},
  journal = {IEEE Transactions on Multimedia},
  title   = {DeepDance: Music-to-Dance Motion Choreography with Adversarial Learning},
  year    = {2020},
  pages   = {1-1},
}
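To make the adversarial objective described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a conditional generator/discriminator pair with a motion-consistency term in the generator loss. All module architectures, tensor dimensions (music_dim, pose_dim), and the weight lam are illustrative assumptions, not values from the paper.

# A minimal sketch (not the authors' code) of the adversarial objective
# described in the abstract. Assumed shapes: music features (B, T, music_dim)
# and pose sequences (B, T, pose_dim); all dimensions are hypothetical.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a music-feature sequence to a dance-pose sequence."""
    def __init__(self, music_dim=128, pose_dim=63, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(music_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, music):                  # (B, T, music_dim)
        h, _ = self.rnn(music)
        return self.head(h)                    # (B, T, pose_dim)

class Discriminator(nn.Module):
    """Scores how plausibly a (music, motion) pair is a real performance."""
    def __init__(self, music_dim=128, pose_dim=63, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(music_dim + pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, music, motion):
        h, _ = self.rnn(torch.cat([music, motion], dim=-1))
        return self.head(h[:, -1])             # one real/fake logit per pair

def motion_consistency(motion):
    """Penalize frame-to-frame jumps so long generated sequences stay smooth."""
    return (motion[:, 1:] - motion[:, :-1]).pow(2).mean()

def generator_loss(D, music, fake_motion, lam=10.0):
    """Adversarial term plus a weighted consistency term (lam is a guess)."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        D(music, fake_motion), torch.ones(music.size(0), 1))
    return adv + lam * motion_consistency(fake_motion)

The discriminator conditions on both modalities, so it judges the cross-modal match rather than motion realism alone; the consistency term is one plausible reading of the "motion consistency constraints" the abstract mentions.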
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis.
[1] Fan Rukun, Music-Driven Dance Motion Synthesis
[2] Lan Heng, An Integrated Music-and-Dance Choreography System Combining Deep Learning
[3] Lai Zhangjiong, Research on Automatic Music-Dance Choreography Technology Based on Deep Learning