Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks

Date

2019

Authors

Wallace, Benedikte
Martin, Charles Patrick
Nymoen, Kristian

Publisher

ACM

Abstract

In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single-point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural network classifier to show that the input sound can be identified from generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning.
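
An MDRNN couples a recurrent network with a mixture density output layer, so that each time step yields a full probability distribution over the next motion value rather than a single point prediction. The sketch below illustrates this general idea; it is a minimal PyTorch reconstruction, not the authors' implementation, and every name, layer size, and dimension in it is an illustrative assumption rather than a detail from the paper.

```python
import torch
import torch.nn as nn


class MDRNN(nn.Module):
    """An LSTM that outputs Gaussian mixture parameters at each time step.

    Feature dimensions, hidden size, and mixture count are illustrative
    placeholders, not values from the paper.
    """

    def __init__(self, input_dim=3, hidden_dim=64, n_mixes=5, out_dim=1):
        super().__init__()
        self.n_mixes, self.out_dim = n_mixes, out_dim
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        # One weight per component, plus a mean and log-std per output dim.
        self.mdn = nn.Linear(hidden_dim, n_mixes * (1 + 2 * out_dim))

    def forward(self, x):
        h, _ = self.rnn(x)  # h: (batch, time, hidden_dim)
        params = self.mdn(h)
        pi, mu, log_sigma = params.split(
            [self.n_mixes,
             self.n_mixes * self.out_dim,
             self.n_mixes * self.out_dim], dim=-1)
        shape = (*mu.shape[:-1], self.n_mixes, self.out_dim)
        return (torch.softmax(pi, dim=-1),         # mixture weights
                mu.reshape(shape),                 # component means
                torch.exp(log_sigma).reshape(shape))  # positive std devs


def mdn_loss(pi, mu, sigma, target):
    """Negative log-likelihood of the target under the predicted mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    # Sum log-densities over output dims (diagonal covariance), then
    # log-sum-exp over components weighted by the mixture probabilities.
    log_prob = comp.log_prob(target.unsqueeze(-2)).sum(dim=-1)
    return -torch.logsumexp(torch.log(pi) + log_prob, dim=-1).mean()


# Toy usage: 8 sequences of 100 frames, each frame a 3-d feature vector
# (e.g. an audio descriptor plus the previous tracing position).
model = MDRNN()
x = torch.randn(8, 100, 3)
pi, mu, sigma = model(x)
loss = mdn_loss(pi, mu, sigma, torch.randn(8, 100, 1))
loss.backward()
```

One common way to generate a tracing from such a model is to sample a component index from pi and then a value from the corresponding Gaussian at each step, feeding the sample back in as the next input; sampling rather than taking the mean is what lets the network produce novel, varied tracings instead of a single averaged trajectory.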

Citation

Benedikte Wallace, Charles P. Martin, and Kristian Nymoen. 2019. Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks. In 6th International Conference on Movement and Computing (MOCO ’19), October 10–12, 2019, Tempe, AZ, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3347122.3371376

Book Title

6th International Conference on Movement and Computing (MOCO ’19)

Restricted until

2037-12-31
