Drum Translation for Timbral and Rhythmic Transformation
Many recent approaches to creative transformations of musical audio have been motivated by the success of raw audio generation models such as WaveNet, in which audio samples are modeled by generative neural networks. This paper describes a generative audio synthesis model for multi-drum translation based on a WaveNet denoising autoencoder architecture. The timbre of an arbitrary source audio input is transformed to sound as if it were played by various percussive instruments while preserving its rhythmic structure. Two evaluations of the transformations are conducted, based on the capacity of the model to preserve the rhythmic patterns of the input and on the audio quality as it relates to the timbre of the target drum domain. The first evaluation measures the rhythmic similarities between the source audio and the corresponding drum translations, and the second provides a numerical analysis of the quality of the synthesised audio. Additionally, a semi- and fully-automatic audio effect is proposed, in which the user may assist the system by manually labelling source audio segments or by using a state-of-the-art automatic drum transcription system prior to drum translation.
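To make the first evaluation concrete, rhythmic similarity between a source clip and its drum translation can be scored by detecting onsets in both signals and computing an onset F-measure. The sketch below is illustrative rather than the paper's exact protocol: it assumes librosa for onset detection and mir_eval for scoring, and the 50 ms matching window is an assumed tolerance.

```python
# Sketch of an onset-based rhythmic similarity measure between a source
# recording and its drum translation. Assumes librosa and mir_eval are
# available; the 50 ms matching window is an illustrative choice, not
# necessarily the evaluation setting used in the paper.
import librosa
import mir_eval


def rhythmic_similarity(source_path, translation_path, window=0.05):
    """Return the onset F-measure between source audio and its translation."""
    y_src, sr_src = librosa.load(source_path, sr=None)
    y_trn, sr_trn = librosa.load(translation_path, sr=None)

    # Detect onset times (in seconds) in both signals.
    onsets_src = librosa.onset.onset_detect(y=y_src, sr=sr_src, units="time")
    onsets_trn = librosa.onset.onset_detect(y=y_trn, sr=sr_trn, units="time")

    # An onset in the translation counts as correct if it falls within
    # `window` seconds of an onset in the source.
    f, precision, recall = mir_eval.onset.f_measure(
        onsets_src, onsets_trn, window=window
    )
    return f


print(rhythmic_similarity("source.wav", "translated_drums.wav"))
```

A score near 1.0 indicates the translation reproduces the source's onset pattern closely; lower scores indicate dropped or spurious hits.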
Adversarial Synthesis of Drum Sounds
Recent advancements in generative audio synthesis have allowed for the development of creative tools for the generation and manipulation of audio. In this paper, a strategy is proposed for the synthesis of drum sounds using generative adversarial networks (GANs). The system is based on a conditional Wasserstein GAN, which learns the underlying probability distribution of a dataset compiled of labeled drum sounds. Labels are used to condition the system on an integer value that can be used to generate audio with the desired characteristics. Synthesis is controlled by an input latent vector that enables continuous exploration and interpolation of generated waveforms. Additionally, we experiment with a training method that progressively learns to generate audio at different temporal resolutions. We present our results and discuss the benefits of generating audio with GANs, along with sound examples and demonstrations.
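As a rough illustration of how label conditioning and latent interpolation fit together, the following PyTorch sketch embeds an integer drum-class label, concatenates it with a latent vector, and decodes a waveform. The layer sizes, embedding dimension, and output length are assumptions for illustration and do not reproduce the paper's architecture or its progressive training.

```python
# Minimal sketch of a label-conditioned waveform generator and latent
# interpolation, in the spirit of a conditional WGAN. Layer sizes, the
# embedding dimension, and the output length are illustrative assumptions.
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, n_classes=5, embed_dim=16, out_len=16384):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, out_len),
            nn.Tanh(),  # waveform samples in [-1, 1]
        )

    def forward(self, z, labels):
        # Condition the latent vector on the integer class label.
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond)


# Continuous exploration: interpolate between two latent vectors while
# holding the class label fixed, producing a morph between waveforms.
gen = ConditionalGenerator()
z0, z1 = torch.randn(1, 100), torch.randn(1, 100)
label = torch.tensor([2])  # hypothetical integer id, e.g. "snare"
for alpha in torch.linspace(0.0, 1.0, steps=5):
    z = (1 - alpha) * z0 + alpha * z1
    waveform = gen(z, label)  # shape: (1, out_len)
```

Because the label is supplied separately from the latent vector, the same interpolation path can be rendered for different drum classes, which is what makes the conditioned latent space useful as a creative control.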
Improved Automatic Instrumentation Role Classification and Loop Activation Transcription
Many electronic music (EM) genres are composed through the activation of short audio recordings of instruments designed for seamless repetition, or loops. In this work, loops of key structural groups such as bass, percussive, or melodic elements are labelled by the role they occupy in a piece of music through the task of automatic instrumentation role classification (AIRC). Such labels assist EM producers in the identification of compatible loops in large unstructured audio databases. While human annotation is often laborious, automatic classification allows for fast and scalable generation of these labels. We experiment with several deep-learning architectures and propose a data augmentation method for improving multi-label representation to balance classes within the Freesound Loop Dataset. To improve the classification accuracy of the architectures, we also evaluate different pooling operations. Results indicate that, in combination with the data augmentation and pooling strategies, the proposed system achieves state-of-the-art performance for AIRC. Additionally, we demonstrate how our proposed AIRC method is useful for analysing the structure of EM compositions through loop activation transcription.
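To suggest what a multi-label AIRC model with a configurable pooling operation might look like, the hypothetical sketch below runs a small CNN over mel-spectrograms, pools over time with either max or mean, and emits an independent sigmoid probability per role. The layer sizes and the role vocabulary are assumed for illustration, not the architectures evaluated in the paper.

```python
# Hypothetical sketch of a multi-label AIRC model: a small CNN over
# mel-spectrograms with a configurable temporal pooling operation and a
# sigmoid output per instrumentation role. Layer sizes and the role
# vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

ROLES = ["bass", "percussion", "melody", "chords", "fx"]  # assumed label set


class AIRCNet(nn.Module):
    def __init__(self, n_mels=64, n_roles=len(ROLES), pooling="max"):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.pooling = pooling
        self.head = nn.Linear(64 * n_mels, n_roles)

    def forward(self, spec):  # spec: (batch, 1, n_mels, time)
        h = self.conv(spec)  # (batch, 64, n_mels, time)
        # Pooling over the time axis; the choice of operation is one of
        # the design decisions the paper compares.
        if self.pooling == "max":
            h = h.max(dim=-1).values
        else:
            h = h.mean(dim=-1)
        logits = self.head(h.flatten(1))
        return torch.sigmoid(logits)  # independent per-role probabilities


# A loop may fill several roles at once, hence multi-label training with
# binary cross-entropy rather than a softmax over mutually exclusive classes.
model = AIRCNet(pooling="mean")
spec = torch.randn(8, 1, 64, 128)
probs = model(spec)  # shape: (8, 5)
```

Sliding such a classifier across the timeline of a finished composition and thresholding the per-role probabilities is one plausible route to the loop activation transcription described above.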