Amp-Space: A Large-Scale Dataset for Fine-Grained Timbre Transformation
We release Amp-Space, a large-scale dataset of paired audio samples: a source audio signal and an output signal, the result of a timbre transformation. The transformations we study come from black-box musical tools (amplifiers, stompboxes, studio effects) traditionally used to shape the sound of guitars, basses, and synthesizers. For each sample of transformed audio, the set of parameters used to create it is given. Samples come from both real and simulated devices, the latter allowing for orders of magnitude more data than is found in comparable datasets.
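As a rough illustration of how such paired samples might be consumed downstream, the following sketch loads a (source, transformed, parameters) triple for a conditional model. The file layout, manifest fields, and class name are assumptions for illustration, not the released loader.

```python
# Hypothetical sketch of consuming Amp-Space-style paired samples;
# manifest format and field names are assumptions, not the released loader.
import numpy as np
import soundfile as sf  # assumed audio I/O backend
import torch
from torch.utils.data import Dataset


class AmpSpaceLikeDataset(Dataset):
    """Yields (source, transformed, params) triples for a conditional model."""

    def __init__(self, manifest):
        # manifest: list of dicts with audio paths and a parameter vector, e.g.
        # {"source": "dry.wav", "transformed": "amp.wav", "params": [0.7, 0.3, 0.5]}
        self.manifest = manifest

    def __len__(self):
        return len(self.manifest)

    def __getitem__(self, idx):
        entry = self.manifest[idx]
        source, _ = sf.read(entry["source"], dtype="float32")
        transformed, _ = sf.read(entry["transformed"], dtype="float32")
        params = np.asarray(entry["params"], dtype=np.float32)
        return (
            torch.from_numpy(source),       # dry input signal
            torch.from_numpy(transformed),  # output of the device / simulation
            torch.from_numpy(params),       # device settings used as conditioning
        )
```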
We demonstrate potential use cases of this data by (a) pre-training a conditional WaveNet model on synthetic data and showing that this reduces the number of samples necessary to digitally reproduce a real musical device, and (b) training a variational autoencoder to shape a continuous space of timbre transformations for creating new sounds through interpolation.
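To make the interpolation idea in (b) concrete, the sketch below encodes two device settings into a latent space and decodes blends between them. The toy encoder/decoder and dimensions here are placeholders standing in for a trained variational autoencoder, not the paper's model.

```python
# Hypothetical sketch of interpolating between two timbre transformations in a
# learned latent space; the toy VAE below is an assumption, not the paper's model.
import torch
import torch.nn as nn


class TimbreVAE(nn.Module):
    """Toy VAE over device parameter vectors (placeholder architecture)."""

    def __init__(self, n_params=8, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_params)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        return self.decoder(z)


vae = TimbreVAE()  # in practice, weights would be loaded from a trained checkpoint
setting_a = torch.rand(1, 8)  # e.g. a "clean" amplifier setting
setting_b = torch.rand(1, 8)  # e.g. a "high-gain" setting

with torch.no_grad():
    z_a, _ = vae.encode(setting_a)
    z_b, _ = vae.encode(setting_b)
    # Walk the latent space between the two transformations and decode
    # intermediate settings, yielding timbres "in between" the two devices.
    for alpha in torch.linspace(0.0, 1.0, steps=5):
        z = (1 - alpha) * z_a + alpha * z_b
        blended_params = vae.decode(z)
```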