MUSESCAPE: An interactive content-aware music browser
Advances in hardware performance, network bandwidth and audio compression have made possible the creation of large personal digital music collections. Although there is a significant body of work in image and video browsing, there has been little work that directly addresses the problem of audio and especially music browsing. In this paper, Musescape, a prototype music browsing system, is described and evaluated. The main characteristics of the system are automatic configuration based on Computer Audition techniques and the use of continuous audio-music feedback while browsing and interacting with the system. The described ideas and techniques take advantage of the unique characteristics of music signals. A pilot user study was conducted to explore and evaluate the proposed user interface. The results indicate that the use of automatically extracted tempo information reduces browsing time and that continuous interactive audio feedback is appropriate for this particular domain.
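The automatically extracted tempo mentioned in the abstract can be obtained with standard beat-tracking techniques. Below is a minimal sketch, not Musescape's actual Computer Audition pipeline: it autocorrelates an onset-strength envelope over candidate beat periods and returns the tempo with the strongest self-similarity. All function names and parameters are illustrative.

```python
# Minimal tempo estimator: autocorrelate an onset-strength envelope
# over candidate beat periods. Illustrative only -- not the algorithm
# used by Musescape.

def estimate_tempo(onsets, frame_rate, bpm_range=(60, 180)):
    """onsets: per-frame onset strengths; frame_rate: frames per second."""
    n = len(onsets)
    min_lag = int(frame_rate * 60.0 / bpm_range[1])   # shortest beat period
    max_lag = int(frame_rate * 60.0 / bpm_range[0])   # longest beat period
    best_lag, best_score = None, float("-inf")
    for lag in range(min_lag, min(max_lag, n - 1) + 1):
        # self-similarity of the envelope at this candidate beat period
        score = sum(onsets[i] * onsets[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return frame_rate * 60.0 / best_lag   # beats per minute

# A pulse train with a pulse every 50 frames at 100 frames/s = 120 BPM:
env = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
print(estimate_tempo(env, frame_rate=100))   # prints 120.0
```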
Driving pitch-shifting and time-scaling algorithm with adaptive and gestural techniques
This article demonstrates how a specific digital audio effect can benefit from proper control, whether that control comes from sounds and/or from gesture. When this control comes from sounds, it can be called “adaptive” or “sound automated”. When this control comes from gesture, it can be called “gesturally controlled”. The audio effects we use for this demonstration are time-scaling and pitch-shifting in the particular contexts of vibrato, prosody change, time unfolding and rhythm change.
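As background for the two effects named above, pitch-shifting is often realized via the classic decomposition "time-stretch, then resample back to the original duration". The sketch below implements that decomposition with a deliberately naive overlap-add (OLA) stretcher and linear-interpolation resampling; it is a crude illustration, not the article's algorithm, and all names and window/hop values are arbitrary.

```python
import math

def time_stretch_ola(x, rate, win=256, hop=64):
    """Naive overlap-add time stretching: read analysis frames every
    hop*rate samples, overlap-add them every hop samples. rate > 1
    shortens the sound, rate < 1 lengthens it; pitch is unchanged."""
    window = [0.5 - 0.5 * math.cos(2.0 * math.pi * n / win) for n in range(win)]
    out_len = int(len(x) / rate) + win
    out = [0.0] * out_len
    norm = [1e-12] * out_len          # window-sum used for normalization
    t, pos = 0.0, 0
    while int(t) + win <= len(x):
        a = int(t)
        for n in range(win):
            out[pos + n] += window[n] * x[a + n]
            norm[pos + n] += window[n]
        t += hop * rate
        pos += hop
    return [o / w for o, w in zip(out, norm)]

def pitch_shift(x, semitones, win=256, hop=64):
    """Pitch-shift by stretching, then resampling back to roughly the
    original duration (linear interpolation)."""
    factor = 2.0 ** (semitones / 12.0)
    y = time_stretch_ola(x, 1.0 / factor, win, hop)
    out, pos = [], 0.0
    while pos < len(y) - 1:
        i = int(pos)
        frac = pos - i
        out.append((1.0 - frac) * y[i] + frac * y[i + 1])
        pos += factor
    return out
```

Real systems replace the OLA stretcher with phase-vocoder or PSOLA-style techniques to avoid the phasing artifacts this naive version produces; the adaptive or gestural control discussed in the article then drives parameters like `rate` and `semitones` over time.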
CYMATIC: A tactile controlled physical modelling instrument
The recent trend towards the virtual in music synthesis has led to the inevitable decline of the physical, inserting what might be described as a ‘veil of tactile paralysis’ between the musician and the sound source. The addition of tactile and gestural interfaces to electronic musical instruments offers the possibility of moving some way towards reversing this trend. This paper describes a new computer-based musical instrument, known as Cymatic, which offers gestural control as well as tactile and proprioceptive feedback via a force-feedback joystick and a tactile-feedback mouse. Cymatic makes use of a mass/spring physical modelling paradigm to model multi-dimensional, interconnectable resonating structures that can be played in real time with various excitation methods. It therefore restores to a degree the musician’s sense of working with a true physical instrument in the natural world. Cymatic has been used in a public performance of a specially composed work, which is described.
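The mass/spring paradigm at Cymatic's core can be illustrated with a one-dimensional toy version: point masses joined by identical springs, fixed at both ends, integrated with a symplectic Euler step and excited by a pluck. This is only a sketch of the general idea, not Cymatic's model, which supports multi-dimensional, interconnectable structures and several excitation methods; every name and constant below is illustrative.

```python
def simulate_chain(n_masses, k, m, damping, dt, steps, excite_index):
    """Symplectic-Euler simulation of a 1-D chain of point masses
    joined by identical springs, fixed at both ends. The chain is
    'plucked' by displacing one mass; returns that mass's displacement
    history."""
    x = [0.0] * (n_masses + 2)      # indices 0 and n_masses+1 are fixed ends
    v = [0.0] * (n_masses + 2)
    x[excite_index] = 1.0           # pluck excitation
    history = []
    for _ in range(steps):
        # update velocities from spring and damping forces...
        for i in range(1, n_masses + 1):
            force = (k * (x[i - 1] - x[i]) + k * (x[i + 1] - x[i])
                     - damping * v[i])
            v[i] += dt * force / m
        # ...then positions from the new velocities (symplectic Euler)
        for i in range(1, n_masses + 1):
            x[i] += dt * v[i]
        history.append(x[excite_index])
    return history

# Pluck the middle of an 8-mass chain and watch the vibration decay:
h = simulate_chain(n_masses=8, k=100.0, m=1.0, damping=0.2,
                   dt=0.01, steps=2000, excite_index=4)
```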
Introducing Audio D-TOUCH: A tangible user interface for music composition and performance
"Audio d-touch" uses a consumer-grade web camera and customizable block objects to provide an interactive tangible interface for a variety of time-based musical tasks such as sequencing, drum editing and collaborative composition. Three instruments are presented here. Future applications of the interface are also considered.
Additive synthesis based on the continuous wavelet transform: A sinusoidal plus transient model
In this paper, a new algorithm to compute an additive synthesis model of a signal is presented. An analysis based on the Continuous Wavelet Transform (CWT) is used to extract the time-varying amplitudes and phases of the model. A coarse-to-fine analysis increases the algorithm's efficiency. The transient analysis is performed using the same algorithm developed for the sinusoidal analysis, with the proper parameter settings. A sinusoidal plus transient scheme is obtained. Typical sound transformations have been implemented to validate the obtained results.
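The additive model resynthesizes a sound as a sum of sinusoids with time-varying amplitudes and phases. The sketch below shows only that synthesis step, with the phase of each partial accumulated from its instantaneous frequency; the CWT-based analysis that produces the amplitude and frequency envelopes is the paper's contribution and is not reproduced here. Names and parameters are illustrative.

```python
import math

def additive_synth(partials, sr, dur):
    """Sum of sinusoids with time-varying amplitude and frequency.
    partials: list of (amp_fn, freq_fn) pairs, each a function of time
    in seconds. Phase is the running sum of instantaneous frequency."""
    n = int(sr * dur)
    out = [0.0] * n
    for amp_fn, freq_fn in partials:
        phase = 0.0
        for i in range(n):
            t = i / sr
            out[i] += amp_fn(t) * math.sin(phase)
            phase += 2.0 * math.pi * freq_fn(t) / sr
    return out

# A steady 440 Hz partial plus one gliding upward from 880 to 900 Hz:
y = additive_synth([(lambda t: 0.5, lambda t: 440.0),
                    (lambda t: 0.25, lambda t: 880.0 + 20.0 * t)],
                   sr=8000, dur=1.0)
```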
Analysis and resynthesis of quasi-harmonic sounds: an iterative filterbank approach
We employ a hybrid state-space sinusoidal model for general use in analysis-synthesis based audio transformations. This model, which has appeared previously in altered forms (e.g. [5], [8], perhaps others), combines the advantages of a source-filter model with the flexible, time-frequency based transformations of the sinusoidal model. For this paper, we specialize the parameter identification task to a class of “quasi-harmonic” sounds. The latter represent a variety of acoustic sources in which multiple, closely spaced modes cluster about principal harmonics loosely following a harmonic structure (some inharmonicity is allowed). To estimate the sinusoidal parameters, an iterative filterbank splits the signal into subbands, one per principal harmonic. Each filter is optimally designed by a linear programming approach to be concave in the passband, monotonic in transition regions, and to specifically null out sinusoids in other subband regions. Within each subband, the constant frequencies and exponential decay rates of each mode are estimated by a Steiglitz-McBride approach, then time-varying amplitudes and phases are tracked by a Kalman filter. The instantaneous phase estimate is used to derive an average instantaneous frequency estimate; the latter, averaged over all modes in the subband region, updates the filter’s center frequency for the next iteration. In this way, the filterbank structure progressively adapts to the specific inharmonicity structure of the source recording. Analysis-synthesis applications are demonstrated with standard (time/pitch-scaling) transformation protocols, as well as some possibly novel effects facilitated by the “source-filter” aspect.
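The core estimation step, recovering a constant frequency and exponential decay rate for a mode, can be illustrated with a much simpler Prony-style stand-in for the Steiglitz-McBride iteration: fit a second-order linear predictor to the samples by least squares, then read the decay and frequency off the predictor coefficients. This toy version handles one noiseless mode only; all names below are illustrative.

```python
import math

def estimate_damped_sinusoid(x):
    """Fit x[n] ~ a*x[n-1] + b*x[n-2] by least squares (Prony-style
    linear prediction), then recover the decay rate alpha and the
    frequency omega (radians/sample) from the coefficients, using
    a = 2*exp(-alpha)*cos(omega) and b = -exp(-2*alpha)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n - 1] * x[n - 1]
        s12 += x[n - 1] * x[n - 2]
        s22 += x[n - 2] * x[n - 2]
        r1 += x[n] * x[n - 1]
        r2 += x[n] * x[n - 2]
    det = s11 * s22 - s12 * s12          # 2x2 normal-equation solve
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    alpha = -0.5 * math.log(-b)
    omega = math.acos(a / (2.0 * math.exp(-alpha)))
    return alpha, omega

# A mode decaying at 0.01 nepers/sample, oscillating at 0.3 rad/sample:
x = [math.exp(-0.01 * n) * math.cos(0.3 * n) for n in range(200)]
alpha, omega = estimate_damped_sinusoid(x)
```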
The CATERPILLAR system for data-driven concatenative sound synthesis
Concatenative data-driven synthesis methods are gaining more interest for musical sound synthesis and effects. They are based on a large database of sounds and a unit selection algorithm which finds the units that best match a given sequence of target units. We describe related work and our CATERPILLAR synthesis system, focusing on recent new developments: the advantages of the addition of a relational SQL database, work on segmentation by alignment, the reformulation and extension of the unit selection algorithm using a constraint resolution approach, and new applications for musical and speech synthesis.
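The basic dynamic-programming formulation of unit selection, which the abstract says CATERPILLAR reformulates as constraint resolution, can be sketched as a Viterbi search minimizing a target cost (how well a unit matches its target) plus a concatenation cost (how well adjacent units join). The scalar cost functions below are toy stand-ins; the real system operates on many descriptors drawn from its SQL database.

```python
def select_units(targets, units, target_cost, concat_cost):
    """Viterbi-style unit selection: choose one database unit per
    target, minimizing the summed target cost plus the concatenation
    cost between consecutive chosen units."""
    n = len(units)
    cost = [target_cost(targets[0], units[u]) for u in range(n)]
    back = [[u] for u in range(n)]
    for t in targets[1:]:
        new_cost, new_back = [], []
        for u in range(n):
            # best predecessor for ending the sequence at unit u
            p = min(range(n),
                    key=lambda q: cost[q] + concat_cost(units[q], units[u]))
            new_cost.append(cost[p] + concat_cost(units[p], units[u])
                            + target_cost(t, units[u]))
            new_back.append(back[p] + [u])
        cost, back = new_cost, new_back
    end = min(range(n), key=lambda u: cost[u])
    return [units[u] for u in back[end]]

# Toy example: units and targets are just pitches; the concatenation
# cost penalizes large jumps between consecutive units.
units = [1.0, 2.0, 3.0, 10.0]
seq = select_units([1.0, 2.0, 3.0], units,
                   target_cost=lambda t, u: abs(t - u),
                   concat_cost=lambda a, b: 0.1 * abs(a - b))
print(seq)   # [1.0, 2.0, 3.0]
```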
Enhanced partial tracking using linear prediction
In this paper, we introduce a new partial tracking method suitable for the sinusoidal modeling of mixtures of instrumental sounds with pseudo-stationary frequencies. This method, based on the linear prediction of the frequency evolutions of the partials, enables us to track these partials more accurately at the analysis stage, even in complex sound mixtures. This allows our spectral model to better handle polyphonic sounds.
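The idea can be sketched with the simplest possible predictor: extrapolate a partial's next frequency from its recent track and continue the partial with the spectral peak closest to that prediction. The paper fits proper LP coefficients to each partial's frequency evolution; the fixed linear-trend predictor and all names below are illustrative.

```python
def predict_next(track):
    """Linear-trend prediction f[n] = 2*f[n-1] - f[n-2]. The paper
    instead fits higher-order LP coefficients to each track."""
    if len(track) < 2:
        return track[-1]
    return 2.0 * track[-1] - track[-2]

def continue_partial(track, candidate_peaks, max_dev):
    """Extend a partial with the candidate peak frequency closest to
    the prediction, or return None to end the track."""
    pred = predict_next(track)
    best = min(candidate_peaks, key=lambda f: abs(f - pred), default=None)
    if best is None or abs(best - pred) > max_dev:
        return None
    return best

# A partial gliding upward by 2 Hz per frame, among spurious peaks:
track = [440.0, 442.0, 444.0]
print(continue_partial(track, [300.0, 446.2, 500.0], max_dev=5.0))  # 446.2
```

Because the prediction follows the partial's trend rather than its last value, a gliding partial is not captured by a nearby stationary spurious peak, which is what makes the tracking robust in mixtures.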
Direct estimation of frequency from MDCT-encoded files
The Modified Discrete Cosine Transform (MDCT) is a broadly used transform for audio coding, since it allows an orthogonal time-frequency transform without blocking effects. In this article, we show that the MDCT can also be used as an analysis tool. This is illustrated by extracting the frequency of a pure sine wave with some simple combinations of MDCT coefficients. We studied the performance of this estimation in ideal (noiseless) conditions, as well as the influence of additive noise (white noise / quantization noise). This forms the basis of low-level feature extraction directly in the compressed domain.
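A minimal sketch of the setting, assuming a sine-windowed MDCT: compute one MDCT frame of a pure sine wave and take the centre of the strongest bin as a coarse frequency estimate. The article's contribution is the refinement of such a coarse estimate through closed-form combinations of neighbouring coefficients, which is not reproduced here; all names and frame sizes are illustrative.

```python
import math

def mdct(x, N):
    """MDCT of one 2N-sample frame x, using a sine window."""
    w = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]
    return [sum(w[n] * x[n] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def coarse_freq(x, sr, N):
    """Coarse estimate: centre frequency of the strongest MDCT bin
    (bin k is centred on (k + 0.5) * sr / (2N))."""
    X = mdct(x, N)
    k = max(range(N), key=lambda b: abs(X[b]))
    return (k + 0.5) * sr / (2 * N)

sr, N = 8000, 256
f = 40.5 * sr / (2 * N)          # a tone exactly at a bin centre
x = [math.sin(2.0 * math.pi * f * n / sr) for n in range(2 * N)]
print(coarse_freq(x, sr, N))
```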
On sinusoidal parameter estimation
This paper contains a review of the issues surrounding sinusoidal parameter estimation, which is a vital part of many audio manipulation algorithms. A number of algorithms which use the phase of the Fourier transform for estimation (e.g. [1]) are explored and shown to be identical. Their performance against a classical interpolation estimator [2] and comparison with the Cramér-Rao Bound (CRB) is presented. Component detection is also considered and various methods of improving these algorithms are discussed.
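The family of phase-based estimators under review shares a common core: compare the phase of a Fourier bin across two overlapping frames, and convert the deviation from the expected bin-phase advance into a frequency offset. The sketch below is a generic illustration of that core, not any specific algorithm from the paper; names and frame/hop sizes are arbitrary.

```python
import cmath, math

def bin_phase(x, start, size, k):
    """Phase of DFT bin k over the frame x[start:start+size]."""
    acc = sum(x[start + n] * cmath.exp(-2j * math.pi * k * n / size)
              for n in range(size))
    return cmath.phase(acc)

def phase_freq(x, sr, size=512, hop=128):
    """Refine the peak bin's frequency from the phase advance between
    two frames offset by `hop` samples."""
    half = size // 2
    mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / size)
                    for n in range(size)))
            for k in range(half)]
    k = max(range(half), key=lambda b: mags[b])   # coarse: peak bin
    p0 = bin_phase(x, 0, size, k)
    p1 = bin_phase(x, hop, size, k)
    expected = 2.0 * math.pi * k * hop / size     # advance if exactly on bin k
    # wrap the deviation into (-pi, pi] and convert to Hz
    dev = (p1 - p0 - expected + math.pi) % (2.0 * math.pi) - math.pi
    return (expected + dev) * sr / (2.0 * math.pi * hop)

sr = 8000.0
x = [math.sin(2.0 * math.pi * 443.0 * n / sr) for n in range(640)]
print(phase_freq(x, sr))   # close to 443.0, well inside one bin (15.6 Hz)
```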