Chromax, the Other Side of the Spectral Delay Between Signal Processing and Composition
Spectral delays have long been used to colour and shape the spectral characteristics of sound. Most available software is controlled by drawing an envelope in a window that represents the spectral bins and by setting a maximum delay time. Convenient as it is, such a simplistic approach offers no means for the symbolic manipulation of spectral data that composers and sound designers often require. Chromax proposes an alternative, dynamic parameterization of spectral delays that allows fine and complex compositional manipulations. It implements bin-synchronous spectral processing using the Gen~ technology available in Max 6 [1], and provides algorithms to dynamically specify a filter, a delay and a feedback level for each bin of a processed sound.
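As a rough illustration of the per-bin parameterization described in the abstract, here is a minimal NumPy sketch, not the authors' Gen~ implementation: every bin k gets its own filter gain a[k], integer frame delay d[k] and feedback g[k]. All names are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_delay(x, fs, a, d, g, nfft=1024, hop=256):
    """Per-bin spectral delay: a, g are float arrays and d an int array,
    one entry per FFT bin. Returns the delayed (wet) signal only."""
    _, _, X = stft(x, fs, nperseg=nfft, noverlap=nfft - hop)
    nbins, nframes = X.shape
    bins = np.arange(nbins)
    buf = np.zeros((nbins, nframes + int(d.max()) + 1), dtype=complex)
    Y = np.zeros_like(X)
    for m in range(nframes):
        wet = buf[:, m]                            # material arriving now
        Y[:, m] = wet
        buf[bins, m + d] += a * X[:, m] + g * wet  # filter, delay, feed back
    _, y = istft(Y, fs, nperseg=nfft, noverlap=nfft - hop)
    return y
```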
Analysis/Synthesis Using Time-Varying Windows and Chirped Atoms
A common assumption about audio signals is that they are short-term stationary: their statistical properties change slowly enough to be considered nearly constant over a short interval. However, with a fixed analysis window (which is typical in practice) there is no way to change the analysis parameters over time in order to track the slowly evolving properties of the signal. For example, while a long window may be appropriate for analyzing tonal phenomena, it will smear subsequent note onsets. Furthermore, the audio signal may not be completely stationary over the duration of the analysis window; this is often true of sounds containing glissando, vibrato, and other transient phenomena. In this paper we build upon previous work targeted at non-stationary analysis/synthesis. In particular, we discuss how to simultaneously adapt the window length and the chirp rate of the analysis frame in order to maximally concentrate the spectral energy. This is done by (a) finding the analysis window that leads to the minimum-entropy spectrum and (b) estimating the chirp rate using the distribution derivative method. We also discuss a fast method of analysis/synthesis using the fan-chirp transform and overlap-add. Finally, we analyze several real and synthetic signals and show a qualitative improvement in the spectral energy concentration.
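A hedged sketch of step (a) only: for each frame, pick the window length whose normalized magnitude spectrum has minimum Shannon entropy, a proxy for maximal energy concentration. Chirp-rate estimation via the distribution derivative method is omitted, and the candidate lengths are illustrative.

```python
import numpy as np

def spectral_entropy(frame, nfft=4096):
    """Shannon entropy of the normalized power spectrum of one frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), nfft))
    p = mag**2 / np.sum(mag**2)          # spectrum as a distribution
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_window(x, center, lengths=(512, 1024, 2048, 4096)):
    """Window length around sample `center` giving the most peaked spectrum."""
    scores = {}
    for N in lengths:
        seg = x[max(0, center - N // 2): center + N // 2]
        if len(seg) == N:                # skip windows falling off the signal
            scores[N] = spectral_entropy(seg)
    return min(scores, key=scores.get)
```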
On the Modeling of Sound Textures Based on the STFT Representation
Sound textures are often noisy and chaotic, and the processing of these sounds must be based on the statistics of their time-frequency representation. In order to transform sound textures with existing mechanisms, a statistical model based on the STFT representation is favored. In this article, the relation between the statistics of a sound texture and its time-frequency representation is explored. We propose an algorithm to extract and modify the statistical properties of a sound texture based on its STFT representation. It allows us to extract the statistical model of a sound texture and to resynthesise the texture after modifications have been made. It can also be used to generate new samples of the sound texture from a given sample. Experimental results show that the algorithm is capable of generating high-quality sounds from an extracted model. This result could serve as a basis for transformations like morphing or high-level control of sound textures.
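A much-simplified sketch of the extract/resynthesize loop: describe a texture by per-bin statistics of its log STFT magnitudes, then draw a new STFT with matching marginals and random phases. The paper's actual model is richer (it must capture temporal structure too); everything here is illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def extract_model(x, fs, nfft=1024, hop=256):
    """Per-bin mean and spread of log STFT magnitudes."""
    _, _, X = stft(x, fs, nperseg=nfft, noverlap=nfft - hop)
    logm = np.log(np.abs(X) + 1e-12)
    return logm.mean(axis=1), logm.std(axis=1)

def generate(model, nframes, fs, nfft=1024, hop=256, seed=0):
    """Draw a new texture sample whose per-bin marginals match the model."""
    mu, sigma = model
    rng = np.random.default_rng(seed)
    logm = mu[:, None] + sigma[:, None] * rng.standard_normal((len(mu), nframes))
    phase = rng.uniform(-np.pi, np.pi, logm.shape)
    _, y = istft(np.exp(logm) * np.exp(1j * phase), fs,
                 nperseg=nfft, noverlap=nfft - hop)
    return y
```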
A Streaming Audio Mosaicing Vocoder Implementation
This paper introduces a new extension to the concept of Audio Mosaicing, a process by which a set of unrelated sounds is blended together to form a new audio stream with shared sonic characteristics. The proposed approach is based on the algorithm that underlies the well-known channel vocoder: it splits the input signals into frequency bands, which are processed individually and then recombined to form the output. In a similar manner, our mosaicing scheme first uses filterbanks to decompose the set of input audio segments. It then introduces Dynamic Time Warping to perform the matching process across the filterbank outputs. Following this, the re-synthesis stage includes a bank of Phase Vocoders, one for each frequency band, to facilitate targeted spectral and temporal musical effects prior to recombination. Using multiple filterbanks means that this algorithm lends itself well to parallelisation, and we also show how computational efficiencies are achieved that permit a real-time implementation.
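An illustrative sketch of the matching stage only: per frequency band, compare the target's band envelope against candidate corpus segments with Dynamic Time Warping and keep the closest one. The filterbank decomposition, phase-vocoder resynthesis and parallelisation are omitted, and all names are assumptions rather than the paper's API.

```python
import numpy as np

def dtw_cost(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D envelopes."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def best_segment(target_env, corpus_envs):
    """Index of the corpus segment whose envelope best matches the target."""
    return int(np.argmin([dtw_cost(target_env, e) for e in corpus_envs]))
```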
Navigating in a Space of Synthesized Interaction-Sounds: Rubbing, Scratching and Rolling Sounds
In this paper, we investigate a control strategy for synthesized interaction sounds. The framework of our research is based on the action/object paradigm, which considers that sounds result from an action on an object. This paradigm presumes that there exist sound invariants, i.e. perceptually relevant signal morphologies that carry information about the action or the object. Some of these auditory cues are considered for rubbing, scratching and rolling interactions. A generic sound synthesis model allowing the production of these three types of interaction is detailed, together with a control strategy for this model. The proposed control strategy allows users to navigate continuously in an "action space" and to morph between interactions, e.g. from rubbing to rolling.
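A hedged sketch of the control idea: place the three interactions at the corners of a triangular "action space" and blend synthesis parameters barycentrically, so that moving continuously in the space morphs, for example, from rubbing to rolling. The parameter names and preset values are made up for illustration; they are not the paper's.

```python
import numpy as np

CORNERS = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # rub, scratch, roll
PRESETS = np.array([
    # impact_rate_hz, sharpness, noise_level   (hypothetical controls)
    [200.0, 0.2, 0.9],   # rubbing: dense, soft, noisy
    [ 60.0, 0.8, 0.6],   # scratching: sparser, sharper
    [ 15.0, 1.0, 0.1],   # rolling: sparse discrete impacts
])

def action_params(p):
    """Blend presets with the barycentric weights of point p in the triangle."""
    T = np.column_stack((CORNERS[0] - CORNERS[2], CORNERS[1] - CORNERS[2]))
    w01 = np.linalg.solve(T, np.asarray(p) - CORNERS[2])
    w = np.append(w01, 1.0 - w01.sum())
    return w @ PRESETS
```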
A Modeller-Simulator for Instrumental Playing of Virtual Musical Instruments
This paper presents a musician-oriented modelling and simulation environment for designing physically modelled virtual instruments and interacting with them via a high-performance haptic device. In particular, our system restores the physical coupling between the user and the manipulated virtual instrument, a key factor in the expressive playing of traditional acoustic instruments that is absent from the vast majority of computer-based musical systems. We first analyse the various uses of haptic devices in Computer Music and introduce the technologies involved in our system. We then present the modeller and simulation environments, along with examples of virtual musical instruments created with this new environment.
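A minimal, hypothetical sketch of the kind of coupling restored here: the haptic device position drives a virtual mass through a spring, and the same spring force is returned to the device, closing an energy-exchanging loop. This is generic mass-interaction physics under made-up constants, not the authors' modeller or hardware.

```python
def haptic_step(x_dev, state, dt=1e-4, m=0.01, k=2000.0, c=0.5, k_c=500.0):
    """One explicit-Euler step of a damped virtual mass coupled to the device.

    x_dev  : current haptic device position
    state  : (position, velocity) of the virtual mass
    returns: new state and the reaction force to send to the device
    """
    x, v = state
    f_c = k_c * (x_dev - x)           # coupling spring: device <-> mass
    a = (f_c - k * x - c * v) / m     # mass's own stiffness and damping
    v += a * dt
    x += v * dt
    return (x, v), -f_c
```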
Rumbator: a Flamenco Rumba Cover Version Generator Based on Audio Processing at Note Level
In this article, a scheme for automatically generating polyphonic flamenco rumba versions from monophonic melodies is presented. We first analyse the parameters that define the flamenco rumba, and then propose a method for transforming a generic monophonic audio signal into this style. Our method first transcribes the monophonic audio signal into a symbolic representation; a set of note-level audio transformations based on music theory is then applied to the signal in order to transform it into the polyphonic flamenco rumba style. Some audio examples produced by this transformation software are also provided.
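Purely for illustration, and not the paper's transformation rules: a toy note-level transform that harmonizes each transcribed melody note, given as (onset_beats, duration_beats, midi_pitch), with a triad re-articulated on a rumba-like strumming grid.

```python
TRIAD = (0, 4, 7)               # major triad intervals, in semitones
STRUM_GRID = (0.0, 0.5, 0.75)   # strum onsets within each beat (toy pattern)

def rumbify(notes):
    """Map melody notes to strummed chords; all musical choices are toy ones."""
    out = []
    for onset, dur, pitch in notes:
        beat = int(onset)
        for off in STRUM_GRID:
            if beat + off < onset + dur:   # keep strums inside the note's span
                out += [(beat + off, 0.25, pitch + i) for i in TRIAD]
    return sorted(out)
```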
Controlling a Non Linear Friction Model for Evocative Sound Synthesis Applications
In this paper, a flexible strategy for controlling a synthesis model of sounds produced by nonlinear friction phenomena is proposed for guidance or musical purposes. It makes it possible to synthesize different types of sounds, such as a creaky door, a singing glass or a squeaking wet plate. This approach is based on the action/object paradigm, which leads to a synthesis strategy using classical linear filtering techniques (a source/resonance approach) that admit an efficient implementation. Within this paradigm, a sound can be considered as the result of an action (e.g. impacting, rubbing, ...) on an object (plate, bowl, ...). However, in the case of nonlinear friction phenomena, simulating the physical coupling between the action and the object with a completely decoupled source/resonance model is a real and relevant challenge. To meet this challenge, we propose a synthesis model of the source that is tuned on recorded sounds according to physical and spectral observations. This model can synthesize many types of nonlinear behaviors. A control strategy for the model is then proposed by defining a flexible, physically informed mapping between a descriptor and the nonlinear synthesis behavior. Finally, potential applications to the remediation of motor diseases are presented. For all sections, video and audio materials are available at the following URL: http://www.lma.cnrs-mrs.fr/~kronland/thoretDAFx2013/
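A hedged sketch of the source/resonance split: a crude stick-slip-like source (an impulse train whose rate follows a control descriptor) is filtered by a fixed two-pole resonator standing in for the object. The paper's tuned source model and mapping are richer; the mapping and resonance values below are made up.

```python
import numpy as np
from scipy.signal import lfilter

def friction_sound(descriptor, fs=44100):
    """descriptor: per-sample control in [0, 1], mapped to a slip rate in Hz."""
    rate = 20.0 + 800.0 * np.asarray(descriptor, dtype=float)  # toy mapping
    phase = np.cumsum(rate / fs)
    src = np.diff(np.floor(phase), prepend=0.0)  # one impulse per stick-slip
    f0, q = 1200.0, 50.0                         # object resonance (made up)
    w = 2 * np.pi * f0 / fs
    r = np.exp(-w / (2 * q))
    return lfilter([1 - r], [1, -2 * r * np.cos(w), r * r], src)
```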
TELPC Based Re-Synthesis Method for Isolated Notes of Polyphonic Instrumental Music Recordings
In this paper, we present a flexible analysis/re-synthesis method for smoothly changing the properties of isolated notes in polyphonic instrumental music recordings. The True Envelope Linear Predictive Coding (TELPC) method is employed as the analysis/synthesis model because its accurate spectral envelope estimation preserves the original timbre quality as much as possible. We modify conventional LPC analysis/synthesis processing by using pitch-synchronous analysis frames to avoid the severe magnitude modulation problem. Smaller frames can thus be used to capture more local characteristics of the original signals and further improve the sound quality. Within this framework, one can manipulate a sequence of isolated notes from two commercially available polyphonic instrumental music recordings, and interesting re-synthesized results are achieved.
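For orientation, a hedged sketch of plain frame-based LPC analysis/resynthesis; the paper's True-Envelope estimation and pitch-synchronous framing are not reproduced here. Per frame, an all-pole filter is fitted, the frame is inverse-filtered into a residual, and refiltering resynthesizes it; modifications would be applied in between.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order=20):
    """Autocorrelation-method LPC: returns A(z) = [1, -a1, ..., -ap]."""
    r = np.correlate(frame, frame, 'full')[len(frame) - 1:]
    r[0] += 1e-9                                   # tiny regularization
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))

def analyze_resynth(frame, order=20):
    """Inverse-filter to the residual, then refilter (identity if unmodified)."""
    A = lpc(frame, order)
    residual = lfilter(A, [1.0], frame)   # excitation: modify here if desired
    return lfilter([1.0], A, residual)
```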
Time-Frequency Analysis of Musical Signals using the Phase Coherence
In this paper we propose a technique based on the phase evolution of the Short-Time Fourier Transform (STFT) for increasing the spectral resolution in the time-frequency analysis of a musical signal. It is well known that the phase evolution of the STFT coefficients carries important information about the spectral components of the analysed signal. This property has already been exploited in different ways to improve the accuracy of estimating the frequency of a single component. In this paper we propose a different approach, where all the coefficients of the STFT are used jointly to build a measure of how likely each frequency component is, in terms of its phase coherence evaluated across consecutive analysis windows. In more detail, we construct a phase coherence function which is then integrated with the usual amplitude spectrum to obtain a refined description of the spectral components of an audio signal.
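A hedged sketch of the underlying idea, not the paper's exact coherence function: for each bin, compare the measured frame-to-frame phase increment with the increment a steady sinusoid at that bin's frequency would produce, and weight the magnitude spectrum by how well they agree.

```python
import numpy as np
from scipy.signal import stft

def coherence_weighted_spectrum(x, fs, nfft=2048, hop=512):
    """Spectrogram weighted by frame-to-frame phase coherence per bin."""
    _, _, X = stft(x, fs, nperseg=nfft, noverlap=nfft - hop)
    k = np.arange(X.shape[0])[:, None]
    expected = 2 * np.pi * k * hop / nfft            # nominal phase advance
    dphi = np.angle(X[:, 1:]) - np.angle(X[:, :-1]) - expected
    dev = np.angle(np.exp(1j * dphi))                # wrap to (-pi, pi]
    coherence = 0.5 * (1.0 + np.cos(dev))            # 1 = fully coherent bin
    return np.abs(X[:, 1:]) * coherence
```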