Virtual rooms recreation for Wave Field Synthesis
Advanced multichannel sound systems such as Wave Field Synthesis (WFS) allow spatially wide sound scenes to be recreated. The illusion of a natural, realistic 3D sound scene can be achieved by means of virtual rooms in which the wave field is simulated. This simulated wave field serves as the source of information for convolving WFS sound sources with extrapolated impulse responses of these virtual rooms. To obtain the plane waves needed for auralization, a complete description of the sound field is required, including accurate knowledge of the particle velocity. In this paper, virtual rooms are simulated by means of the Finite-Difference Time-Domain (FDTD) method. This method provides a complete solution for the sound field variables over a wide frequency band and can be used to produce the impulse responses of both pressure and particle velocity for plane wave decomposition prior to auralization. To illustrate its applicability, a set of rooms consisting of a typical auditorium, a cinema and a perfect cube is presented and evaluated.
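The FDTD update at the heart of such a simulation can be sketched in one dimension; this is a minimal illustrative leapfrog scheme, not the authors' 3-D room implementation, and all grid and material values below are assumptions for the demo:

```python
import numpy as np

# Minimal 1-D acoustic FDTD sketch on a staggered grid (illustrative only;
# the paper simulates full 3-D rooms). Pressure p and particle velocity u are
# updated in leapfrog fashion, so both field variables needed for the plane
# wave decomposition are available at the receiver.
c = 343.0           # speed of sound (m/s)
rho = 1.21          # air density (kg/m^3)
dx = 0.05           # spatial step (m)
dt = dx / (2 * c)   # time step; satisfies the CFL stability condition

n = 200
p = np.zeros(n)         # pressure at integer grid points
u = np.zeros(n + 1)     # velocity at staggered (half) grid points
p[n // 2] = 1.0         # impulsive excitation in the middle of the domain

recv = []                                   # "impulse responses" at a receiver
for _ in range(300):
    u[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])   # velocity from grad(p)
    p -= (rho * c**2 * dt / dx) * (u[1:] - u[:-1])    # pressure from div(u)
    recv.append((p[n // 4], u[n // 4]))     # record both p and u
```

Recording pressure and velocity at the same receiver is what makes the later plane wave decomposition possible.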
Time and Frequency Domain Room Compensation applied to Wave Field Synthesis
In sound rendering systems using loudspeakers, the listening room adds echoes that are not accounted for by the reproduction system, deteriorating the rendered audio signal. Wave Field Synthesis (WFS), in particular, is a 3D audio reproduction system that can synthesize a realistic sound field over a wide area using arrays of loudspeakers. This paper proposes a room compensation approach based on a multichannel bank of inverse filters calculated to compensate the room effects at selected points within the listening area. Both time-domain and frequency-domain algorithms are proposed to compute the bank of inverse filters accurately. A comparative study of these algorithms, based on laboratory experiments, is presented.
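A single-channel sketch of the frequency-domain branch of such an inverse-filter design, using standard Tikhonov-regularized inversion with a modelling delay (the paper's multichannel, multi-point formulation is omitted; the toy room response, regularization constant and delay are assumptions):

```python
import numpy as np

# A toy room impulse response h is inverted so that h * g approximates a
# pure delay. Regularization (beta) keeps the inverse bounded at frequencies
# where the room response is weak.
h = np.zeros(64)
h[0], h[10], h[25] = 1.0, 0.5, 0.25      # toy room impulse response

nfft = 256
beta = 1e-3                              # regularization constant
delay = 32                               # modelling delay in samples
H = np.fft.rfft(h, nfft)
D = np.exp(-2j * np.pi * np.arange(len(H)) * delay / nfft)  # target: pure delay
G = np.conj(H) * D / (np.abs(H) ** 2 + beta)                # regularized inverse
g = np.fft.irfft(G, nfft)                # inverse (compensation) filter

eq = np.convolve(h, g)                   # equalized overall response ~ delayed delta
```

The modelling delay allows the inverse filter to have a (slightly) non-causal part, which is essential when the room response is not minimum phase.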
Source separation for microphone arrays using multichannel conjugate gradient techniques
This paper proposes a new scheme to improve source separation for microphone array applications such as WFS-based teleconference systems. A multichannel, sub-band approach that reduces computational complexity is presented. In addition, instead of the LMS adaptive algorithm, a new system based on hybrid Conjugate Gradient-nLMS techniques is developed to shorten the convergence time. The adaptive algorithm is controlled by a voice activity detector block that detects double-talk situations and freezes the adaptation process, avoiding sound artifacts that could significantly degrade the recovered signals and greatly impact the quality of the full system.
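The nLMS part of such a scheme, with VAD-controlled freezing of the adaptation, can be sketched as follows; the conjugate-gradient acceleration and the sub-band decomposition are omitted, and the toy system and parameter values are assumptions:

```python
import numpy as np

# Sketch of an nLMS adaptive filter whose adaptation can be frozen by a
# voice-activity / double-talk flag, as in the control logic described above.
def nlms(x, d, order=16, mu=0.5, eps=1e-8, freeze=None):
    """Identify the path from x to d; skip updates where freeze[n] is True."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        xv = x[n - order + 1:n + 1][::-1]        # most recent samples first
        e[n] = d[n] - w @ xv
        if freeze is None or not freeze[n]:      # double talk -> no update
            w += mu * e[n] * xv / (eps + xv @ xv)
    return w, e

rng = np.random.default_rng(1)
h = np.array([1.0, -0.5, 0.25])                  # unknown toy echo path
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d)                                # w converges towards h
```

Freezing the update during double talk prevents the near-end signal from being interpreted as error and corrupting the filter coefficients.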
Alternative analysis-resynthesis approaches for timescale, frequency and other transformations of musical signals
This article presents new spectral analysis-synthesis approaches to musical signal transformation. The analysis methods involve a high-quality frequency estimation technique, the Instantaneous Frequency Distribution (IFD), together with partial tracking. We discuss the theory behind the IFD and compare it to other existing methods. The partial tracking analysis employed in this process is explained in full. This is followed by a look at the three resynthesis methods proposed in this work, based on different approaches to additive synthesis. A number of musical signal transformations are proposed to take advantage of these analysis-synthesis techniques. Performance details and specific aspects of the implementation are discussed, complemented by results of these methods in time-stretching audio signals, where they are shown to outperform many currently available techniques.
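The basic idea behind instantaneous-frequency estimators of this kind can be illustrated with the phase advance between two DFT frames offset by one sample; this is a generic sketch, not the paper's exact IFD formulation, and the signal and parameters are assumptions:

```python
import numpy as np

# Instantaneous frequency at the spectral peak, estimated from the phase
# difference of two windowed DFT frames one sample apart. This resolves the
# frequency far more finely than the DFT bin spacing.
fs = 8000.0
f0 = 1234.5                         # true frequency, off the bin grid
n = np.arange(1025)
x = np.sin(2 * np.pi * f0 * n / fs)

N = 1024
win = np.hanning(N)
X0 = np.fft.rfft(win * x[:N])       # frame starting at sample 0
X1 = np.fft.rfft(win * x[1:N + 1])  # frame starting one sample later

k = np.argmax(np.abs(X0))                   # bin of the spectral peak
dphi = np.angle(X1[k] * np.conj(X0[k]))     # phase advance per sample
f_est = dphi * fs / (2 * np.pi)             # instantaneous frequency (Hz)
```

With a one-sample hop the phase advance stays below pi for any frequency under Nyquist, so no phase unwrapping is needed.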
A Generalized Polynomial and Sinusoidal Model for Partial Tracking and Time Stretching
In this article, we introduce a new generalized model, based on polynomials and sinusoids, for partial tracking and time stretching. Most current partial tracking algorithms follow the McAulay-Quatieri approach and use polynomials for the phase, frequency, and amplitude tracks. Some sinusoidal approaches have also been shown to work under certain conditions. We present a unified model combining both approaches, which allows more flexible partial tracking and time stretching.
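The polynomial side of such a model can be illustrated with the classic McAulay-Quatieri cubic phase interpolation, which fits a cubic phase track matching both phase and frequency at two frame boundaries; the function and parameter names are assumptions, not the authors' code:

```python
import numpy as np

# Cubic phase interpolation between two frame boundaries: theta(t) is a cubic
# polynomial with theta(0)=phi0, theta'(0)=w0, theta(T)=phi1 (mod 2*pi),
# theta'(T)=w1, with the phase-unwrapping multiple M chosen for maximum
# smoothness (the standard McAulay-Quatieri criterion).
def cubic_phase(phi0, w0, phi1, w1, T, t):
    """Evaluate the cubic phase track on [0, T] at time(s) t (rad)."""
    # integer M closest to the continuous optimum
    M = round(((phi0 + w0 * T - phi1) + (w1 - w0) * T / 2) / (2 * np.pi))
    d = phi1 + 2 * np.pi * M - phi0 - w0 * T
    a = 3 * d / T**2 - (w1 - w0) / T          # quadratic coefficient
    b = -2 * d / T**3 + (w1 - w0) / T**2      # cubic coefficient
    return phi0 + w0 * t + a * t**2 + b * t**3

# example: a partial gliding from 100 Hz to 110 Hz over a 10 ms frame
t = np.linspace(0.0, 0.01, 100)
phase = cubic_phase(0.5, 2 * np.pi * 100, 1.2, 2 * np.pi * 110, 0.01, t)
```

Time stretching then amounts to re-evaluating such tracks over a scaled time axis while keeping the frequency trajectory intact.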
Efficient spectral envelope estimation and its application to pitch shifting and envelope preservation
This article addresses the estimation of the spectral envelope of sound signals. The intended application of the developed algorithm is pitch shifting with preservation of the spectral envelope in the phase vocoder. As a first step, the existing envelope estimation algorithms are reviewed and their specific properties discussed. The cepstrum-based iterative true envelope estimator is selected as the most promising algorithm. By means of controlled sub-sampling of the log-amplitude spectrum and a simple step-size control for the iterative algorithm, the run time can be decreased by a factor of 2.5-11. As a remedy for the ringing effects in the spectral envelope caused by the rectangular filter used for spectral smoothing, we propose using a Hamming window as the smoothing filter. The resulting implementation has slightly higher computational complexity than the standard LPC algorithm but offers significantly improved control over the envelope characteristics. The application of the true envelope estimator to pitch shifting is then investigated: the main problems of pitch shifting with envelope preservation in a phase vocoder are identified, and a simple yet efficient remedy is proposed.
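A minimal sketch of the iterative true envelope estimator follows: the target log spectrum is repeatedly replaced by the maximum of itself and the current smoothed envelope, then re-smoothed, so the envelope converges onto the spectral peaks instead of averaging through them. A plain rectangular cepstral lifter is used here; the paper's step-size control, spectrum sub-sampling and Hamming smoothing window are omitted, and all parameter values are assumptions:

```python
import numpy as np

# Iterative "true envelope" cepstral estimation (basic form, no acceleration).
def true_envelope(mag, order, n_iter=300):
    """mag: full-length magnitude spectrum; order: cepstral cutoff."""
    N = len(mag)
    target = np.log(np.maximum(mag, 1e-12))
    env = np.full(N, -np.inf)
    for _ in range(n_iter):
        target = np.maximum(target, env)     # keep the peaks, lift the valleys
        c = np.fft.ifft(target).real         # real cepstrum of the target
        c[order:N - order] = 0.0             # low-quefrency liftering
        env = np.fft.fft(c).real             # smoothed log envelope
    return env

# toy harmonic spectrum: the envelope should end up touching the peaks
N = 512
mag = np.full(N, 1e-3)
for k in (32, 64, 96, 128):
    mag[k] = mag[N - k] = 1.0                # harmonic peaks (mirrored bins)
env = true_envelope(mag, order=16)
```

The slow convergence of this basic loop is exactly what motivates the step-size control and sub-sampling proposed in the paper.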
Sound-System Design for a Professional Full-Flight Simulator
In this paper, we present a sound system to be integrated into an accredited, realistic full-flight simulator used for the training of airline pilots. We discuss the design and implementation of the corresponding real-time signal-processing software, which provides three-dimensional audio reproduction of the acoustic events on a flight deck, with emphasis on an aircraft of a specific type. We address issues of suitable data acquisition methods and, most importantly, of functional signal analysis and synthesis techniques.
A New Functional Framework for a Sound System for Realtime Flight Simulation
We present a new sound framework and concept for realistic flight simulation. Since the system deals with a highly complex network of mechanical systems acting as physical sound sources, the main focus is on a fully modular, extensible and scalable design. The prototype we developed is part of a fully functional Full Flight Simulator for pilot training.
A Framework for Sonification of Vicon Motion Capture Data
This paper describes experiments in sonifying data obtained with the VICON motion capture system. The main goal is to build the infrastructure needed to map motion parameters of the human body to sound. Three software frameworks were used for sonification: Marsyas, traditionally used for music information retrieval with audio analysis and synthesis; ChucK, an on-the-fly real-time synthesis language; and the Synthesis Toolkit (STK), a toolkit for sound synthesis that includes many physical models of instruments and sounds. An interesting possibility is using motion capture data to control parameters of digital audio effects. To experiment with the system, different types of motion data were collected, including traditional performance on musical instruments, acted-out emotions, and data from individuals with impaired sensorimotor coordination. Rhythmic motion (e.g. walking), although complex, can be highly periodic and maps quite naturally to sound. We hope this work will eventually assist patients in identifying and correcting motor coordination problems through sound.
Implementation of Arbitrary Linear Sound Synthesis Algorithms by Digital Wave Guide Structures
The Digital Wave Guide (DWG) method is one of the most popular techniques for digital sound synthesis via physical modeling. Because its structure inherently solves the wave equation, the DWG method provides a highly efficient algorithm for typical physical modeling problems. In this paper it is shown that this efficient structure can be used for any existing linear sound synthesis algorithm. By consistently describing discrete implementations with State Space Structures (SSSs), suitable linear state-space transformations can be applied to obtain the typical DWG structure from any given system. The proposed approach is demonstrated with two case studies, in which a modal solution obtained with the Functional Transformation Method (FTM) is transformed into a DWG implementation. In the first example, the solution of the lossless wave equation is transformed into a DWG structure, yielding an arbitrary-size fractional delay filter. In the second example, a more elaborate model with dispersion and damping terms is transformed, resulting in a DWG model with parameter morphing features.
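The enabling property can be illustrated in miniature: a linear synthesis algorithm in state-space form (A, B, C) can be mapped by any invertible similarity transformation T to an equivalent structure (T A T^-1, T B, C T^-1) with identical input-output behaviour, which is the mechanism that allows recasting, e.g., an FTM modal solution as a DWG structure. The particular system and T below are arbitrary assumptions for the demo:

```python
import numpy as np

# A stable 2nd-order resonator in state-space form and a similarity-
# transformed copy of it; both produce the same impulse response.
A = np.array([[0.0, 1.0], [-0.81, 1.6]])     # poles at 0.8 +/- 0.41j (stable)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

T = np.array([[2.0, 1.0], [0.5, 3.0]])       # any invertible transformation
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti       # transformed realization

def simulate(A, B, C, u):
    """Run x[n+1] = A x[n] + B u[n], y[n] = C x[n] from zero initial state."""
    x = np.zeros((A.shape[0], 1))
    y = []
    for un in u:
        y.append((C @ x).item())
        x = A @ x + B * un
    return np.array(y)

u = np.zeros(100)
u[0] = 1.0                                   # impulse input
y1 = simulate(A, B, C, u)
y2 = simulate(A2, B2, C2, u)                 # identical output
```

Choosing T so that the transformed realization takes the delay-line form of a DWG is, in essence, what the paper's construction does.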