Abstracts of the Keynotes
Tuesday 09/19/2006
Sparse Time-Frequency Transforms and Applications
by Bruno Torrésani (LATP, Université de Provence, Marseille)
[slides, 1.3 MB] [.zip, 22.4 MB]
Time-frequency representations often have the property of "sparsifying" signal representations. This is particularly true for the Gabor and wavelet representations of audio signals. We shall present a number of adaptive methods for finding signal representations that are as sparse as possible. We shall also describe some time-frequency domain models that help improve the sparsity or the interpretability of time-frequency representations. Applications to audio coding and denoising will be presented.
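The "sparsifying" property can be illustrated with a minimal sketch: a Hann-windowed short-time Fourier transform (a basic Gabor analysis) concentrates a harmonic signal's energy in few coefficients, whereas white noise stays dense. The frame and window parameters below are illustrative, not taken from the talk.

```python
import numpy as np

def frame_stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform via Hann-windowed frames (basic Gabor analysis)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def energy_sparsity(coeffs, fraction=0.99):
    """Fraction of time-frequency coefficients needed to capture `fraction` of the energy
    (smaller = sparser representation)."""
    e = np.sort(np.abs(coeffs).ravel() ** 2)[::-1]
    cum = np.cumsum(e) / e.sum()
    return (int(np.searchsorted(cum, fraction)) + 1) / e.size

rng = np.random.default_rng(0)
t = np.arange(32768) / 44100.0
tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)  # harmonic, audio-like
noise = rng.standard_normal(len(t))                                     # dense spectrum

print(energy_sparsity(frame_stft(tone)))   # small fraction: sparse
print(energy_sparsity(frame_stft(noise)))  # large fraction: dense
```

Adaptive sparse methods of the kind the talk surveys go further, selecting the dictionary (window sizes, wavelet scales) that minimizes such a measure per signal.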
Digital Audio Synthesis and Effects based on Physical Models
Wednesday 09/20/2006
by Julius Smith (CCRMA, Stanford University, Stanford)
This presentation will review the evolution of a number of current methods for digital sound synthesis and audio effects that grew in some way out of physical models for objects and phenomena in the real world. Examples in the synthesis category include model-based synthesis of voice, wind, and stringed musical instruments. Examples in the effects category include basic echo effects, comb filters, and the Leslie effect, among others. In both categories lie models based on digitizing classic analog circuits, such as the Moog Voltage-Controlled Oscillator (VCO).
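As a concrete example of the echo/comb-filter family of effects mentioned above, here is a minimal feedback comb filter sketch (the delay and gain values are illustrative; NumPy assumed):

```python
import numpy as np

def comb_echo(x, delay, g=0.5):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay].
    Physically, this models a sound bouncing off a reflector `delay`
    samples away, attenuated by `g` on each round trip."""
    y = np.asarray(x, dtype=float).copy()
    for n in range(delay, len(y)):
        y[n] += g * y[n - delay]
    return y

# An impulse produces a decaying train of echoes spaced `delay` samples apart.
impulse = np.zeros(10)
impulse[0] = 1.0
echoes = comb_echo(impulse, delay=3, g=0.5)
print(echoes)  # echoes at n = 0, 3, 6, 9 with amplitudes 1, 0.5, 0.25, 0.125
```

In the frequency domain the same structure produces the comb-shaped magnitude response that gives the filter its name, with peaks at multiples of sample_rate/delay.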
A Meta-Analysis of Acoustic Correlates of Timbre Dimensions
by Stephen McAdams (CIRMMT, McGill University, Montreal)
[in collaboration with Bruno Giordano (CIRMMT), Patrick Susini, Geoffroy Peeters (IRCAM), and Vincent Rioux (Confluences, Maison des Arts Urbains)]
The problem of generalizing timbre descriptors across various sound sets is of great import in the field of digital audio. A meta-analysis of ten published timbre spaces was conducted using multidimensional scaling analyses of dissimilarity ratings on a set of 128 recorded, resynthesized, or synthesized musical instrument tones. A set of signal descriptors derived from the tones was developed, including parameters derived from the long-term amplitude spectrum, from the waveform and amplitude envelope, and from variations in the short-term amplitude spectrum. Relations among all descriptors across the sounds were used to determine families of related descriptors and to reduce the number of descriptors tested as predictors. Subsequently, multiple correlations between descriptors and the positions of timbres along perceptual dimensions determined by the multidimensional scaling analyses were computed. The aims were:
- 1) to select the subset of acoustic descriptors that provided the most generalizable prediction of timbral relations and
- 2) to provide a signal-based model of timbral description for musical instrument tones.

Four primary classes of descriptors emerge: spectral centroid, spectral spread, spectral deviation, and temporal envelope (effective duration/attack time). This approach provides a generalizable set of descriptors for musical timbre that can be used in classifiers and search engines.
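Two of the descriptor classes named in the abstract, spectral centroid and spectral spread, can be sketched as follows; this is a generic textbook formulation over a long-term amplitude spectrum, not necessarily the exact definitions used in the study:

```python
import numpy as np

def spectral_centroid_spread(x, sr):
    """Spectral centroid (amplitude-weighted mean frequency, in Hz) and
    spectral spread (amplitude-weighted standard deviation around the
    centroid) of the signal's long-term amplitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * mag) / np.sum(mag))
    return centroid, spread

# A pure 1000 Hz tone: centroid near 1000 Hz, spread near zero.
sr = 44100
t = np.arange(sr) / sr
c, s = spectral_centroid_spread(np.sin(2 * np.pi * 1000 * t), sr)
print(c, s)
```

A brighter sound (more high-frequency energy) raises the centroid, which is why this descriptor correlates so consistently with perceived brightness across the timbre spaces analyzed.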