Corpus-Based Sound Synthesis Survey
This page offers an ongoing bibliographic survey of research work and approaches to corpus-based concatenative sound synthesis (CBCS).
It is the continuously updated online version of this article published in the JNMR special issue on Audio Mosaicing (guest editor Adam T. Lindsay). It tracks the many different approaches to CBCS throughout the history of electronic music, starting in the 1950s, even if they weren't known as such at the time, up to the recent surge of contemporary automatic methods.
If you know of any software, research project, or musical approach that is somehow based on a database of sounds, please let me know by mail to schwarz (at) ircam (dot) fr. Thanks to the current contributors: Giuseppe Torre, Abram Hindle, Nick Collins.
Here's the bibliography by year:
- A Joyful Ode to Automatic Orchestration, François Pachet. --- SONY CSL's flow machines project to reorchestrate Beethoven's Freude schöner Götterfunken, in seven styles, one of them synthesised by concatenative synthesis of Bossa-Nova recordings.
- An Evaluation Framework and Case Study for Rhythmic Concatenative Synthesis, Cárthach Ó Nuanáin, Perfecto Herrera, Sergi Jordà. --- application of descriptor analysis and a CataRT-style 2D display for drum-loop construction and manipulation in Ableton Live.
- Coleman, Graham. "Descriptor Control of Sound Transformations and Mosaicing Synthesis.", PhD thesis, UPF, Barcelona, 2016.
- Bilous, James Eric. "Concatenative Synthesis for Novel Timbral Creation." (2016).
- Ghisi, Daniele, and Carlos Agon. Real-Time Corpus-Based Concatenative Synthesis for Symbolic Notation, TENOR 2016.
- Evaluation of a Sketching Interface to Control a Concatenative Synthesiser. Augoustinos Tsiros, Grégory Leplâtre. --- continuation of the beautiful work on transforming graphical texture, shape, and colour to sound via CBCS for sound design.
- Introducing CatOracle: Corpus-based concatenative improvisation with the Audio Oracle algorithm, Aaron Einbond, Diemo Schwarz, Riccardo Borghesi, Norbert Schnell.
- Concatenative Synthesis via Chord-Based Segmentation For "An Experiment with Time", Daniele Ghisi, Mattia G. Bergomi.
- Automatic Sample Layout by Kyle McDonald, using a spectral representation and t-SNE for CataRT-style browsing of drum samples. See also the Facebook discussion.
- Listening to features. Manuel Moussallam, Antoine Liutkus, Laurent Daudet. arXiv 2015. --- crazy but fun idea: can songs be reconstructed from typical MIR features by concatenative synthesis from an instrument sample corpus?
- A journey towards the development of a sound-led interdisciplinary performance: Agorá. Giuseppe Torre. International Journal of Performance Arts and Digital Media, 2015.
- Handsy Mapper by Johnny Venom: controlling CataRT from Kinect-tracked hand positions.
- Lindborg, PerMagnus, and Joyce Beetuan Koh. "About When We Collide: A Generative and Collaborative Sound Installation." Sound and Interactivity 20 (2015): 104.
- Interacting with a Corpus of Sounds. Diemo Schwarz. eContact! online journal of the Canadian Electroacoustic Community, vol. 2, n° 16.
- Composing Music by Selection: Content-Based Algorithmic-Assisted Audio Composition. Bernardes, G. PhD dissertation on earGram, University of Porto.
- Garcia, Jérémie. "Supporting music composition with interactive paper." PhD diss., Université Paris Sud-Paris XI, 2014. --- beautiful use-case of recording 2D descriptor trajectories during composition with CataRT on interactive paper to be conserved and annotated.
- Rich Contacts: Corpus-Based Convolution of Contact Interaction Sound for Enhanced Musical Expression. Diemo Schwarz (Ircam), Pierre Alexandre Tremblay, Alex Harker (CERENEM). -- articulating grains from a corpus with intuitive contact interaction: hit, scratch, and strum a corpus of sound
- AudioQuilt: 2D Arrangements of Audio Samples using Metric Learning and Kernelized Sorting. Ohad Fried, Zeyu Jin, Reid Oda, Adam Finkelstein (Princeton University). -- sample-browsing by descriptors, nifty warping of user-specified similarities to 2D spaces
- Evaluating the Perceived Similarity Between Audio-Visual Features Using Corpus-Based Concatenative Synthesis. Augoustinos Tsiros (Edinburgh Napier University). -- very effective sonification of graphical textural properties
- Tangible Scores: Shaping the Inherent Instrument Score. Enrique Tomás, Martin Kaltenbrunner (Kunstuniversität Linz). -- triggering grains from interaction sounds on a textured surface.
- Davies, Matthew EP, Adam M. Stark, Fabien Gouyon, and Masataka Goto. "Improvasher: A Real-Time Mashup System for Live Musical Input." In NIME, pp. 541-544. 2014.
- Gossmann, Joachim, and Max Neupert. "Musical Interface to Audiovisual Corpora of Arbitrary Instruments." In NIME, pp. 151-154. 2014.
- Query-by-multiple-examples: content-based search in computer-assisted sound-based musical composition. Tiago Fernandes Tavares, Jônatas Manzolli.
- Fine-tuned control of concatenative synthesis with CataRT using the Bach Library for Max. Aaron Einbond, Christopher Trapani, Andrea Agostini, Daniele Ghisi, Diemo Schwarz.
- Considering Roughness to Describe and Generate Vertical Musical Structure in Content-Based Algorithmic-Assisted Audio Composition. Gilberto Bernardes, Matthew E. P. Davies, Carlos Guedes, Bruce Pennycook. --- earGram's high-level control using loudness and roughness
- A remix instrument based on fragment feature-analysis. M. Neupert, J. Gossmann. IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2013.
- Mostitch, Abram Hindle -- audio collager via parameter lookup of FM synthesis, with some help from Marsyas. Check out Mostitch vs. Back Street Boys.
- Composing Morphology: Concatenative Synthesis as an Intuitive Medium for Prescribing Sound in Time. Benjamin Hackbarth, Norbert Schnell, Philippe Esling, Diemo Schwarz. Contemporary Music Review. April 2013, vol. 32, n° 1, p. 49-59.
- Dirty Tangible Interfaces: Expressive Control of Computers with True Grit. Matthieu Savary, Diemo Schwarz, Denis Pellerin, Florence Massin, Christian Jacquemin, Roland Cahen. CHI 2013. Paris, p. 2991-2994.
- Keynote address Interacting with a Corpus of Sounds. Diemo Schwarz. SI13: NTU/ADM Symposium on Sound and Interactivity. Singapore.
- Costello, Edward, Victor Lazzarini, and Joseph Timoney. "A streaming audio mosaicing vocoder implementation." (2013): 194-201. --- audio mosaicing per spectral band(!)
- Constellation: A Tool for Creative Dialog Between Audience and Composer. Akito van Troyer. To our knowledge, Constellation is the first web browser-based application to use real-time concatenative synthesis as a running synthesis engine for asynchronous, collaborative, and creative music composition.
- Retexture — Towards Interactive Environmental Sound Texture Synthesis through Inversion of Annotations. Diemo Schwarz
- Design of a Customizable Timbre Space Synthesizer. Daniel Gómez, Rafael Vega and Carlos Arce-Lopera.
- A. Refsum Jensenius and V. Johnson, "Performing the Electric Violin in a Sonic Space," in Computer Music Journal, vol. 36, no. 4, pp. 28-39, Dec. 2012.
- Zappi, Victor, Dario Mazzanti, Andrea Brogni, and Darwin Caldwell. "Concatenative Synthesis Unit Navigation and Dynamic Rearrangement in vrGrains." (2012). --- 3D corpus browsing installation
- Norowi, Noris Mohd. "An Artificial Intelligence Approach to Concatenative Sound Synthesis." PhD diss., University of Plymouth, 2013.
- Murray-Rust, Dave, and Rocio von Jungenfeld. "Thawing colours: dangling from the fuzzy end of interfaces." In Fourth International Workshop on Physicality, p. 32. 2012.
- Peláez, Pablo Molina. "DJ Support Agent Based on Audio Mosaicing: Enhancing DJ-Skills for People with or without them." PhD diss., Universitat Pompeu Fabra, 2010.
- EarGram: an Application for Interactive Exploration of Large Databases of Audio Snippets for Creative Purposes. Bernardes, G., Guedes, C., and Pennycook, B.
- EarGram: An application for interactive exploration of concatenative sound synthesis in Pure Data. Bernardes, G., Guedes, C., & Pennycook, B. --- Post-proceeding publication in LNCS: M. Aramaki, M. Barthet, R. Kronland-Martinet, & S. Ystad (Eds.), From sounds to music and emotions (pp. 110-129). Berlin-Heidelberg: Springer-Verlag.
- The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis. Diemo Schwarz.
- DIRTI — Dirty Tangible Interfaces. Matthieu Savary, Diemo Schwarz, Denis Pellerin.
- Castet, Julien. "Performing experimental music by physical simulation." In NIME. 2012.
- Rébillat, Marc. "Vibrations de plaques multi-excitateurs de grandes dimensions pour la création d’environnements virtuels audio-visuels." PhD diss., École Polytechnique, 2012.
- Browsing Music and Sound Using Gestures in a Self-Organized 3D Space. Gabrielle Odowichuk and George Tzanetakis. --- approach very similar to CataRT in 3D, but omitting to give a reference.
- Precise Pitch Control in Real Time Corpus-Based Concatenative Synthesis. Aaron Einbond, Christopher Trapani and Diemo Schwarz
- Navigating Variation: Composing for Audio Mosaicing. Diemo Schwarz and Benjamin Hackbarth.
- Corpus-Based Concatenative Synthesis: Perceptual Descriptors as an Approach to Composing and Analyzing Timbre. Aaron Einbond, Diemo Schwarz. Crossing the Corpus Callosum II: Neuroscience, Healing & Music. Cambridge, MA.
- Distance Mapping for Corpus-Based Concatenative Synthesis. Diemo Schwarz. Sound and Music Computing (SMC), Padova.
- Nuvolet: 3D gesture-driven collaborative audio mosaicing, Josep M Comajuncosas, Enric Guaus, Alex Barrachina, and John O'Connell. NIME 2011. --- a 3D positional interface to CataRT using the Kinect for collaborative (well, not really) exploration of sound clouds and interactive target-based audio mosaicing, for a piece by composer Ariadna Alsina. The article adds an assessment of the defining features of the new interface in comparison to other free-space interfaces. As the authors correctly say: "the challenges inherent in the design of open-air interfaces pose some unavoidable issues which deserve further research."
- Corpus-Based Improvisation. Diemo Schwarz, Victoria Johnson. (Re)thinking Improvisation. Malmö.
- La collection numérique comme modèle pour la synthèse sonore, la composition et l’exploration multimédia interactives. Alain Bonardi, Francis Rousseaux, Diemo Schwarz, Benjamin Roadley. Revue Musimédiane. Juin 2011, n° 6
- Martin, Augusto Gabriel Vigliensoni. "Touchless gestural control of concatenative sound synthesis." PhD diss., McGill University, 2011.
- Aperghis, Georges, and Grégory Beller. "Contrôle gestuel de la synthèse concaténative en temps réel dans Luna Park." Rapport de recherche et développement (2011).
- Molina, Pablo, Martín Haro, and Sergi Jordá. "BeatJockey: A New Tool for Enhancing DJ Skills." In NIME, pp. 288-291. 2011.
- Prioritizing Audio Features Selection Using Analysis Hierarchy Process As A Mean To Extend User Control in Concatenative Sound Synthesis. ICMC 2011, Noris Mohd Norowi, Eduardo Reck Miranda. --- Introduces the AHP (Analytic Hierarchy Process, from decision sciences) method to invert user priority judgments in order to derive the selection weights for CBCS, but otherwise misses its point, since that does not actually prioritise the selection. See our SMC 2011 article for an elegant way to integrate actual step-by-step priorities with the geometric unit search.
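The AHP step described above can be sketched as follows. This is an illustrative pure-Python version, not the authors' implementation; the descriptor names and pairwise judgments are invented. It recovers normalised descriptor weights as the principal eigenvector of a reciprocal pairwise comparison matrix, found by power iteration:

```python
# AHP sketch: turn pairwise importance judgments between descriptors
# into normalised selection weights via the principal eigenvector of
# the comparison matrix. Descriptor names and values are illustrative.

def ahp_weights(comparisons, names, iters=100):
    """comparisons[(a, b)] = how many times more important a is than b."""
    n = len(names)
    # Build the reciprocal pairwise comparison matrix (diagonal = 1).
    M = [[1.0] * n for _ in range(n)]
    for (a, b), v in comparisons.items():
        i, j = names.index(a), names.index(b)
        M[i][j] = v
        M[j][i] = 1.0 / v
    # Power iteration for the principal eigenvector, renormalised
    # to sum to 1 at each step.
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return dict(zip(names, w))

names = ["pitch", "loudness", "brightness"]
weights = ahp_weights({("pitch", "loudness"): 3.0,
                       ("pitch", "brightness"): 5.0,
                       ("loudness", "brightness"): 2.0}, names)
```

With these example judgments, pitch receives the largest weight and brightness the smallest; the weights could then serve as per-descriptor scaling factors in the unit-selection distance.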
- CBPSC: Corpus-Based Processing for SuperCollider. ICMC 2011, Thomas Stoll. --- Implementation of CataRT-based CBP (see ICMC 2009 below) in SuperCollider.
- Yee-King, Matthew John. "An autonomous timbre matching improviser." In ICMC. 2011.
This fruitful year saw many applications of CBCS and the first concatenative iPhone App!
- Concat by researcher-musician Nick Collins at Sussex University (not musician-researcher Nicolas Collins) is an application for iPhone that does live audio-controlled mosaicing on a corpus of self-recorded sounds. The matching is based on four timbral features, and you can choose the size of the matched segment played back. As Nick puts it: "Found myself doing various bits of iphone hacking in the last year, latest app is a realtime concatenative synthesizer (strangely called Concat)."
- A video based analysis system for realtime control of concatenative sound synthesis and spatialisation. Jensenius, Alexander Refsum; Johnson, Victoria. Proceedings of the second Norwegian Artificial Intelligence Symposium : 2010, 85-88.
- Surfing the Waves: Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface. NIME 2010, Pierre Alexandre Tremblay, Diemo Schwarz.  --- Controlling CataRT live by an electric bass.
- Mechanisms for Controlling Complex Sound Sources: Applications to Guitar Feedback Control. NIME 2010, Aengus Martin, Sam Ferguson, and Kirsty Beilharz.  --- Using CBCS for inversion of a complex synthesis model, here a feedbacking robotic guitar.
- Grainstick: A Collaborative, Interactive Sound Installation. ICMC 2010, Grace Leslie et al. --- Uses a tiny bit of CataRT for a sound installation composed by Pierre Jodlowski.
- Spatializing Timbre with Corpus-Based Concatenative Synthesis. Einbond Aaron, Schwarz Diemo, ICMC 2010 --- Using CataRT and the FTM library for Max/MSP we develop a technique for the organization of a navigation space for synthesis based on user-defined spatial zones and informed by the perceptual concept of timbre space. The goal is to place the listener in the midst of a virtual space of sounds organized by their descriptor data, simulating an immersion in timbre space.
- A Modular Sound Descriptor Analysis Framework for Relaxed-real-time Applications. ICMC 2010, Schwarz Diemo, Schnell Norbert --- Describes the future CataRT import architecture.
- A Timbre Analysis and Classification Toolkit for Pure Data. ICMC 2010, William Brent. --- Publication of TimbreID PD externals mentioned 2009.
- A Database System for Organizing Musique Concrete. ICMC 2010, Christopher Bailey. --- CBCS in FileMaker on hand-assigned descriptors!
- Abstraction in a Unit-based Audio Processing System. ICMC 2010, Tom Stoll. --- Requirements for Tom's version of CataRT
- (Ab)using MIR to create music: corpus-based synthesis and audio mosaicing SMC 2010 tutorial, Diemo Schwarz.
- Descriptor-based Sound Texture Sampling. SMC 2010, Diemo Schwarz and Norbert Schnell. --- Application of CataRT to environmental sound texture synthesis.
- Timbre Remapping through a Regression-Tree Technique. SMC 2010, Dan Stowell, MD Plumbley. --- Solves the problem of mapping one (controlling) corpus to a source corpus, raised by Tremblay and Schwarz.
- Augmenting Sound Mosaicing with Descriptor-Driven Transformation. DAFx 2010, Graham Coleman, Esteban Maestre, Jordi Bonada. --- This scheme has also been implemented as one of the hacks of the Barcelona Music Hack Day: Remix Sound Harmonizer/MakeItRick
- Corpus-Based Transcription as an Approach to the Compositional Control of Timbre, ICMC 2009, Aaron Einbond, Diemo Schwarz, Jean Bresson. --- Using CataRT and related tools in the FTM and Gabor libraries for Max/MSP we describe a technique for real-time analysis of a live signal to pilot corpus-based synthesis, along with examples of compositional realizations in works for instruments, electronics, and sound installation. To extend this technique to computer-assisted composition for acoustic instruments, we develop tools using the Sound Description Interchange Format (SDIF) to export sonic descriptors to OpenMusic where they may be further manipulated and transcribed into an instrumental score. This presents a flexible technique for the compositional organization of noise-based instrumental sounds.
- BotTalk by Ben Smith "generates mp3 mashups of a last.fm user's top artists. The mashup is created from mp3s from each artist. These are sorted according to pitch and each track has a drum track created by taking the bars from the next track that match the key of the current track. Each track is then mixed with it's beat track, with the beat track looped. The resulting mix tracks are then ordered by loudness and about 30 seconds of each are bolted together to create the final mix-tape." --- corpus-based concatenative mashup synthesis (noticed by HarS)
- Auditory Spectral Summarisation for Audio Signals with Musical Applications, ISMIR 2009, Sam Ferguson and Densil Cabrera, uses the temporal reduction capability of concatenative synthesis for auditory display and exploration of audio spectra.
- TimbreID is a collection of PD externals that allow descriptor analysis, corpus-based concatenative synthesis and CataRT-style 2D interaction. (found by Bruno Ruviaro)
- Expressive Concatenative Synthesis by Reusing Samples from Real Performance Recordings. CMJ 2009, Esteban Maestre, Rafael Ramirez, Stefan Kersten, Xavier Serra. --- culmination of the research work on expression modeling and concatenative synthesis since 2004.
- Beyond Concatenation: Some Ideas for the Creative Use of Corpus-based Sonic Material, ICMC 2009, Thomas Stoll. 
- Scalability in Content-Based Navigation of Sound Databases, ICMC 2009, Schwarz Diemo, Schnell Norbert, Gulluni Sebastien.
- Sound Object Classification for Symbolic Audio Mosaicing: A Proof-of-Concept, SMC 2009, Jordi Janer, Martin Haro, Gerard Roma, Takuya Fujishima, Naoaki Kojima.
- Sound Search by Content-based Navigation in Large Databases, SMC 2009, Diemo Schwarz and Norbert Schnell. --- applies interactive corpus-based browsing based on a 2D projection of the descriptor space (very similar to CataRT) combined with quick selection of classes to searching sounds in large instrument or Fx sound databases.
- Principles and Applications of Interactive Corpus-Based Concatenative Synthesis. JIM 2008, Diemo Schwarz, Roland Cahen, Sam Britton. --- New reference article about CataRT and its musical applications.
- Extending voice-driven synthesis to audio mosaicing, Jordi Janer, Maarten de Boer, Music Technology Group, Universitat Pompeu Fabra, Barcelona, SMC 2009. Abstract: This paper presents a system for controlling audio mosaicing with a voice signal, which can be interpreted as a further step in voice-driven sound synthesis. Compared to voice-driven instrumental synthesis, it increases the variety in the synthesized timbre. Also, it provides a more direct interface for audio mosaicing applications, where the performer's voice controls rhythmic, tonal and timbre properties of the output sound. In a first step, the voice signal is segmented into syllables, extracting a set of acoustic features for each segment. In the concatenative synthesis process, the voice acoustic features (target) are used to retrieve the most similar segment from the corpus of audio sources. We implemented a system working in pseudo-realtime, which analyzes voice input and sends control messages to the concatenative synthesis module. Additionally, this work raises questions to be further explored about mapping the input voice timbre space onto the audio sources timbre space.
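The selection step common to this and most target-driven CBCS systems can be sketched in a few lines: for each target segment's descriptor vector, retrieve the closest corpus unit under a weighted Euclidean distance. This is a minimal illustration, not any particular system's code; the descriptor dimensions and unit names are invented:

```python
# Sketch of descriptor-based unit selection: nearest corpus unit per
# target segment under a weighted Euclidean distance. All descriptor
# values and names below are made up for illustration.

import math

def select_units(targets, corpus, weights):
    """targets, corpus: lists of (name, descriptor-vector) pairs."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a, b)))
    return [min(corpus, key=lambda unit: dist(feat, unit[1]))[0]
            for _, feat in targets]

# Hypothetical 3-D descriptors: (pitch in MIDI, loudness in dB, centroid in kHz)
corpus = [("kick", (40.0, -6.0, 0.3)), ("snare", (57.0, -8.0, 2.5)),
          ("hat", (80.0, -14.0, 7.0))]
voice = [("syll1", (42.0, -7.0, 0.4)), ("syll2", (78.0, -13.0, 6.5))]
sequence = select_units(voice, corpus, weights=(1.0, 1.0, 1.0))
# sequence → ["kick", "hat"]
```

Real systems replace the brute-force `min` with a kD-tree or similar index for large corpora, and add concatenation cost between consecutive units.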
- An Interface for Sequencing with Concatenative Sound Synthesis, Lukasz Kuzma, Master's Thesis at UPF, Barcelona, also presented at SMC 2009.
- Hagevold, Jan Erik. "An Approach to Real-Time Content-Based Audio Synthesis." PhD diss., New York University, 2008.
- Corpus-Based Concatenative Synthesis: Assembling sounds by content-based selection of units from large sound databases. IEEE Signal Processing Magazine Special Section: Signal Processing for Sound Synthesis, Diemo Schwarz. --- New reference article about CBCS.
- Benovoy, Mitchel, Andrew Brouse, Thomas Greg Corcoran, Hannah Drayson, Cumhur Erkut, Jean-Julien Filatriau, Christian Frisson et al. "Audiovisual content generation controlled by physiological signals for clinical and artistic applications." (2007).
- Penrose, Joshua, and Gerald Philips. "Putting the Pieces Together." (2007).
- Chroma Palette: Chromatic Maps of Sound As Granular Synthesis Interface (Justin Donaldson, Ian Knopke, Chris Raphael) --- Uses a 2D interface for (usably real-time?) interactive browsing in a sound space based on chroma (pitch classes). The 20 descriptor dimensions (12 pitch classes + 8 octave classes) are reduced to 2 dimensions by a multi-dimensional scaling approach.
- Musical Applications of Real-Time Corpus-Based Concatenative Synthesis (Diemo Schwarz, Ircam-CNRS STMS, France; Sam Britton, freelance, United Kingdom; Roland Cahen, ENSCI, France; Thomas Goepfer, Ircam-Centre Pompidou, France) --- paper about recent pieces and performances using CataRT
- MUSED: Navigating the Personal Sample Library (Graham Coleman, United States) --- A system resembling CataRT for sample-based music composition. It analyses an input song, segments it into notes and characterises each segment by pitch, loudness, brilliance, etc. Then, the segments are laid out in 2D space and can be chosen and played by clicking on them. There is an innovative reordering feature of the display that places the segments following the clicked-on segment in a row close to it.
- Biotools: Introducing a Hardware and Software Toolkit for Fast Implementation of Biosignals for Musical Applications (Miguel Angel Ortiz Pérez, Benjamin Knapp, Sonic Arts Research Centre, United Kingdom) --- Uses CataRT for performances controlled by a muscular-tension biosensor. See this video of a performance.
- Audiovisual Concatenative Synthesis (Nick Collins, Sussex University, United Kingdom) --- uses 5 audio + 5 video descriptors, audio- or video-driven selection of audio+video segments from a database. Matching of a whole sequence of audio/video by "shingling" (cf. Casey).
- Soundspotter / Remix-TV: Fast Approximate Matching for Audio and Video Performance (Michael Casey, Mick Grierson, Goldsmiths, University of London, United Kingdom) --- Audio-driven audio+video lookup and concatenation using Casey's SoundSpotter. Matching of a whole sequence of audio/video by "shingling". References Michel Chion's work about film-sound. (cf. Collins)
- Guidage: A Fast Audio Query Guided Assemblage (Arshia Cont, UCSD / Ircam, France; Dubnov Shlomo, University of California in San Diego, United States; Gérard Assayag, Ircam, France) --- Factor-Oracle based implicit-segmentation audio-driven recomposition
- DataJockey (Alex Norman, Xavier Amatriain): a tool to help a digital DJ select tracks based on meta-data and sound descriptors. --- This is corpus-based synthesis on a larger time-scale, that of a DJ set.
- ScrambledHacks by Sven König --- spectral lookup along with ganged music video concatenation!
- New spectral explorations by Jean-François Charles --- spectral frame mosaicing in Jitter
- Concatenative Synthesis Using Score-Aligned Transcriptions. ICMC 2006, Roger Dannenberg's take on chroma (i.e. pitch-class) based lookup of notes or chords from classical music
- Using Concatenative Synthesis for Expressive Performance in Jazz Saxophone. ICMC 2006, Maestre, E., Hazan, A., Ramirez, R., and Perez, A.
- Meapsoft --- java software for descriptor-based concatenative reordering
- OStitch, Abram Hindle. -- FFT-based frequency-bin-wise mosaicing experimentation, open source OCaml code
Taxonomy up to 2005
This figure offers a comparative survey and taxonomy of the many different approaches until 2005, referenced in the bibliography below. It shows a comparison of musical sound synthesis methods according to selection (y-axis) and analysis (x-axis), use of concatenation quality (bold), and real-time capabilities (italics).
- Schwarz, D. (2006). Concatenative Sound Synthesis: The Early Years. Journal of New Music Research, 35(1), 3.
- Maestre, E., Hazan, A., Ramirez, R., & Perez, A. (2006). Using Concatenative Synthesis for Expressive Performance in Jazz Saxophone. Paper presented at International Computer Music Conference (ICMC).
- Schwarz, D. (2007). Corpus-Based Concatenative Synthesis. IEEE Signal Processing Magazine, 24(2), 92.
- Schwarz, D., Britton, S., Cahen, R., & Goepfer, T. (2007). Musical Applications of Real-Time Corpus-Based Concatenative Synthesis. Paper presented at International Computer Music Conference (ICMC), Copenhagen, Denmark.
- Schwarz, D., Cahen, R., & Britton, S. (2008). Principles and Applications of Interactive Corpus-Based Concatenative Synthesis. Paper presented at Journées d'Informatique Musicale (JIM), GMEA, Albi, France.
- Stoll, T. (2009). Beyond Concatenation: Some Ideas for the Creative Use of Corpus-based Sonic Material. Paper presented at International Computer Music Conference (ICMC), Montreal, QC.
- Einbond, A., Schwarz, D., & Bresson, J. (2009). Corpus-Based Transcription as an Approach to the Compositional Control of Timbre. Paper presented at International Computer Music Conference (ICMC), Montreal, QC, Canada.
- Maestre, E., Ramírez, R., Kersten, S., & Serra, X. (2009). Expressive Concatenative Synthesis by Reusing Samples from Real Performance Recordings. Computer Music Journal, 33(4), 23.
- Tremblay, P. A., & Schwarz, D. (2010). Surfing the Waves: Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface. Paper presented at Conference for New Interfaces for Musical Expression, Sydney, Australia.
- Martin, A., Ferguson, S., & Beilharz, K. (2010). Mechanisms for Controlling Complex Sound Sources: Applications to Guitar Feedback Control. Paper presented at Conference for New Interfaces for Musical Expression, Sydney, Australia.
- Leslie, G., Schwarz, D., Warusfel, O., Bevilacqua, F., Zamborlin, B., Jodlowski, P., & Schnell, N. (2010). Grainstick: A Collaborative, Interactive Sound Installation. Paper presented at International Computer Music Conference (ICMC), New York, NY.
- Einbond, A., & Schwarz, D. (2010). Spatializing Timbre with Corpus-Based Concatenative Synthesis. Paper presented at International Computer Music Conference (ICMC), New York, NY.
- Schwarz, D., & Schnell, N. (2010). A Modular Sound Descriptor Analysis Framework for Relaxed-real-time Applications. Paper presented at International Computer Music Conference (ICMC), New York, NY.
- Schwarz, D., & Schnell, N. (2010). Descriptor-based Sound Texture Sampling. Paper presented at International Conference on Sound and Music Computing (SMC), Barcelona, Spain.
- Stowell, D., & Plumbley, M. (2010). Timbre Remapping through a Regression-Tree Technique.