Julien Bloit


Research associate

I hold a PhD from Université Pierre et Marie Curie - Paris 6 (EDITE), completed at IRCAM, in the Real-Time Musical Interaction team.

Supervisor: Xavier Rodet, professor at Paris 6


Contact: julien.bloit [at] ircam.fr


Research interests

I work on probabilistic modeling of audio events in a monophonic audio stream. I'm mostly interested in describing sound morphologies, in the sense of sound "gestures" rather than notes defined by a static {pitch, intensity, duration} triplet. One of the underlying motivations derives from the context of musical interaction with complex instrumental sounds from contemporary playing techniques.

One of the challenges consists in defining the latent layer of the model (the symbolic units) when no definitive taxonomy exists to describe all possible instrumental sounds. My work studies how a complex musical vocabulary can be factorized with a simpler set of elementary profiles.

For this purpose, I have been working with hidden Markov models and studying specific extensions such as segmental models (where a single state emits a whole observation sequence) and state-space factorization techniques.
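As an illustration of what a segmental state changes, here is a minimal sketch (illustrative Python/NumPy, not the models used in my work): the state scores a whole descriptor trajectory at once, so duration and the frame-to-frame shape of the segment enter the likelihood. The linear-trajectory form and all parameter values are assumptions for the example.

```python
import numpy as np

def segment_loglik(obs, slope, intercept, sigma):
    """Log-likelihood of a whole observation segment under one segmental
    state whose mean descriptor trajectory is the line slope*t + intercept,
    with i.i.d. Gaussian deviations of scale sigma around that line."""
    t = np.arange(len(obs))
    resid = obs - (slope * t + intercept)
    return float(-0.5 * np.sum(resid ** 2) / sigma ** 2
                 - len(obs) * np.log(sigma * np.sqrt(2.0 * np.pi)))

# A rising descriptor trajectory is scored much higher by a "rising"
# segmental state than by a "flat" one at the same average level,
# even though both states see exactly the same frames.
obs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rising = segment_loglik(obs, slope=0.5, intercept=0.0, sigma=0.1)
flat = segment_loglik(obs, slope=0.0, intercept=1.0, sigma=0.1)
print(rising > flat)  # True
```

A framewise HMM with a single Gaussian per state could not distinguish these two cases as sharply, since it ignores the ordering of frames within the segment.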

In the longer term, one of my goals is to derive a set of elementary profiles automatically, which could also serve musicological purposes. For this reason, I'm interested in automatic model selection.

For the sake of real-time interaction, I have studied under which conditions the Viterbi algorithm can output the optimal path in an online context. I identified necessary conditions for a short-time version of the algorithm to work, and studied the relation between an HMM's topology and the decoding latency.
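A minimal sketch of that idea (illustrative Python/NumPy, not my actual implementation): at each new frame, the survivor paths from all states are traced backward; once they all merge into a single "fusion" state, the prefix up to that state is guaranteed to lie on the optimal path and can be emitted immediately. The gap between the current frame and the fusion point is the decoding latency, which depends on the transition topology.

```python
import numpy as np

def short_time_viterbi(log_pi, log_A, frame_logliks):
    """Online Viterbi decoding. Returns (states, decided_at), where
    decided_at[f] is the time step at which frame f's state became certain;
    decided_at[f] - f is the decoding latency for that frame."""
    N, T = len(log_pi), len(frame_logliks)
    delta = log_pi + frame_logliks[0]
    psi = []            # psi[i][s]: best predecessor (at frame i) of state s at frame i+1
    states, decided_at = [], []
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[r, s] = delta[r] + log_A[r, s]
        psi.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + frame_logliks[t]
        # Trace all survivor paths backward until they merge
        # or until we reach the already-emitted frontier.
        survivors, i = set(range(N)), t
        while i > len(states) and len(survivors) > 1:
            i -= 1
            survivors = {int(psi[i][s]) for s in survivors}
        if len(survivors) == 1:                  # fusion: frames up to i are decided
            seq = [survivors.pop()]
            for j in range(i - 1, len(states) - 1, -1):
                seq.append(int(psi[j][seq[-1]]))
            states.extend(reversed(seq))
            decided_at.extend([t] * len(seq))
    # Flush the still-undecided tail with a standard final backtrack.
    seq = [int(delta.argmax())]
    for j in range(T - 2, len(states) - 1, -1):
        seq.append(int(psi[j][seq[-1]]))
    states.extend(reversed(seq))
    decided_at.extend([T - 1] * len(seq))
    return states, decided_at

# Toy 2-state example: three frames favouring state 0, then three favouring state 1.
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
e0, e1 = np.log(np.array([0.9, 0.1])), np.log(np.array([0.1, 0.9]))
states, decided_at = short_time_viterbi(log_pi, log_A, [e0, e0, e0, e1, e1, e1])
print(states)  # [0, 0, 0, 1, 1, 1]
```

With a topology that makes fusion frequent (e.g. strongly left-to-right), most frames are decided with small latency; with a topology that keeps many paths alive, decisions are deferred and latency grows, which is the trade-off mentioned above.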


PhD details

Title: Musical interaction and sonic gestures: temporal modeling of audio descriptors

Defended on 17 March 2010 before the following jury:

  • Prof. Manuel Davy (Reviewer), INRIA.
  • Prof. David Wessel (Reviewer), CNMAT, UC Berkeley.
  • Prof. Thierry Artières (Examiner), LIP6, Paris 6.
  • Frédéric Bevilacqua (Examiner), IRCAM, Paris 6.
  • Prof. Xavier Rodet (Supervisor), IRCAM, Paris 6.

Abstract

This thesis deals with the modeling of instrumental sounds in a musical interaction context involving a performer and a computer music system. Whenever this interaction relies on the extraction of symbolic information, existing systems usually assume that the signal is structured as notes, defined by steady values of pitch, duration and intensity. However, this representation cannot account for contemporary instrumental vocabularies, which explore other musical dimensions (such as timbre and temporal evolution) through the use of extended playing techniques.

Instead of undertaking the exhaustive modeling of contemporary instrumental sounds, we assume that a sonic gesture vocabulary can be represented as the combination of characteristic profiles along several perceptual dimensions. A sonic gesture is thus modeled as trajectories on multiple streams of audio descriptors approximating these dimensions.

In a Bayesian framework, our approach studies a multistream model able to account for the asynchrony among several hidden processes, as well as the statistical dependency between descriptors. For each stream, we propose modeling trajectories using segmental models, whose structure represents duration and correlations between successive observations better than models based on framewise observations. We subsequently study the relation between real-time decoding constraints and a model's topology, particularly in terms of an accuracy/latency trade-off. Evaluations were conducted on several synthetic databases, and on a set of violin sounds recorded for the purpose of this work.

Publications


NB: strangely, articles downloaded from the articles.ircam.fr server are named file_name.pdf.part. Delete the .part extension after downloading; the PDF should then open correctly.

Talks

  • Bloit, J. Towards sonic gesture models for interaction. Mills College, Seminar in Electronic Music Performance (Chris Brown), Feb. 2011.
  • Bloit, J. Modélisation d’événements musicaux : approche multi-flux. IRCAM research and technology seminars, Oct. 2008.
  • Bloit, J. Short-time Viterbi for online HMM decoding. UCSD, CRCA, Nov. 7, 2007.
  • Bloit, J., Lanchantin, P., Rodet, X. Décodage dans un flux : reconnaissance et alignement de parole. IRCAM research and technology seminars, Sep. 2007.
  • Bloit, J. Modélisation et reconnaissance en temps réel d’événements musicaux : sélection de modèle et décodage. IRCAM research and technology seminars, Dec. 2006.

Projects

  • Urban Musical Game : using a sports ball as a music controller.
  • HARTIS : real-time audio synthesis software for the automobile industry (PSA's sound design division).
  • From Kafka to K... : design and implementation (Flash) of the musicological DVD-ROM. Ircam Hypermedia lab.
  • Sound for the short film Métempsycose by Buzio Saraiva.
  • DVD à la carte : design and implementation (xml/Cocoon) of a DVD-on-demand web application. Ircam Hypermedia lab.
  • Groupe Dunes : design and implementation (Max/MSP) of two scenographic installations (Sans titre Provisoire. Ferme du Buisson, Noisiel, Festival Temps d’Images, 2001 / L’Espace Turbulent. Québec, Caserne Dalhousie, 2001).
