Discussions: Machines of Loving Grace, Creation and Creativity Between Humans and Machines

Day 1 – YouTube livestream: (See the link below for Day 2)


All Watched Over by Machines of Loving Grace is a 1967 poem by Richard Brautigan that describes the peaceful and harmonious cohabitation of humans and computers. Fifty years later – following the technological revolutions brought about by the internet, big data, cloud computing, and deep learning – Brautigan’s vision resonates with singular force. It compels us to rethink our interactions with machines, even in the intimate act of artistic creation. In collaboration with its American partners, France’s Institute for Research and Coordination in Acoustics/Music (IRCAM) invites participants to discover remarkable examples of possible artistic relationships between humans and computers during two seminars featuring researchers and artists who are shaping this new relationship.

This event is organized by IRCAM – the Institute for Research and Coordination in Acoustics/Music – and the STMS lab (The French National Centre for Scientific Research, the Ministry of Culture and Sorbonne University), in collaboration with Georgia Tech’s School of Music, NYU Steinhardt – Music Education, the UC Berkeley Center for New Music and Audio Technologies, and the Atlanta Office of the Cultural Services of the Embassy of France in the United States.

Discussions:
Day 1: “Working Creatively with Machines”
October 14, 11:00 a.m. (Atlanta) / 5:00 p.m. (Paris)
Rewatch YouTube livestream

Introduction by Yves Berthelot, Vice-Provost for International Initiatives, Georgia Institute of Technology, and Frank Madlener, Director of IRCAM

Participants:
  • Carmine Cella (CNMAT – UC Berkeley)
  • Rémi Mignot (IRCAM)
  • Nicolas Obin (Sorbonne Université & IRCAM)
  • Alex Ruthmann (New York University)
  • Jason Freeman (Georgia Tech)
Day 2: “Performances with Machines”
October 15, 11:00 a.m. (Atlanta) / 5:00 p.m. (Paris)
Rewatch YouTube livestream

Participants:
  • Jérôme Nika (IRCAM)
  • Benjamin Lévy (IRCAM)
  • Daniele Ghisi (Composer)
  • Grace Leslie (Georgia Tech)
  • Elaine Chew (CNRS)

Virtual exhibition:

Until November 10, a virtual exhibition presents excerpts of works and tools that rely heavily on artificial intelligence techniques and that question the relationship between humans and machines. The exhibition includes works by artists and researchers such as Daniele Ghisi (composer), Jérôme Nika and Gérard Assayag (researchers, IRCAM), Jason Freeman (Georgia Tech), Carmine Emanuele Cella (professor and composer, CNMAT – UC Berkeley), and Alex Ruthmann (professor, New York University).
More info here.
Link to the virtual exhibition

Panelist Biographies:

  • Elaine Chew: Elaine Chew is a senior CNRS researcher at the STMS Lab at IRCAM and a Visiting Professor at King’s College London. Her research focuses on mathematical and computational modeling of structures in music and in electrocardiographic sequences. The tools are used to explicate leading performances, understand music-heart-brain interactions, stratify arrhythmias into subtypes, and generate AI music. Her work has been recognised by PECASE and NSF CAREER awards, Fellowships at Harvard’s Radcliffe Institute for Advanced Study, and ERC ADG and POC awards. She was Professor of Digital Media at QMUL (2011-2019) and Assistant/tenured Associate Professor and Viterbi Early Career Chair at USC (2001-2013), and held visiting appointments at Harvard (2008-2009) and Lehigh (2000-2001). As a pianist, she integrates her research into concert-conversations that showcase scientific visualisations and lab-grown compositions. She received PhD and SM degrees in Operations Research from MIT, a BAS in Mathematical & Computational Sciences (honors) and Music (distinction) from Stanford, and LTCL/FTCL piano diplomas.
  • Daniele Ghisi: Born in Italy in 1984, Daniele Ghisi studied music composition at the Bergamo Conservatory with S. Gervasoni and continued his training in IRCAM’s Cursus program. In 2009-2010 he was a composer in residence at the Akademie der Künste (Berlin), and in 2011-2012 he was a composer in residence in Spain as a member of the Académie de France in Madrid – Casa de Velázquez. In 2015 he was in residence in Milan with the Divertimento Ensemble, which recorded his first monographic CD (Geografie). Since 2010, he has been developing, with composer Andrea Agostini, “bach: automated composer’s helper”, a library for computer-assisted composition. He is the co-founder of the blog nothing.eu, to which he contributes. His music is published by Ricordi. Between 2017 and 2020 he taught electroacoustic composition at the Genoa Conservatory. He is currently a composer-researcher at the University of California, Berkeley (CNMAT).
  • Jérôme Nika: Jérôme Nika is a researcher in human-machine musical interaction in the Music Representations Team at IRCAM. He graduated from the French Grandes Écoles Télécom ParisTech and ENSTA ParisTech. In addition, he studied acoustics, signal processing, and computer science applied to music, as well as composition. He specialized in the applications of computer science and signal processing to digital creation and music through a PhD (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), and then as a researcher at IRCAM. His research focuses on the introduction of authoring, composition, and control in human-computer music co-improvisation. This work has led to numerous collaborations and musical productions, particularly in improvised music (Steve Lehman, Bernard Lubat, Benoît Delbecq, Rémi Fox) and contemporary music (Pascal Dusapin, Marta Gentilucci). In 2019-2020, his work was featured in three ambitious productions: Lullaby Experience, an evolving project by composer Pascal Dusapin, and two improvised music projects: Silver Lake Studies, a duo with Steve Lehman, and C’est Pour ça, a duo with Rémi Fox. In 2020 he was in residence at Le Fresnoy – Studio National des Arts Contemporains. More info: https://jeromenika.com
  • Jason Freeman: Jason Freeman is a Professor of Music at Georgia Tech and Chair of the School of Music. His artistic practice and scholarly research focus on using technology to engage diverse audiences in collaborative, experimental, and accessible musical experiences. He also develops educational interventions in K-12, university, and MOOC environments that broaden and increase engagement in STEM disciplines through authentic integrations of music and computing. His music has been performed at Carnegie Hall, exhibited at ACM SIGGRAPH, published by Universal Edition, broadcast on public radio’s Performance Today, and commissioned through support from the National Endowment for the Arts. Freeman’s wide-ranging work has attracted over $10 million in funding from sources such as the National Science Foundation, Amazon, and Turbulence. It has been disseminated through over 80 refereed book chapters, journal articles, and conference publications. Freeman received his B.A. in music from Yale University and his M.A. and D.M.A. in composition from Columbia University.
  • Carmine Emanuele Cella: Carmine Emanuele Cella is an internationally renowned composer with advanced studies in applied mathematics. He studied at the G. Rossini Conservatory of Music in Italy, earning master’s degrees in piano, computer music, and composition, and he received a PhD in musical composition from the Accademia di S. Cecilia in Rome. He also studied philosophy and mathematics and earned a PhD in mathematical logic at the University of Bologna with a dissertation entitled On Symbolic Representations of Music (2011). In 2007-2008, he worked as a researcher in IRCAM’s Analysis/Synthesis team in Paris on audio indexing, and since January 2019 he has been Assistant Professor of Music and Technology at the University of California, Berkeley.
  • Alex Ruthmann: S. Alex Ruthmann is Associate Professor of Music Education & Music Technology, and the Director of the NYU Music Experience Design Lab (MusEDLab) at NYU Steinhardt. He holds affiliate appointments with the NYU Digital Media Design for Learning program, and the Program on Creativity and Innovation at NYU Shanghai. He currently serves as Chair of the Music in Schools and Teacher Education Commission for the International Society for Music Education. Ruthmann is co-author of Scratch Music Projects, a new book published by Oxford University Press that brings creative music and coding projects to students and educators. He is co-editor of the Oxford Handbook of Technology and Music Education, and the Routledge Companion to Music, Technology and Education. He also serves as Associate Editor of the Journal of Music, Technology, and Education. Ruthmann’s research focuses on the design of new technologies and experiences for music making, learning, and engagement. Partners include the New York Philharmonic, Shanghai Symphony, Peter Gabriel, Herbie Hancock, Yungu and Portfolio Schools, Tinkamo, UNESCO, and the Rock and Roll Forever Foundation. The MusEDLab’s creative learning and software projects are in active use by over 900,000 people in more than 150 countries.
  • Rémi Mignot: Rémi Mignot is a researcher in the Analysis-Synthesis team at IRCAM. In 2009, he obtained his PhD from the EDITE doctoral school for work on the modeling and simulation of acoustic waves in wind instruments, with Thomas Hélie (IRCAM) and Denis Matignon (Supaero). In 2010-2012, he joined the Institut Langevin (Paris) for post-doctoral research on the sampling of room impulse responses using compressed sensing, with Laurent Daudet (Paris Diderot) and François Ollivier (UPMC). In 2012-2014, he moved to the Department of Signal Processing and Acoustics at Aalto University in Espoo, Finland, to work with Vesa Välimäki on the extended subtractive synthesis of musical instruments. He returned to IRCAM in 2014 to conduct research on audio indexing and classification with Geoffroy Peeters. Since 2018, he has been responsible for research on music information retrieval in the Analysis-Synthesis team.
  • Nicolas Obin: Nicolas Obin is a researcher in audio signal processing, machine learning, and statistical modeling of sound signals, with a specialization in speech processing. His main area of research is the generative modeling of expressivity in spoken and singing voices, with applications in fields such as speech synthesis, conversational agents, and computational musicology. He is actively involved in promoting digital science and technology for the arts, culture, and heritage. In particular, he has collaborated with renowned artists (Georges Aperghis, Philippe Manoury, Roman Polanski, Philippe Parreno, Eric Rohmer, André Dussollier) and has helped reconstitute the digital voices of public figures, including the artificial cloning of André Dussollier’s voice (2011), the short film Marilyn (P. Parreno, 2012), and the documentary Juger Pétain (R. Saada, 2014). He regularly gives guest lectures at renowned institutions (Collège de France, École Normale Supérieure, Sciences Po) and organizations (CNIL, AIPPI), and appears in the press and media (Le Monde, Télérama, TF1, France 5, Arte, Pour la Science).
  • Grace Leslie: Grace Leslie is a flutist, electronic musician, and scientist. She develops brain-music interfaces and other physiological sensor systems that reveal to an audience aspects of her internal cognitive and affective state that are left unexpressed by sound or gesture. As an electronic music composer and improviser, she maintains a brain-body performance practice, striving to integrate the modes of emotional and musical expression she learned as a flutist with the new forms available to her as an electronic musician. In recent years she has performed this music at academic and popular music venues, conferences, and residencies in the United States, the UK, Australia, Germany, Singapore, South Korea, China, and Japan, and has released three records of this mind-body music. During her PhD studies (Music and Cognitive Science at UCSD), she completed a year-long position at IRCAM in Paris, where she collaborated on an interactive sound installation and conducted experiments studying the effect of active involvement on music listening. She completed her undergraduate and master’s work in Music, Science, and Technology at CCRMA, Stanford University.
  • Benjamin Lévy: A computer music designer at IRCAM, Benjamin Lévy studied both science (primarily computer science, with a PhD in engineering) and music. Since 2008, he has collaborated on both scientific and musical projects with several teams at IRCAM, in particular around the OMax improvisation software. As an R&D engineer and developer, he has also worked in the private sector for companies specialized in creative audio technologies. He has taken part in several artistic projects at IRCAM and elsewhere as a computer musician for contemporary music works as well as jazz, free improv, theater, and dance. He has collaborated with choreographers such as Aurélien Richard, worked on musical theater with Benjamin Lazar, and performs with the jazz saxophonist Raphaël Imbert.

Event Partners:

  • IRCAM: The Institute for Research and Coordination in Acoustics/Music is one of the world’s largest public research centers dedicated to musical creation and scientific research. A unique venue where artistic vision converges with scientific and technological innovation, the institute, directed by Frank Madlener, brings together over 160 collaborators. IRCAM hosts the UMR 9912 STMS (IRCAM – CNRS – Sorbonne University) science and technology research lab.
  • STMS Lab: The fundamental principle of the STMS Lab is to encourage productive interaction among scientific research, technological development, and contemporary music production. Since its establishment in 1995, this principle has provided the foundation for the institute’s activities. A major goal is to contribute to the renewal of musical expression through science and technology. Conversely, specific problems related to contemporary composition have led to innovative theoretical, methodological, and applied advances in the sciences, with ramifications far beyond the world of music. The work carried out in the STMS joint research lab (Science and Technology of Music and Sound) is supported by the CNRS, Sorbonne Université, and the French Ministry of Culture.
  • NYU: New York University is a private research university based in New York City and founded in 1831 by Albert Gallatin as an institution to “admit based upon merit rather than birthright or social class”. NYU is the largest independent research university in the United States. The university has numerous research efforts, including founding the American Chemical Society and holding research partnerships with the Inception Institute of Artificial Intelligence and with major technology firms such as Twitter and IBM. The university has also since launched various internal research centers in the fields of artificial intelligence, history, culture, medicine, mathematics, philosophy, and economics.
  • Georgia Tech: The Georgia Institute of Technology, also known as Georgia Tech, is a top-ranked public college and one of the leading research universities in the USA. Georgia Tech provides a technologically focused education to more than 25,000 undergraduate and graduate students in fields ranging from engineering, computing, and sciences, to business, design, and liberal arts. Georgia Tech’s wide variety of technologically focused majors and minors consistently earn strong national rankings. Georgia Tech has six colleges and 28 schools focusing on Business, Computing, Design, Engineering, Liberal Arts, and Sciences.
  • UC Berkeley: The University of California, Berkeley is a public research university. It was founded in 1868, born out of a vision in the State Constitution of a university that would “contribute even more than California’s gold to the glory and happiness of advancing generations.” It is the oldest campus of the University of California system and ranks among the world’s top universities in major educational publications. Berkeley offers over 350 degree programs through its 14 colleges and schools.