Machine Learning for Sound Recognition

From Canadian Centre for Ethnomusicology

Deep Learning for Sound Recognition


How do we recognize the components and attributes of sound, describe and parse an audio recording of music, speech, or environmental sounds, or extract sonic features, classify types, segment units, and identify sources of sounds? Some recordings capture a single sound source: a single instrument, speaker, or bird. Others capture multiple but coordinated sources: a musical ensemble, or a conversation. Typically in fieldwork, however, a recording encompasses a complex mix of uncoordinated sound sources, a total soundscape that may include music as well as speech, music from multiple groups performing simultaneously, many speakers speaking at once, or many bird calls, all layered together with “noise” such as the sounds of crowds, highways and factories, rain, wind, and thunder. Unlike the analogous challenges posed by visual “recordings” (photographs), recognizing complex sound environments on audio recordings remains a rather mysterious process.
In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable digital recording devices of enormous capacity, combined with growing interest in sound across the humanities, social sciences, and sciences, has produced vast collections of sound recordings, bringing sound into the realm of “big data.” To date, most of this collected sound data is not annotated and is therefore, for all practical purposes, inaccessible for research.
Computational recognition of sound, its types, sources, and components is crucial for a wide array of fields, including ethnomusicology, music studies, sound studies, linguistics (especially phonetics), media studies, library and information science, and bioacoustics, because it enables indexing, searching, retrieval, and regression of audio information. While expert human listeners may recognize complex sound environments with ease, the process is slow: they must listen in real time, and they must be trained to hear sonic events contrapuntally. Through this project, we aim to explore applications of deep learning to big data that will ultimately enable these functions across large sound collections for ongoing interdisciplinary research.
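
The sketch below is a purely illustrative, minimal example of the kind of pipeline such work involves; it is not the project's actual system, and the file name and class labels are hypothetical. It converts a recording into a log-mel spectrogram and scores it with a small, untrained convolutional network; a usable classifier would first be trained on annotated recordings.

<syntaxhighlight lang="python">
# Minimal, hypothetical sketch: log-mel spectrogram + tiny CNN classifier.
# The class labels and file name are illustrative only.
import torch
import torch.nn as nn
import torchaudio

CLASSES = ["music", "speech", "birdsong", "ambient noise"]  # illustrative labels

def log_mel(path: str, sample_rate: int = 16000) -> torch.Tensor:
    """Load an audio file, mix to mono, resample, and return a log-mel spectrogram."""
    waveform, sr = torchaudio.load(path)            # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)   # mix down to mono
    if sr != sample_rate:
        waveform = torchaudio.functional.resample(waveform, sr, sample_rate)
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
    return torch.log(mel + 1e-6)                    # (1, 64, frames)

# A deliberately tiny CNN; a real system would be trained on annotated
# recordings before its outputs mean anything.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)

def classify(path: str) -> str:
    features = log_mel(path).unsqueeze(0)           # add batch dimension
    with torch.no_grad():
        scores = model(features)
    return CLASSES[scores.argmax(dim=1).item()]

# Example usage (hypothetical file name):
# print(classify("field_recording.wav"))
</syntaxhighlight>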

Team Members


Principal Investigator: Michael Frishkopf, Professor of Ethnomusicology, Department of Music
Antti Arppe, Assistant Professor of Quantitative Linguistics
Erin Bayne, Professor, Department of Biological Sciences
Vadim Bulitko, Associate Professor, Department of Computing Science
Astrid Ensslin, Professor of Media and Digital Communication
Abram Hindle, Assistant Professor, Department of Computing Science
Mary Ingraham, Professor of Musicology, Director, Sound Studies Initiative, Department of Music
Sean Luyk, Music Librarian and Service Manager of ERA Audio + Video, University of Alberta Libraries
Scott Smallwood, Associate Professor of Music Composition, Department of Music
Benjamin V. Tucker, Associate Professor of Phonetics, Department of Linguistics

Collaborators

Ichiro Fujinaga, Associate Professor in Music Technology, Schulich School of Music, McGill University
George Tzanetakis, Associate Professor, Department of Computer Science, University of Victoria
Anna Lomax Wood, President and Director of Research for the Association for Cultural Equity
Michael Cohen, Professor of Computer Science, University of Aizu, Aizu-Wakamatsu, Japan
Diane Thram, Professor Emerita, Music Department, Rhodes University, South Africa
Philippe Collard, André Lapointe, Frédéric Osterrath, & Gilles Boulianne, Centre de recherche informatique de Montréal (CRIM)

Students

Sergio Poo Hernandez, MSc in Computing Science
Noah Weninger, Undergraduate Research Assistant, Computing Science, University of Alberta

Funding Support (U of A)

KIAS Cluster Grant 2017
Canadian Centre for Ethnomusicology
Hindle/Bulitko Computing Science Labs
Bioacoustic Unit (Biological Sciences)
Alberta Phonetics Laboratory (Linguistics)
Alberta Language Technology Lab (Linguistics)
University of Alberta Research Experience (UARE)

Funding Support (Other)

NVIDIA Corporation
Spatial Media Laboratory, University of Aizu, Japan
Compute Canada
Centre de recherche informatique de Montréal
SSHRC