Machine Learning for Sound Recognition

short URL: http://bit.ly/cce-dlsr

Deep Learning for Sound Recognition

How do we recognize sound? How can we identify sound's many components and attributes? Given an audio recording (whether of musical, linguistic, or environmental sounds), how do we extract sonic features (acoustic or psychoacoustic: pitch, meter, emotion), classify types (genres, dialects, species), segment units (phonemes, notes, songs), and identify sources (speakers, singers, instruments, composers)? The general problem is complex, since visual and other sensory information is lacking. Some recordings capture a single sound source: one instrument, speaker, or bird; others gather multiple but coordinated sources: a musical ensemble, or a conversation.
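
The sketch below is purely illustrative (it is not part of the project's codebase): it extracts two such features, a log-mel spectrogram and a rough pitch track, in Python with the open-source librosa library. The file name and parameter values are assumptions for illustration only.

<syntaxhighlight lang="python">
# Illustrative sketch only: basic acoustic feature extraction with librosa.
# The file name and parameter values below are assumptions, not project settings.
import librosa
import numpy as np

# Load a recording as a mono waveform at a fixed sampling rate.
y, sr = librosa.load("field_recording.wav", sr=22050, mono=True)

# Log-mel spectrogram: a common time-frequency representation used as
# input for classification, segmentation, and source identification.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# A rough fundamental-frequency (pitch) track, one acoustic feature
# among many that a recognizer might use.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

print(log_mel.shape)   # (n_mels, n_frames)
print(np.nanmean(f0))  # mean estimated pitch in Hz (NaN frames are unvoiced)
</syntaxhighlight>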

More generally, in ethnomusicological, linguistic, or bioacoustic fieldwork, a recording encompasses a mix of uncoordinated sound sources, a total soundscape combining music, speech, and environmental sound in complex ways: music from multiple groups performing simultaneously, many speakers talking at once, or multiple environmental sound sources. Recordings layer “signals” (sounds of research interest) with “noise” (unwanted sound, including anthropogenic sounds of crowds, highways, and factories, or natural sounds of animals, plants, rain, wind, and thunder). In contrast to the analogous challenge of recognizing components of visual “recordings” (photographs), the process by which we recognize features of complex sound environments on audio recordings remains rather mysterious. More complex still are the psychoacoustic and cognitive processes by which we recognize emotions in particular sounds, especially in speech and music.

In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable digital recording devices of enormous capacity, combined with growing interest in sound across the humanities, social sciences, and sciences, has produced vast collections of sound recordings, bringing sound into the realm of “big data.” To date, most of this material is not annotated and is therefore, for all practical purposes, inaccessible to research.

Computational recognition of sound, its types, sources, attributes, and components--what may be called "machine audition," by analogy to the better-developed field of "machine vision"--is crucial for a wide array of fields, including ethnomusicology, music studies, sound studies, linguistics (especially phonetics), media studies, library and information science, and bioacoustics, in order to enable indexing, searching, retrieval, and regression of audio information. While expert human listeners may be able to recognize certain complex sound environments with ease, the process is slow: they listen in real time, and they must be trained to hear sonic events contrapuntally.


This project explores the use of artificial intelligence to solve these problems. We apply supervised machine learning--primarily, deep neural networks trained on large datasets--to develop sound recognition algorithms. Such algorithms enable labelling of digital repositories and support interdisciplinary research on sound. More theoretically, this research helps develop machine audition and machine learning more broadly.
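
As a rough illustration of this approach (not the project's actual models or data), the sketch below defines a small convolutional network in PyTorch that maps a batch of log-mel spectrograms to class scores, such as instrument, species, or speaker labels, and computes the gradient of a supervised loss. Layer sizes, the number of classes, and the dummy inputs are assumptions for illustration only.

<syntaxhighlight lang="python">
# Illustrative sketch only: a small supervised CNN classifier for spectrograms.
import torch
import torch.nn as nn

class SoundClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Two convolutional blocks followed by global pooling keep the
        # example small; real models would be considerably deeper.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        # x: batch of spectrograms with shape (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x).flatten(1))

model = SoundClassifier(n_classes=10)
spectrograms = torch.randn(8, 1, 128, 256)   # 8 dummy log-mel spectrograms
labels = torch.randint(0, 10, (8,))          # 8 dummy class labels
loss = nn.CrossEntropyLoss()(model(spectrograms), labels)
loss.backward()                              # gradients for one training step
</syntaxhighlight>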

Team Members

Principal Investigator: Michael Frishkopf, Professor of Ethnomusicology, Department of Music
Antti Arppe, Assistant Professor of Quantitative Linguistics
Erin Bayne, Professor, Department of Biological Sciences
Vadim Bulitko, Associate Professor, Department of Computing Science
Astrid Ensslin, Professor of Media and Digital Communication
Abram Hindle, Assistant Professor, Department of Computing Science
Mary Ingraham, Professor of Musicology, Director, Sound Studies Initiative, Department of Music
Sean Luyk, Music Librarian and Service Manager of ERA Audio + Video, University of Alberta Libraries
Scott Smallwood, Associate Professor of Music Composition, Department of Music
Benjamin V. Tucker, Associate Professor of Phonetics, Department of Linguistics

Collaborators

Ichiro Fujinaga, Associate Professor in Music Technology, Schulich School of Music, McGill University
George Tzanetakis, Associate Professor, Department of Computer Science, University of Victoria (developer of Marsyas)
Anna Lomax Wood, President and Director of Research for the Association for Cultural Equity, Hunter College, NYC
Michael Cohen, Professor of Computer Science, University of Aizu, Aizu-Wakamatsu, Japan.
Diane Thram, Professor Emerita, Music Department, Rhodes University, South Africa
Philippe Collard, André Lapointe, Frédéric Osterrath, & Gilles Boulianne, Centre de recherche informatique de Montréal (CRIM)

Students

Sergio Poo Hernandez, PhD student in Computing Science
Matthew Kelley, PhD student in Linguistics
Rameel Sethi, MA student in Computing Science
Noah Weninger, Undergraduate Research Assistant, Computing Science
Shelby Carleton, Undergraduate Research Assistant, MLCS
Yourui Guo, Undergraduate Research Assistant, Computing Science

Funding Support (U of A)

KIAS Team Grant 2016
KIAS Cluster Grant 2017
Canadian Centre for Ethnomusicology
Hindle/Bulitko Computing Science Labs
Bioacoustic Unit (Biological Sciences)
Alberta Phonetics Laboratory (Linguistics)
Alberta Language Technology Lab (Linguistics)
University of Alberta Research Experience (UARE)

Funding Support (Other)

NVIDIA Corporation
Spatial Media Laboratory, University of Aizu, Japan
Compute Canada
Centre de recherche informatique de Montréal
SSHRC


Publications and Presentations

Linguistics:

Kelley, Matthew C. and Benjamin V. Tucker. A comparison of input types to a deep neural network-based forced aligner. Accepted for Interspeech 2018, Sep 2-6, Hyderabad, India.

Ensslin, Astrid, Tejasvi Goorimoorthee, Shelby Carleton, Vadim Bulitko, and Sergio Poo Hernandez (2017), “Deep Learning for Speech Accent Detection in Videogames,” ed. Mike Cook et al., Proceedings of AIIDE / EXAG (Experimental AI in Games) 4, Oct 5-9th 2017, University of Utah.


Music:

Michael Frishkopf, Yourui Guo, Noah Weninger, Matthew Kelley, Sergio Hernandez, Vadim Bulitko. Deep Learning for Sound Recognition. Peer reviewed roundtable accepted for the 2018 Annual Meeting of the Society for Ethnomusicology, Albuquerque.

Rameel Sethi, Noah Weninger, Abram Hindle, Vadim Bulitko, Michael Frishkopf (2018), "Training Deep Convolutional Networks with Unlimited Synthesis of Musical Examples for Multiple Instrument Recognition." Accepted for Sound & Music Computing, July 2018.

Frishkopf, Michael, with research from Sergio Hernandez, supervised by Vadim Bulitko. Towards an Extensible Global Jukebox: Deep Learning for Cantometrics Coding; for the panel "The Global Jukebox: Science, Humanism and Cultural Equity" (Chair: Anna Wood, Association for Cultural Equity). Society for Ethnomusicology annual meeting, Denver, 2017. Presented again at the Eighteenth International Symposium on Spatial Media, Aizu-Wakamatsu, Japan, March 3-4, 2018, with additional research results from Yourui Guo.

Bioacoustics:

Knight, E. C., K. C. Hannah, G. Foley, C. Scott, R. Mark Brigham, and E. Bayne. 2017. Recommendations for acoustic recognizer performance assessment with application to five common automated signal recognition programs. Avian Conservation and Ecology 12(2):14. https://doi.org/10.5751/ACE-01114-120214

Shonfield, J., and E. M. Bayne. 2017. Autonomous recording units in avian ecological research: current use and future applications. Avian Conservation and Ecology 12(1):14. https://doi.org/10.5751/ACE-00974-120114

Yip, D. A., L. Leston, E. M. Bayne, P. Sólymos, and A. Grover. 2017. Experimentally derived detection distances from audio recordings and human observers enable integrated analysis of point count data. Avian Conservation and Ecology 12(1):11. https://doi.org/10.5751/ACE-00997-120111

Wilson, S. Use of an Acoustic Location System to Understand Songbird Response to Vegetation Regeneration on Reclaimed Wellsites in the Boreal Forest of Alberta. M.Sc. thesis, September 2017.

Resources