Machine Learning for Sound Recognition


short URL: http://bit.ly/cce-dlsr

Deep Learning for Sound Recognition

How do we recognize sound? How can we identify its many components and attributes? Given an audio recording (whether of musical, linguistic, or environmental sounds), how do we extract sonic features (acoustic or psychoacoustic: pitch, meter, emotion), classify types (genres, dialects, species), segment units (phonemes, notes, songs), and identify sources (speakers, singers, instruments, composers)? The general problem is complex, since audio lacks the visual and other sensory information that ordinarily supports recognition. Some recordings capture a single sound source: one instrument, speaker, or bird; others gather multiple but coordinated sources: a musical ensemble, or a conversation.
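As a concrete illustration of the feature-extraction step, the sketch below pulls a few standard acoustic descriptors from a single recording: MFCCs (timbre), chroma (pitch-class content), a beat track (a rough meter proxy), and a pYIN pitch contour. It is a minimal sketch, assuming the open-source librosa library; the filename is hypothetical.

    # A minimal sketch of acoustic feature extraction with librosa;
    # "field_recording.wav" is a hypothetical input file.
    import librosa
    import numpy as np

    # Load as a mono waveform at librosa's default 22,050 Hz rate.
    y, sr = librosa.load("field_recording.wav")

    # Low-level descriptors commonly fed to classifiers.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)     # pitch-class content
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)   # rough meter proxy

    # Fundamental-frequency (pitch) contour via the pYIN estimator;
    # unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))

    print(mfcc.shape, chroma.shape, tempo, np.nanmean(f0))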

More generally, in ethnomusicological, linguistic, or bioacoustic fieldwork, a recording encompasses a mix of uncoordinated sound sources: a total soundscape combining music, speech, and environmental sound in complex ways, whether music from multiple groups performing simultaneously, many speakers talking at once, or multiple environmental sound sources. Recordings layer “signals” (sounds of research interest) with “noise” (unwanted sound, including anthropogenic sounds of crowds, highways, and factories, or natural sounds of animals, plants, rain, wind, and thunder). Unlike the analogous challenge of recognizing components of visual “recordings” (photographs), our ability to recognize features of complex sound environments on audio recordings remains poorly understood. More complex still are the psychoacoustic and cognitive processes by which we recognize emotion in sound, particularly in speech and music.

In contrast to an earlier era of “small data” (largely a result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable digital recording devices of enormous capacity, combined with growing interest in sound across the humanities, social sciences, and sciences, has produced vast collections of sound recordings, bringing sound into the realm of “big data.” To date, however, most of this material is unannotated and therefore, for all practical purposes, inaccessible for research.

Computational recognition of sound and its types, sources, attributes, and components (what may be called “machine audition,” by analogy to the better-developed field of “machine vision”) is crucial for a wide array of fields, including ethnomusicology, music studies, sound studies, linguistics (especially phonetics), media studies, library and information science, and bioacoustics, because it enables indexing, searching, retrieval, and regression of audio information. While expert human listeners may recognize certain complex sound environments with ease, the process is slow: they listen in real time, and they must be trained to hear sonic events contrapuntally.
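To make “indexing, searching, retrieval” concrete, the sketch below builds a nearest-neighbour index over fixed-length clip representations and retrieves the clips most similar to a query. It is a minimal sketch, assuming scikit-learn; the random vectors are placeholders for any real learned embedding of an audio archive.

    # A minimal sketch of audio indexing and retrieval; the random
    # 128-dimensional vectors stand in for real clip embeddings.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    archive = rng.normal(size=(1000, 128))   # 1,000 clips in the archive
    query = rng.normal(size=(1, 128))        # one query clip

    # Build the index once; searches then run far faster than
    # real-time listening.
    index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(archive)
    distances, ids = index.kneighbors(query)
    print(ids[0])  # archive positions of the 5 most similar clips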

This project explores the use of supervised machine learning (primarily deep neural networks trained on large datasets) for algorithmic sound recognition across large digital repositories, both to support interdisciplinary research on sound and to develop machine audition and machine learning more broadly.
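As a minimal sketch of the supervised approach, the example below trains a small feed-forward network to map fixed-length acoustic feature vectors to class labels. It assumes TensorFlow/Keras; the random features and labels stand in for a real annotated corpus, and the architecture is illustrative rather than the project's actual model.

    # A minimal sketch of supervised sound classification with a small
    # neural network; X and y are random placeholders for annotated data.
    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40)).astype("float32")  # e.g. mean MFCC vectors
    y = rng.integers(0, 4, size=500)                  # 4 hypothetical classes

    model = keras.Sequential([
        keras.layers.Input(shape=(40,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(4, activation="softmax"),  # one unit per class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on training data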

Team Members

Principal Investigator: Michael Frishkopf, Professor of Ethnomusicology, Department of Music
Antti Arppe, Assistant Professor of Quantitative Linguistics
Erin Bayne, Professor, Department of Biological Sciences
Vadim Bulitko, Associate Professor, Department of Computing Science
Astrid Ensslin, Professor of Media and Digital Communication
Abram Hindle, Assistant Professor, Department of Computing Science
Mary Ingraham, Professor of Musicology, Director, Sound Studies Initiative, Department of Music
Sean Luyk, Music Librarian and Service Manager of ERA Audio + Video, University of Alberta Libraries
Scott Smallwood, Associate Professor of Music Composition, Department of Music
Benjamin V. Tucker, Associate Professor of Phonetics, Department of Linguistics

Collaborators

Ichiro Fujinaga, Associate Professor in Music Technology, Schulich School of Music, McGill University
George Tzanetakis, Associate Professor, Department of Computer Science, University of Victoria (developer of Marsyas)
Anna Lomax Wood, President and Director of Research for the Association for Cultural Equity, Hunter College, NYC
Michael Cohen, Professor of Computer Science, University of Aizu, Aizu-Wakamatsu, Japan.
Diane Thram, Professor Emerita, Music Department, Rhodes University, South Africa
Philippe Collard, André Lapointe, Frédéric Osterrath, & Gilles Boulianne, Centre de recherche informatique de Montréal (CRIM)

Students

Sergio Poo Hernandez, PhD student in Computing Science
Matthew Kelley, PhD student in Linguistics
Rameel Sethi, MA student in Computing Science
Noah Weninger, Undergraduate Research Assistant, Computing Science
Shelby Carleton, Undergraduate Research Assistant, MLCS
Yourui Guo, Undergraduate Research Assistant, Computing Science

Funding Support (U of A)

KIAS Team Grant 2016
KIAS Cluster Grant 2017
Canadian Centre for Ethnomusicology
Hindle/Bulitko Computing Science Labs
Bioacoustic Unit (Biological Sciences)
Alberta Phonetics Laboratory (Linguistics)
Alberta Language Technology Lab (Linguistics)
University of Alberta Research Experience (UARE)

Funding Support (Other)

NVIDIA Corporation
Spatial Media Laboratory, University of Aizu, Japan
Compute Canada
Centre de recherche informatique de Montréal
SSHRC


Publications and Presentations

Ensslin, Astrid, Tejasvi Goorimoorthee, Shelby Carleton, Vadim Bulitko, and Sergio Poo Hernandez (2017), “Deep Learning for Speech Accent Detection in Videogames,” in Proceedings of AIIDE/EXAG (Experimental AI in Games) 4, ed. Mike Cook et al., October 5-9, 2017, University of Utah.

Frishkopf, Michael (2017), “Towards an Extensible Global Jukebox: Deep Learning for Cantometrics Coding,” for the panel “The Global Jukebox: Science, Humanism and Cultural Equity” (chair: Anna Wood, Association for Cultural Equity), Society for Ethnomusicology annual meeting, Denver, 2017.


Kinds of Supervised Learning