https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&feed=atom&action=history
Machine Learning for Sound Recognition - Revision history
2024-03-28T16:43:58Z
Revision history for this page on the wiki
MediaWiki 1.38.5
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=7899&oldid=prev
Michaelf: /* Introduction to Deep Learning for Sound Recognition */
2023-12-23T20:45:00Z
<p><span dir="auto"><span class="autocomment">Introduction to Deep Learning for Sound Recognition</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 14:45, 23 December 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l4">Line 4:</td>
<td colspan="2" class="diff-lineno">Line 4:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''A project of the Canadian Centre for Ethnomusicology'''</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''A project of the Canadian Centre for Ethnomusicology'''</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>== Introduction to <del style="font-weight: bold; text-decoration: none;">Deep </del>Learning for Sound Recognition ==</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>== Introduction to <ins style="font-weight: bold; text-decoration: none;">Machine </ins>Learning for Sound Recognition ==</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>How do we recognize sound? How do we identify sound's many components and attributes? How do animals do this? Can an algorithm carry out these tasks?</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>How do we recognize sound? How do we identify sound's many components and attributes? How do animals do this? Can an algorithm carry out these tasks?</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=7884&oldid=prev
Michaelf: /* Team Members */
2023-11-17T00:27:47Z
<p><span dir="auto"><span class="autocomment">Team Members</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 18:27, 16 November 2023</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l58">Line 58:</td>
<td colspan="2" class="diff-lineno">Line 58:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Scott Smallwood, Associate Professor of Music Composition, Department of Music<br></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Scott Smallwood, Associate Professor of Music Composition, Department of Music<br></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Benjamin V. Tucker, Associate Professor of Phonetics, Department of Linguistics<br></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Benjamin V. Tucker, Associate Professor of Phonetics, Department of Linguistics<br></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">Michael Dawson, Professor of Psychology</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Collaborators ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Collaborators ==</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=5347&oldid=prev
Michaelf: /* Publications and Presentations */
2019-08-12T15:16:06Z
<p><span dir="auto"><span class="autocomment">Publications and Presentations</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 09:16, 12 August 2019</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l102">Line 102:</td>
<td colspan="2" class="diff-lineno">Line 102:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Music and Ethnomusicology''':</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Music and Ethnomusicology''':</div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">Jayarathne, Isuru, Michael Cohen, Michael Frishkopf, and Gregory Mulyk. 2019. “Relaxation ‘Sweet Spot’ Exploration in Pantophonic Musical Soundscape Using Reinforcement Learning.” In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, 55–56. IUI ’19. New York, NY, USA: ACM. https://doi.org/10.1145/3308557.3308686.</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Michael Frishkopf, Yourui Guo, Noah Weninger, Matthew Kelley, Sergio Hernandez, Vadim Bulitko. Deep Learning for Sound Recognition. Peer reviewed roundtable accepted for the [http://www.indiana.edu/~semhome/2018/index.shtml 2018 Annual Meeting of the Society for Ethnomusicology], Albuquerque.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Michael Frishkopf, Yourui Guo, Noah Weninger, Matthew Kelley, Sergio Hernandez, Vadim Bulitko. Deep Learning for Sound Recognition. Peer reviewed roundtable accepted for the [http://www.indiana.edu/~semhome/2018/index.shtml 2018 Annual Meeting of the Society for Ethnomusicology], Albuquerque.</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=5344&oldid=prev
Michaelf: /* Publications and Presentations */
2019-08-11T03:53:10Z
<p><span dir="auto"><span class="autocomment">Publications and Presentations</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 21:53, 10 August 2019</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l106">Line 106:</td>
<td colspan="2" class="diff-lineno">Line 106:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Rameel Sethi, Noah Weninger, Abram Hindle, Vadim Bulitko, Michael Frishkopf (2018), "Training Deep Convolutional Networks with Unlimited Synthesis of Musical</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Rameel Sethi, Noah Weninger, Abram Hindle, Vadim Bulitko, Michael Frishkopf (2018), "Training Deep Convolutional Networks with Unlimited Synthesis of Musical</div></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Examples for Multiple Instrument Recognition." <del style="font-weight: bold; text-decoration: none;">Accepted for </del>[http://smcnetwork.org/ Sound & Music Computing], July 2018.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Examples for Multiple Instrument Recognition." <ins style="font-weight: bold; text-decoration: none;"> </ins>[http://smcnetwork.org/ Sound & Music Computing], July 2018.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Frishkopf, Michael, with research from Sergio Hernandez, supervised by Vadim Bulitko. Towards an Extensible Global Jukebox: Deep Learning for Cantometrics Coding; for panel, "The Global Jukebox: Science, Humanism and Cultural Equity Chair: Anna Wood, Association for Cultural Equity. Society for Ethnomusicology annual meeting, Denver, 2017. Presented again for Eighteenth International Symposium on Spatial Media, Aizu-Wakamatsu, Japan, March 3-4, 2018, with additional research results from Yourui Guo.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Frishkopf, Michael, with research from Sergio Hernandez, supervised by Vadim Bulitko. Towards an Extensible Global Jukebox: Deep Learning for Cantometrics Coding; for panel, "The Global Jukebox: Science, Humanism and Cultural Equity Chair: Anna Wood, Association for Cultural Equity. Society for Ethnomusicology annual meeting, Denver, 2017. Presented again for Eighteenth International Symposium on Spatial Media, Aizu-Wakamatsu, Japan, March 3-4, 2018, with additional research results from Yourui Guo.</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=5210&oldid=prev
Michaelf at 04:44, 4 April 2019
2019-04-04T04:44:59Z
<p></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 22:44, 3 April 2019</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>short URL: http://bit.ly/<del style="font-weight: bold; text-decoration: none;">cce-dlsr</del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>short URL: http://bit.ly/<ins style="font-weight: bold; text-decoration: none;">mlsr</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''A project of the Canadian Centre for Ethnomusicology'''</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''A project of the Canadian Centre for Ethnomusicology'''</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=5208&oldid=prev
Michaelf: Michaelf moved page Deep Learning for Sound Recognition to Machine Learning for Sound Recognition
2019-04-04T04:43:57Z
<p>Michaelf moved page <a href="/ccewiki/index.php/Deep_Learning_for_Sound_Recognition" class="mw-redirect" title="Deep Learning for Sound Recognition">Deep Learning for Sound Recognition</a> to <a href="/ccewiki/index.php/Machine_Learning_for_Sound_Recognition" title="Machine Learning for Sound Recognition">Machine Learning for Sound Recognition</a></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<tr class="diff-title" lang="en-GB">
<td colspan="1" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="1" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 22:43, 3 April 2019</td>
</tr><tr><td colspan="2" class="diff-notice" lang="en-GB"><div class="mw-diff-empty">(No difference)</div>
</td></tr></table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=5140&oldid=prev
Michaelf at 20:35, 26 March 2019
2019-03-26T20:35:09Z
<p></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 14:35, 26 March 2019</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>short URL: http://bit.ly/cce-dlsr</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>short URL: http://bit.ly/cce-dlsr</div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2" class="diff-side-deleted"></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">'''A project of the Canadian Centre for Ethnomusicology'''</ins></div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Introduction to Deep Learning for Sound Recognition ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== Introduction to Deep Learning for Sound Recognition ==</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=4502&oldid=prev
Michaelf: /* Introduction to Deep Learning for Sound Recognition */
2018-12-02T20:18:40Z
<p><span dir="auto"><span class="autocomment">Introduction to Deep Learning for Sound Recognition</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 14:18, 2 December 2018</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l12">Line 12:</td>
<td colspan="2" class="diff-lineno">Line 12:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Unlike the analogous challenges of recognizing components of visual “recordings” (photographs), our ability to recognize features of complex sound environments on audio recordings remains a rather mysterious process. More complex still are the psychoacoustic and cognitive processes by which we infer emotions conveyed by sounds - particularly in speech and music.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Unlike the analogous challenges of recognizing components of visual “recordings” (photographs), our ability to recognize features of complex sound environments on audio recordings remains a rather mysterious process. More complex still are the psychoacoustic and cognitive processes by which we infer emotions conveyed by sounds - particularly in speech and music.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable, digital recording devices of enormous capacity combined with a growing interest in sound across the humanities, social sciences, and sciences, now contribute vast collections of sound recordings, resulting in interest in sound within the realm of “big data.” To date, most of the sound collection data is not annotated and in all practicality, is therefore inaccessible for research.<del style="font-weight: bold; text-decoration: none;"><br><br></del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable, digital recording devices of enormous capacity combined with a growing interest in sound across the humanities, social sciences, and sciences, now contribute vast collections of sound recordings, resulting in interest in sound within the realm of “big data.” To date, most of the sound collection data is not annotated and in all practicality, is therefore inaccessible for research.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Computational recognition of sound, its types, sources, attributes, and components -- what may be called "machine audition", by analogy to the better-developed field of "machine vision" -- is therefore crucial for a wide array of fields, including ethnomusicology, music studies, sound studies, linguistics (especially phonetics), media studies, library and information science, and bioacoustics, to enable indexing, searching, retrieval, and regression of audio information. While expert human listeners may be able to recognize certain complex sound environments with ease, the process is slow: they listen in real time, and they must be trained to hear sonic events contrapuntally. Sound recognition algorithms are thus of great potential value as research ''tools''.<del style="font-weight: bold; text-decoration: none;"><br></del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Computational recognition of sound, its types, sources, attributes, and components -- what may be called "machine audition", by analogy to the better-developed field of "machine vision" -- is therefore crucial for a wide array of fields, including ethnomusicology, music studies, sound studies, linguistics (especially phonetics), media studies, library and information science, and bioacoustics, to enable indexing, searching, retrieval, and regression of audio information. 
While expert human listeners may be able to recognize certain complex sound environments with ease, the process is slow: they listen in real time, and they must be trained to hear sonic events contrapuntally. Sound recognition algorithms are thus of great potential value as research ''tools''.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=4501&oldid=prev
Michaelf: /* Introduction to Deep Learning for Sound Recognition */
2018-12-02T20:18:21Z
<p><span dir="auto"><span class="autocomment">Introduction to Deep Learning for Sound Recognition</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 14:18, 2 December 2018</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l18">Line 18:</td>
<td colspan="2" class="diff-lineno">Line 18:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;"><br></del>This project explores the use of artificial intelligence to address these problems. Specifically, we apply machine learning -- primarily, deep neural networks trained on large datasets -- to develop sound recognition algorithms. Such algorithms enable labelling of digital repositories, supporting interdisciplinary research on sound, and potentially contributing to our understanding of auditory perception and cognition. More theoretically, this research may help advance machine audition and machine learning.</div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>This project explores the use of artificial intelligence to address these problems. Specifically, we apply machine learning -- primarily, deep neural networks trained on large datasets -- to develop sound recognition algorithms. Such algorithms enable labelling of digital repositories, supporting interdisciplinary research on sound, and potentially contributing to our understanding of auditory perception and cognition. More theoretically, this research may help advance machine audition and machine learning.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== DLSR sub-projects ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== DLSR sub-projects ==</div></td></tr>
</table>
Michaelf
https://www.artsrn.ualberta.ca/ccewiki/index.php?title=Machine_Learning_for_Sound_Recognition&diff=4500&oldid=prev
Michaelf: /* Introduction to Deep Learning for Sound Recognition */
2018-12-02T20:18:10Z
<p><span dir="auto"><span class="autocomment">Introduction to Deep Learning for Sound Recognition</span></span></p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en-GB">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 14:18, 2 December 2018</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l10">Line 10:</td>
<td colspan="2" class="diff-lineno">Line 10:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Some recordings capture a single sound source: one instrument, speaker, or bird. Others gather multiple but coordinated sources -- a musical ensemble, or a conversation -- but such recordings often represent artificial situations. More typically, real field recordings resulting from ethnomusicological, linguistic, or bioacoustic fieldwork mix together a range of uncoordinated sound sources. The result is a total soundscape combining music, speech, and environmental sound in complex ways: music from multiple groups performing simultaneously, many speakers talking at once, or multiple environmental sound sources. Recordings of such soundscapes layer “signals” (sounds of research interest) with “noise” (unwanted sound, including anthropogenic sounds of crowds, highways, and factories; biogenic sounds of animals and plants; and environmental sounds of rain, wind, or thunder, along with sounds introduced by the recording process itself).</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Some recordings capture a single sound source: one instrument, speaker, or bird. Others gather multiple but coordinated sources -- a musical ensemble, or a conversation -- but such recordings often represent artificial situations. More typically, real field recordings resulting from ethnomusicological, linguistic, or bioacoustic fieldwork mix together a range of uncoordinated sound sources. 
The result is a total soundscape combining music, speech, and environmental sound in complex ways: music from multiple groups performing simultaneously, many speakers talking at once, or multiple environmental sound sources. Recordings of such soundscapes layer “signals” (sounds of research interest) with “noise” (unwanted sound, including anthropogenic sounds of crowds, highways, and factories; biogenic sounds of animals and plants; and environmental sounds of rain, wind, or thunder, along with sounds introduced by the recording process itself).</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Unlike the analogous challenge of recognizing components of visual “recordings” (photographs), the process by which we recognize features of complex sound environments on audio recordings remains rather mysterious. More complex still are the psychoacoustic and cognitive processes by which we infer emotions conveyed by sound -- particularly in speech and music.<del style="font-weight: bold; text-decoration: none;"><br><br></del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Unlike the analogous challenge of recognizing components of visual “recordings” (photographs), the process by which we recognize features of complex sound environments on audio recordings remains rather mysterious. More complex still are the psychoacoustic and cognitive processes by which we infer emotions conveyed by sound -- particularly in speech and music.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable digital recording devices of enormous capacity, combined with growing interest in sound across the humanities, social sciences, and sciences, now yields vast collections of sound recordings, bringing sound into the realm of “big data.” To date, most of this material is not annotated and is therefore, for all practical purposes, inaccessible for research.<br><br></div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In contrast to an earlier era of “small data” (largely the result of the limited capacity of expensive analog recorders), the advent of inexpensive, portable digital recording devices of enormous capacity, combined with growing interest in sound across the humanities, social sciences, and sciences, now yields vast collections of sound recordings, bringing sound into the realm of “big data.” To date, most of this material is not annotated and is therefore, for all practical purposes, inaccessible for research.<br><br></div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l18">Line 18:</td>
<td colspan="2" class="diff-lineno">Line 18:</td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>To the extent that such algorithms resemble biological processes of sound recognition in humans or other species, they may also cast light on the nature of hearing, contributing to the study of perception, psychoacoustics, and auditory cognition.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker" data-marker="−"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><br>This project explores the use of artificial intelligence to address these problems. Specifically, we apply machine learning -- primarily, deep neural networks trained on large datasets -- to develop sound recognition algorithms. Such algorithms enable labelling of digital repositories, supporting interdisciplinary research on sound, and potentially contributing to our understanding of auditory perception and cognition. More theoretically, this research may help advance machine audition and machine learning.<del style="font-weight: bold; text-decoration: none;"><br></del></div></td><td class="diff-marker" data-marker="+"></td><td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><br>This project explores the use of artificial intelligence to address these problems. Specifically, we apply machine learning -- primarily, deep neural networks trained on large datasets -- to develop sound recognition algorithms. Such algorithms enable labelling of digital repositories, supporting interdisciplinary research on sound, and potentially contributing to our understanding of auditory perception and cognition. More theoretically, this research may help advance machine audition and machine learning.</div></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br/></td></tr>
<tr><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== DLSR sub-projects ==</div></td><td class="diff-marker"></td><td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== DLSR sub-projects ==</div></td></tr>
</table>
Michaelf