Autonomous Adaptive Soundscape project
Autonomously Adaptive Soundscapes for Reducing Stress in Critically Ill Patients
An intelligent bio-algorithmic system generating personalized therapeutic soundscapes for critically ill patients in the ICU, using machine learning and autonomic biofeedback, at the intersection of Critical Care, Machine Learning, and Sound Studies/Ethnomusicology.
High levels of stress and anxiety, associated with delirium and sleep deprivation, are common in critically ill patients and may compromise recovery and survival as well as increase the length and cost of hospital stays.1–3 The pharmacological approaches typically used to treat these conditions are expensive, of limited effectiveness, and carry potentially serious side effects. By contrast, music and sound therapies for countering stress and its related effects are low-cost, non-invasive, and free of known side effects, and research has shown them to be highly effective when customized to the patient.4 Yet critically ill patients cannot be expected to communicate effectively with music therapists, who are themselves scarce, frequently unavailable, and costly; linguistic or cultural differences between patient and therapist may further limit effectiveness.
Building on and integrating knowledge from four faculties (Arts, Science, Nursing, and Medicine and Dentistry), we propose to develop an innovative Autonomous Adaptive Soundscape (AAS): an intelligent bio-algorithmic system that generates therapeutic soundscapes for critically ill patients, using machine learning and biofeedback to induce relaxation, improve sleep, and reduce agitation, anxiety, and delirium. The AAS seeks to optimize the patient’s sonic environment by dynamically selecting, tuning, and mixing files from an audio library encompassing a wide range of recordings (natural, musical, and synthetic). A reinforcement learning approach5 will guide the search of the soundscape space, based on autonomic biosignals indicating the patient’s current state, thereby delivering a customized soundscape to the patient. No conscious, active engagement with the system will be required of the patient. System design entails close collaboration among researchers in music, computer science, and the health sciences. Our ultimate objective is a system suitable for the Intensive Care Unit (ICU): one that is highly effective, always available, simple to operate, minimally intrusive, and low risk. Within the short term of the Seed Grant we aim to develop, test, and evaluate a prototype system with healthy subjects, train highly qualified personnel (HQP), and disseminate results. We are confident that success in this pilot project will greatly enhance our likelihood of success in relevant Tri-agency competitions for a larger grant sufficient to conduct more extensive, longer-term, Patient-Oriented Research in the ICU. The proposed system also promises enormous potential beyond the ICU, as stress, anxiety, and insomnia are pervasive social problems.
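The biofeedback-driven selection loop described above can be illustrated with a minimal reinforcement learning sketch. The clip names, the epsilon-greedy strategy, and the reward signal (e.g., derived from a drop in heart rate or a rise in high-frequency HRV power) are hypothetical illustrations for exposition, not the project's final design.

```python
import random

class SoundscapeBandit:
    """Minimal epsilon-greedy sketch of biofeedback-driven soundscape selection.

    Each "arm" is an audio clip from the library; the reward is a hypothetical
    relaxation signal computed from the patient's autonomic biosignals.
    """

    def __init__(self, clips, epsilon=0.1):
        self.clips = list(clips)
        self.epsilon = epsilon                      # exploration probability
        self.counts = {c: 0 for c in self.clips}    # plays per clip
        self.values = {c: 0.0 for c in self.clips}  # running mean reward per clip

    def select(self):
        # Explore a random clip with probability epsilon; otherwise play
        # the clip with the best observed mean relaxation reward.
        if random.random() < self.epsilon:
            return random.choice(self.clips)
        return max(self.clips, key=lambda c: self.values[c])

    def update(self, clip, reward):
        # Incremental mean update after observing the latest reward.
        self.counts[clip] += 1
        self.values[clip] += (reward - self.values[clip]) / self.counts[clip]
```

A full system would replace the simple bandit with a contextual or temporal-difference learner that also conditions on the patient's current biosignal state, but the explore/exploit loop is the same.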
Recent evidence-based research supports the use of music and sound for sleep,6 and has led to the specification of sonic criteria (rhythm, pitch, frequency, volume, genre, and duration) suitable for relaxation.7 However, this research has not yet been operationalized in autonomous adaptive soundscapes, as we propose to do.
Research methods and design bring together three broad disciplinary areas (music, computer science, and health sciences) across four interrelated and overlapping phases: 1. Engaging participants, 2. AAS development, 3. Healthy-subject testing, and 4. Assessment, dissemination, and grant application. The project team comprises a diverse group of academics, health professionals, graduate students, and volunteers serving as researchers, developers, advisors, and experimental subjects, with careful attention to the principles of equity, diversity, and inclusion (EDI), leading to balanced participation of women, Indigenous peoples, persons with disabilities, and racialized minorities.
We will employ a quasi-experimental design with pre- and post-intervention comparisons involving 20 healthy volunteers (10 men, 10 women): team members carefully selected for diversity by researchers trained in EDI concepts and strategies. Following informed consent, participants will receive 30-minute AAS interventions through headphones; efficacy will be assessed through pre- and post-intervention comparisons of a) high- and low-frequency components of Heart Rate Variability (HRV); b) vital signs (blood pressure, heart and respiratory rates, and skin conductance); and c) self-reported relaxation on a 0–10 numeric rating scale, together with anxiety measured by the six-item State-Trait Anxiety Inventory (STAI-6). We will also explore participants’ perceptions of system effectiveness, feasibility, and acceptability through a questionnaire combining Likert-scale and open-ended questions. We will synthesize the experimental and qualitative data, disseminate results through conferences and journal articles in relevant disciplines (e.g., Medical Ethnomusicology, Music Therapy, Sound Studies, Machine Learning, Nursing, Critical Care, Rehabilitation Medicine, and Integrative Health), and apply for larger Tri-Council grants within two years.
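The HRV outcome measure in (a) reduces to comparing spectral band powers of the heart-rate signal. A minimal sketch, assuming a precomputed power spectral density on a uniform frequency grid (how the PSD is estimated is left open here), using the standard short-term HRV band conventions (LF 0.04–0.15 Hz, HF 0.15–0.40 Hz):

```python
def band_power(psd, lo, hi):
    # Sum spectral power over bins whose frequency falls in [lo, hi).
    # psd: list of (frequency_hz, power) pairs on a uniform frequency grid.
    if len(psd) < 2:
        raise ValueError("need at least two PSD bins")
    df = psd[1][0] - psd[0][0]  # uniform bin width in Hz
    return sum(p * df for f, p in psd if lo <= f < hi)

def lf_hf_ratio(psd):
    # Standard short-term HRV bands: LF 0.04-0.15 Hz, HF 0.15-0.40 Hz.
    # A higher ratio is conventionally read as greater sympathetic dominance.
    lf = band_power(psd, 0.04, 0.15)
    hf = band_power(psd, 0.15, 0.40)
    return lf / hf
```

Pre- vs post-intervention comparison then amounts to computing this ratio on PSDs estimated from the RR-interval series before and after each session.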
Monthly meetings open to all project participants will follow a non-hierarchical consensus model8 to build a tightly knit, inclusive, cooperative research community, to encourage active participation, and to open critical discussion, serving both ethical and scientific imperatives. We will conduct comprehensive cumulative and gender-subgroup analyses to reveal potential gender-specific experiences and recommendations. We will report on progress at a concluding on-campus seminar, to which all participants will be invited, to present and discuss results and to solicit suggestions for future directions. Periodic surveys sent to investigators will continue to track project progress for the following three years.
Our methodology explicitly incorporates EDI principles and addresses Canada’s intersectional policy, Gender-Based Analysis Plus (GBA+), in all respects. First, our team-formation strategy has helped, and will continue to help, ensure team balance in gender, race, ethnicity, religion, age, and disability. Second, the AAS technology is designed to adapt across social identities, including gender, linguistic capacities, culture, and physical limitations, through reinforcement learning and the diversity of the audio library. Ultimately, the project seeks to advance health equity through inclusive Patient-Oriented Research toward an adaptive sound-therapeutic tool responsive to each patient’s unique identity, state, and needs. This initiative closely aligns with the University of Alberta’s Strategic Plan, “For the Public Good,” by building bridges and developing interdisciplinary synergies among different research areas, faculties, and units; by training graduate students; and by engaging the university’s community in Patient-Oriented Research toward better health care for a diverse public. It is relevant to three University Signature Areas: Precision Health; Intersections of Gender; and the recently approved AI and Society. Finally, it is highly relevant to four local research units: the Integrative Health Institute (IHI), the Alberta Machine Intelligence Institute (AMII), the Sound Studies Institute (SSI), and the Canadian Centre for Ethnomusicology (CCE).
The team integrates members of various ranks and statuses, including UofA professors, students, and former students in multiple disciplines. Michael Frishkopf (applicant): Professor of Music, Director of the Canadian Centre for Ethnomusicology, Adjunct Professor of Medicine, Lead for International Indigenous Medicine at the Integrative Health Institute, affiliated researcher at the Sound Studies Institute, and member of the Precision Health Engagement Committee. Abram Hindle (co-applicant): Associate Professor of Computing Science and established researcher in data science applied to software engineering, applying statistics, AI, and machine learning to software productivity, software development practices, software performance, music information retrieval, and ECG data. Elizabeth Papathanassoglou (co-applicant): Professor of Nursing and Scientific Director, Neurosciences, Rehabilitation & Vision Strategic Clinical Network, Alberta Health Services, with extensive experience in critical care research focusing on inflammation, stress, and patient experience, including the role of music; she pioneered research on stress and stress neuropeptides in critical illness. Demetrios James Kutsogiannis (co-applicant): Professor in the Department of Critical Care Medicine and Adjunct Professor in the School of Public Health, practicing clinician-researcher in neurocritical and general critical care medicine at the University of Alberta Hospital, and Director of the Critical Care Research Group, Royal Alexandra Hospital. Martha Steenstrup (team member): Adjunct Professor of Computing Science, whose research focuses on asynchronous, distributed, adaptive algorithms (including trial-and-error learning algorithms) for control, with formal training in computer science (PhD), mathematics, and music composition.
Yourui Guo (team member): graduate student in Computing Science (UofA) with expertise in electroacoustic music, pursuing an MSc under the supervision of Hindle, Frishkopf, and Nathan Sturtevant (Computing Science), with a thesis project centered on the AAS. Tiffany Sparrow Brulotte (team member): musician and accredited music therapist experienced with diverse populations, specializing in trauma release and neurological rehabilitation in medical settings, including ICU research with Papathanassoglou; completed an MA (UofA) in Medical Ethnomusicology. Yuluan Wang (team member): MSc in Rehabilitation Medicine (UofA), with research on the role of music in inducing sleep; currently studying Medicine (UofA). Greg Mulyk (team member): composer (MMus, UofA), sound designer, and programmer specializing in music for visual media; developed manually operated relaxation-soundscape software, a precursor to the proposed AAS. Michael Cohen (team member): Professor of Computer Science, University of Aizu (Japan), whose research centers on human–computer interaction, interactive and immersive multimedia, binaural and spatial hearing, virtual and mixed reality, and signal processing. Other team members, selected as subjects or advisors, will be fully integrated into the research process, with careful attention to equity, diversity, and inclusion.