The Bridge2AI-Voice Consortium, the NIH-funded Voice as a Biomarker of Health study co-led by experts from Weill Cornell Medicine’s Englander Institute for Precision Medicine (EIPM) and the USF Health Morsani College of Medicine in Tampa, recently held a first-of-its-kind gathering of experts from across industries to explore the impact voice can have on health care.
The project, one of four components of the NIH’s Bridge2AI program, also includes lead investigators from 10 other universities. Together, this collective is leading a broader discussion of how AI-driven voice analysis can and should be used in health care.
“The event was really remarkable, providing an important forum for the discussion of research, as well as the ethics and legal implications of creating a framework for the appropriate development of these technologies to advance science, develop new treatments, and protect the rights of the patient,” said Olivier Elemento, Ph.D., Director of the EIPM.
The one-day interactive Voice AI Symposium took place on April 19 in Washington, D.C., and brought together experts in the field of voice biomarkers, including representatives of industry, startups, and academia, as well as researchers, patients, patient advocacy groups, and underserved and underrepresented communities. These experts presented research and discussed the impact voice can have on treating conditions as varied as lung cancer, Parkinson’s disease, heart disease, and mental health disorders.
Voice biomarkers are gaining greater interest in academia, among technology and pharmaceutical companies, and within governmental regulatory agencies, and they raise fascinating and difficult ethical and legal questions that remain unanswered. The Voice AI Symposium allowed for broader discussion on the use of AI in health care, the use of voice in medical diagnoses, the ethical considerations in collecting and sharing voice data, and more.
Machine learning models trained on this data could spot diseases by detecting changes in the human voice, which could give doctors a low-cost diagnostic tool to use alongside other clinical methods. The Voice AI Symposium kick-started this discussion to gain greater insight, create opportunities for more collaboration, and build strong safeguards while moving the technology forward.
Anaïs Rameau, M.D., an attending laryngologist at the Sean Parker Institute for the Voice, and an Assistant Professor of Otolaryngology at Weill Cornell Medical College, was excited about the mix of professionals at the symposium: “This conference was very unique because we were not just interacting with other academics. We have the NIH, of course, but also industry partners and patient representatives which brings talent and knowledge from many different perspectives. They are challenging us in new ways of thinking and communicating. I really enjoyed the conversations around bringing these new technologies more quickly, in a safe manner, to less advantaged communities.”
“What we learned, really, was that collaboration is the key for the future of Voice AI, to fuel the research and get better patient outcomes,” said Yaël Bensoussan, M.D., assistant professor in the Department of Otolaryngology – Head & Neck Surgery and director of the USF Health Voice Center in the USF Health Morsani College of Medicine. “We talked about ethics, we talked about trust, and we talked about the loyalty that we must have for our patients to protect them in this rapidly evolving field.”
Highlighting the importance of the patient perspective was a central element of the Symposium, and throughout the day patients and patient advocates spoke eloquently about their hopes for the future of this technology.
“I have both a personal and professional interest in this field,” said Charles Reavis, President of Dysphonia International. “I can see a tremendous amount of opportunity in utilizing AI to help physicians more quickly and accurately diagnose diseases. One day I hope everyone at risk for my condition will be able to use their cell phone to record their voice and send the data to their physicians, and have it analyzed by artificial intelligence to create a baseline of information to track the progression of disease. A continuous stream of voice data can then help a physician determine a more precise treatment regimen. The hurdles we need to overcome include patient and physician education about the potential of this technology, advances in AI and cell phone applications, reimbursement, and privacy concerns.”
Patient advocacy groups expressed strong support for the development of these technologies, but also stressed the importance of breaking down the barriers that prevent the sharing of data.
“The major reason conferences like this are important is to bring together people from different disciplines to share data, and to collaboratively create de-centralized science systems to share these data sets,” said Mr. Keith Comito, Executive Director of Lifespan.io. “The collaboration here will be incredibly important in moving the fields forward rapidly and responsibly. This is especially true for voice, which is remarkably predictive of a person’s overall health.”
The symposium included a poster session in which students shared relevant research projects across a range of topics, from the ethical use of AI and machine learning technologies to the biology of aging and the use of technology to capture, store and share voice data.
Ms. Reem Saleh from the Oregon Health & Science University presented the poster “Voice and AI: Being FAIR and CARE-ing as we build skills and develop our workforce.” She said, “My poster is addressing biases and ethics in AI. It’s very important that we’re aware of the biases that can arise and the concerns that people have with these models. There is an urgent need to educate audiences about AI model use, and we must always promote responsible and ethical behaviors around these emerging technologies.”
# # #