McGill University


Researchers find design flaws and oversight issues in certain health apps, offer solutions for more effective tools

'Potential misdiagnosis is just a click away'
Published: 6 November 2024

AI-powered apps offering medical diagnoses at the click of a button are often limited by biased data and a lack of regulation, leading to inaccurate and unsafe health advice, a new study found.

McGill researchers presented symptom data from known medical cases to two popular, representative apps to see how well they diagnosed the conditions. While the apps sometimes gave correct diagnoses, they often failed to detect serious conditions, potentially delaying treatment, according to findings published in the Journal of Medical Internet Research.

The researchers identified two main issues with the health apps they studied: biased data and a lack of regulation.

Bias and the 'black box' phenomenon

The bias issue is known as the "garbage in, garbage out" problem.

"These apps often learn from skewed datasets that don't accurately reflect diverse populations," said Ma'n H. Zawati, lead author and Associate Professor in McGill's Department of Medicine.

Because the apps rely on data from smartphone users, they tend to exclude lower-income individuals. Race and ethnicity are also underrepresented in the data, said the authors. This creates a cycle where an app's assessments are based on a narrower group of users, leading to more biased results and potentially inaccurate medical advice.

While apps often include disclaimers stating that they do not provide medical advice, the authors argue that users, when they read these disclaimers at all, do not always interpret them as intended.

The second issue is the "black box" nature of AI systems, where the technology evolves with minimal human oversight. Zawati said this lack of transparency means even an app's developers may not fully understand how it reaches its conclusions.

"Without clear regulations, developers aren't held accountable, making doctors reluctant to recommend these tools. For users, this means a potential misdiagnosis is just a click away," said Zawati, who is also an Associate Member in McGill's Department of Equity, Ethics and Policy and the Faculty of Law, and Research Director of the Centre of Genomics and Policy in McGill's Department of Human Genetics.

Call for AI oversight

To overcome these limitations, developers can train apps on more diverse datasets, conduct regular audits to catch biases, enhance transparency so users and clinicians can understand how the algorithms work, and include more human oversight in the decision-making process, he suggested.
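One of the suggested safeguards, a regular bias audit, amounts to checking a model's accuracy separately for each demographic group rather than only in aggregate. The sketch below illustrates the idea in minimal Python; the records, group labels, and the `toy_predict` rule are hypothetical stand-ins, not data or methods from the study.

```python
# Minimal sketch of a per-subgroup accuracy audit for a diagnostic model.
# All data and the prediction rule here are illustrative assumptions.

def audit_by_group(records, predict):
    """Return diagnostic accuracy per demographic group."""
    stats = {}  # group -> (correct, total)
    for rec in records:
        correct, total = stats.get(rec["group"], (0, 0))
        hit = predict(rec["symptoms"]) == rec["diagnosis"]
        stats[rec["group"]] = (correct + int(hit), total + 1)
    return {group: correct / total for group, (correct, total) in stats.items()}

# Toy rule that implicitly learned only one group's presentation of a condition.
def toy_predict(symptoms):
    return "condition_x" if "classic_sign" in symptoms else "healthy"

records = [
    {"group": "majority", "symptoms": {"classic_sign"}, "diagnosis": "condition_x"},
    {"group": "majority", "symptoms": set(), "diagnosis": "healthy"},
    # The same condition may present atypically in an underrepresented group.
    {"group": "minority", "symptoms": {"atypical_sign"}, "diagnosis": "condition_x"},
    {"group": "minority", "symptoms": set(), "diagnosis": "healthy"},
]

print(audit_by_group(records, toy_predict))
# → {'majority': 1.0, 'minority': 0.5}
```

A gap between groups in such an audit would flag the model for retraining on broader data before deployment, which is the kind of routine check the authors recommend.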

"By prioritizing thoughtful design and rigorous oversight, AI-powered health apps have the potential to make health care more accessible to the public and become a valuable tool in clinical settings," Zawati said.

This research was funded by the Fonds de recherche du Québec.

About the study

The study, by Ma'n H. Zawati and Michael Lang, was published in the Journal of Medical Internet Research.
