How AI Can Aid Clinicians in Analyzing Medical Images



In recent years, AI has become a powerful tool for analyzing medical images. Thanks to advances in computing and the vast sets of medical data from which AI can learn, it has proven to be a valuable aid in reading X-rays, MRIs and CT scans and spotting patterns in them, enabling doctors to make better and faster decisions, especially in the diagnosis and treatment of life-threatening diseases like cancer. In some contexts, these AI tools even offer advantages over their human counterparts.

“AI systems can quickly process thousands of images and provide predictions much faster than human examiners,” says Onur Asan, an associate professor at Stevens Institute of Technology, whose research focuses on human-machine interaction in health care. “Unlike humans, AI does not get tired or lose focus over time.”

Yet many clinicians view AI with at least some degree of suspicion, largely because they are unsure how it arrives at its predictions, an issue known as the “black box” problem. “When clinicians don’t know how AI generates its predictions, they are less likely to trust it,” says Asan. “So we wanted to know if providing additional explanations could help clinicians and how different degrees of AI explainability influence diagnostic accuracy, as well as confidence in the system.”

Working with his doctoral student Olya Rezaeian and Assistant Professor Alparslan Emrah Bayrak of Lehigh University, Asan conducted a study of 28 oncologists and radiologists who used AI to analyze breast cancer images. Clinicians also received varying levels of explanation of the AI tool’s assessments. At the end, participants answered a series of questions designed to assess their confidence in the AI-generated assessment and the difficulty of the task.

The team found that AI did improve diagnostic accuracy for clinicians compared to the control group, but there were some interesting caveats.

The study found that providing deeper explanations did not necessarily produce more trust. “We found that greater explainability does not equate to more confidence,” says Asan. This is because additional or more complex explanations require clinicians to process more information, taking time and attention away from the image analysis itself. When explanations were more elaborate, clinicians took longer to make decisions, which decreased their overall performance.

“Processing more information adds a greater cognitive workload to clinicians. It also makes them more likely to make errors and possibly harm the patient,” says Asan. “You don’t want to add cognitive load to users by adding more tasks.”

Asan’s research also found that in some cases, clinicians placed too much trust in AI, which could lead to crucial image information being overlooked and harm patients. “If an AI system is not well designed and makes errors while users have high confidence in it, some clinicians may develop blind trust, believing that everything the AI suggests is true and not examining the results sufficiently,” says Asan.

The team presented their findings in two recent papers: “The impact of AI explanations on clinician confidence and diagnostic accuracy in breast cancer,” published in the journal Applied Ergonomics on November 1, and “Explainability and confidence of AI in clinical decision support systems: effects on confidence, diagnostic performance and cognitive load in breast cancer care,” published in the International Journal of Human-Computer Interaction on August 7.

Asan believes that AI will continue to be a useful assistant for clinicians in interpreting medical imaging, but such systems must be built thoughtfully. “Our results suggest that designers should exercise caution when integrating explanations into AI systems,” he says, so that they do not become too cumbersome to use. Additionally, he adds, users will need adequate training, since human oversight will still be required. “Clinicians using AI should receive training that focuses on interpreting AI results and not just relying on them.”

Ultimately, there should be a good balance between the ease of use and the usefulness of AI systems, Asan notes. “Research reveals that there are two main factors in whether a person will use any form of technology: perceived usefulness and perceived ease of use,” he says. “So if doctors think this tool is useful for doing their job and it’s easy to use, they will use it.”
