
Developing reliable AI tools for healthcare

Summary

This paper presents CoDoC (Complementarity-driven Deferral-to-Clinical Workflow), an AI system that learns when to rely on a predictive AI tool and when to defer to a clinician for the most accurate interpretation of medical images. Tested on a large de-identified UK mammography dataset, CoDoC reduced the number of false positives by 25% compared with commonly used clinical workflows, without missing any true positives. The system is open-sourced, requires only three inputs for each case in the training dataset, and is designed to work with any proprietary AI model without needing access to that model's inner workings or training data. In hypothetical simulations, CoDoC could also improve the triage of chest X-rays for onward tuberculosis testing, and it improved performance on medical imaging interpretation across different populations, clinical settings, imaging equipment, and disease types.

Q&As

What is CoDoC and how does it determine when predictive AI or a clinician is more accurate?
CoDoC is an AI system that learns when to rely on a predictive AI tool and when to defer to a clinician for the most accurate interpretation of medical images. It learns this decision from three inputs for each case in the training dataset: the predictive AI's confidence score, the clinician's interpretation of the medical image, and the ground truth of whether disease was present.
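The three-input training setup above can be sketched in code. This is a minimal illustrative sketch, not the actual CoDoC implementation: the binning scheme, function names, and the simple "trust the AI only where it matched or beat clinicians" rule are all assumptions made for the example. Each training case supplies exactly the three inputs the article describes.

```python
# Hypothetical sketch of a confidence-based deferral rule in the spirit of
# CoDoC. Each training case is a triple:
#   (ai_confidence, clinician_label, ground_truth)
# matching the three inputs described in the article. The binning approach
# and decision rule here are illustrative assumptions only.

def fit_deferral_rule(cases, n_bins=5, ai_threshold=0.5):
    """Learn, per confidence bin, whether to trust the AI or defer.

    Returns a list of booleans: True = rely on the AI in that bin,
    False = defer to the clinician."""
    bins = [{"ai_correct": 0, "clin_correct": 0, "n": 0} for _ in range(n_bins)]
    for conf, clin, truth in cases:
        b = min(int(conf * n_bins), n_bins - 1)
        bins[b]["ai_correct"] += int((conf >= ai_threshold) == truth)
        bins[b]["clin_correct"] += int(clin == truth)
        bins[b]["n"] += 1
    # Trust the AI only in bins where it was at least as accurate as clinicians.
    return [b["n"] > 0 and b["ai_correct"] >= b["clin_correct"] for b in bins]

def decide(rule, ai_confidence, ai_threshold=0.5):
    """For a new case, return ('ai', prediction) or ('defer', None)."""
    b = min(int(ai_confidence * len(rule)), len(rule) - 1)
    if rule[b]:
        return "ai", ai_confidence >= ai_threshold
    return "defer", None
```

Note that this sketch never opens up the predictive model itself: it consumes only the model's confidence scores, which mirrors the article's point that CoDoC works with proprietary AI models without access to their inner workings or training data.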

What criteria did the researchers set for developing CoDoC?
When developing CoDoC, the researchers set three criteria: non-machine-learning experts, such as healthcare providers, should be able to deploy the system and run it on a single computer; training should require a relatively small amount of data; and the system should be compatible with any proprietary AI model without needing access to the model's inner workings or the data it was trained on.

What are the potential benefits of using CoDoC to improve predictive AI tools?
The potential benefits of using CoDoC to improve predictive AI tools include increased accuracy and efficiency, as well as improved transparency and safety of AI models for the real world.

How did CoDoC perform in tests using real-world datasets?
In comprehensive testing with multiple real-world datasets, CoDoC reduced the number of false positives by 25% on a large de-identified UK mammography dataset, compared with commonly used clinical workflows, without missing any true positives. In hypothetical simulations, it also reduced the number of cases that needed to be read by a clinician by two thirds.

What are the steps required to bring CoDoC safely to real-world medical settings?
To bring CoDoC safely to real-world medical settings, healthcare providers and manufacturers will need to understand how clinicians interact differently with AI, and validate systems with specific medical AI tools and settings.

AI Comments

πŸ‘ This research is a great step forward in developing reliable AI tools for healthcare. It is encouraging to see a system designed to help improve predictive AI tools without requiring a redesign of the underlying AI model.

πŸ‘Ž It is concerning that this research was conducted entirely on de-identified, historical clinical data rather than in real-world clinical settings. The reported results may not transfer to actual healthcare practice.

AI Discussion

Me: It's about developing reliable AI tools for healthcare. The article discusses a system called CoDoC that determines when predictive AI is more accurate and when a human clinician should take over. It looks at scenarios where a clinician might have access to an AI tool to help interpret images, like a chest x-ray.

Friend: That's really interesting. What implications does it have?

Me: The implications are that AI can be used to improve the accuracy of healthcare decisions, and that it is possible to create an AI system that can determine when it should defer to a clinician. This could potentially help healthcare providers improve the accuracy of their decisions without having to modify the underlying AI tool itself. It could also lead to increased accuracy and efficiency in healthcare, as AI can be used to reduce the number of cases that need to be read by a clinician. However, for AI to be used responsibly in healthcare, healthcare providers and manufacturers need to understand how clinicians interact differently with AI and validate systems with specific medical AI tools and settings.

Technical terms

AI (Artificial Intelligence)
AI is a type of computer technology that is designed to simulate human intelligence and behavior.
Predictive AI
Predictive AI is a type of AI that is used to make predictions about future events or outcomes.
CoDoC (Complementarity-driven Deferral-to-Clinical Workflow)
CoDoC is an AI system that is designed to learn when to rely on predictive AI tools or defer to a clinician for the most accurate interpretation of medical images.
Ground Truth
Ground truth is the actual truth or reality of a situation, as opposed to what is perceived or believed.
Triage
Triage is the process of sorting patients according to the severity of their condition.

Similar articles

0.86836165 For chemists, the AI revolution has yet to happen

0.8641287 Hospital bosses love AI. Doctors and nurses are worried.

0.85808796 βš™ Crypto Miners Turn to AI

0.8573346 AI Can Flag Skin Cancer With Near-Perfect Accuracy

πŸ—³οΈ Do you like the summary? Please join our survey and vote on new features!