What is it about?

In healthcare, research on artificial intelligence increasingly focuses on applying predictive analytic techniques to clinical decision-making. Although artificial intelligence has shown promising results in cancer image recognition, triage service automation, and disease prognosis, its clinical value remains largely unaddressed, and there is limited understanding of how some of these algorithms work. Despite the known risks of using artificial intelligence in healthcare, there is no clear framework for evaluating the predictive algorithms now being commercially deployed across the healthcare industry. To ensure patient safety, regulatory authorities should require that proposed algorithms meet accepted standards of clinical benefit, just as they do for therapeutics and predictive biomarkers. In this article, we offer a framework for evaluating predictive algorithms. Although not exhaustive, these criteria can improve the quality of predictive algorithms and help ensure that they genuinely improve clinical outcomes.


Why is it important?

The study can serve as a guideline for evaluating healthcare AI systems.

Perspectives

I believe this article offers a framework for evaluating predictive algorithms that can enhance the quality of healthcare AI systems and help ensure that these algorithms effectively improve clinical outcomes.

Avishek Choudhury
West Virginia University

Read the Original

This page is a summary of: A framework for safeguarding artificial intelligence systems within healthcare, British Journal of Healthcare Management, August 2019, Mark Allen Group, DOI: 10.12968/bjhc.2019.0066.
