When Can We Kick (Some) Humans "Out of the Loop"? An Examination of the Use of AI in Medical Imaging for Lumbar Spinal Stenosis

Asian Bioethics Review 17 (1):207-223 (2025)

Abstract

Artificial intelligence (AI) has attracted increasing attention, both positive and negative. Its potential applications in healthcare are manifold and revolutionary, and within the realm of medical imaging and radiology (the focus of this paper), significant gains in accuracy and speed, as well as significant savings in cost, stand to be realized through the adoption of this technology. Because of its novelty, a norm of keeping humans "in the loop" wherever AI mechanisms are deployed has become synonymous with good ethical practice in some circles. It has been argued that keeping humans "in the loop" is important for reasons of safety, accountability, and the maintenance of institutional trust. However, as this paper's case study of machine learning for the detection of lumbar spinal stenosis (LSS) reveals, there are scenarios in which insisting on keeping humans in the loop (in other words, resisting automation) seems unwarranted and could lead us to miss very real and important opportunities in healthcare, particularly in low-resource settings. It is important to acknowledge these opportunity costs of resisting automation in such contexts, where better options may be unavailable. Drawing on an AI model based on convolutional neural networks, developed by a team of researchers at the NUH/NUS medical school in Singapore for the automated detection and classification of lumbar spinal canal, lateral recess, and neural foraminal narrowing in MRI scans of the spine to diagnose LSS, we aim to demonstrate that where certain criteria hold (e.g., the AI is as accurate as or better than human experts, the risks in the event of an error are low, the gain in wellbeing is significant, and the task being automated is not essentially or importantly human), it is both morally permissible and even desirable to kick the humans out of the loop.


Links

PhilArchive

Similar books and articles

Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans. Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.


Author's Profile

Kathryn Muyskens
Yale-NUS College