Rapidly Accelerating Use of Artificial Intelligence and Robotics Demands Ethical Analysis

By Paul B. Hofmann, DrPH, LFACHE

Soon after the coronavirus struck China, a pediatrician and infectious disease specialist was interviewed on the radio to explain the application of artificial intelligence (AI) and defend the use of robotics in caring for patients. The reporter, from San Francisco–based KCBS, expressed concern that these technologies could depersonalize patient care.

The specialist responded by describing how the use of both AI and robotics contributed to better care for infected patients, promoted a safer environment for other patients as well as staff members, and benefited the broader community. He acknowledged the importance of human interaction with patients, but he also noted that hospital-acquired infections such as methicillin-resistant Staphylococcus aureus (MRSA) remain a serious problem, affecting thousands of patients daily, and that robots can reduce the person-to-person contact through which such infections spread. In addition, he emphasized that robotic devices let specialists communicate directly with patients via screens. And as we know, telehealth has dramatically expanded the provision of clinical expertise to facilities in rural communities lacking adequate resources.

The reporter then asked how patients feel about interacting with robots. The specialist said there have been very few studies dealing with patients’ attitudes. Yet at least one company’s experience suggests patients are receptive to having virtual visits with clinicians: Teladoc had more than 50% growth in mental health virtual visits in the third quarter of 2019, and the company’s total virtual visits grew 45% to 928,000 compared to the same period for the previous year (Teladoc Health, 2019).

The proliferation of AI and robotics is inescapable, so how can the associated ethical challenges be identified and addressed? It was only last year that the FDA released version 1.0 of its software precertification working model, an initial framework for testing AI and machine learning technology. The agency noted that because the technology is advancing rapidly, the health information technology sector must move quickly to ensure the safety of AI and machine learning in practical applications.

A 2019 Forbes Insights article began by stating:

The ethical guidelines laid out in the Hippocratic Oath nearly 2,500 years ago are about to collide with 21st century artificial intelligence.  

AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict that healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. [Emphasis added]

Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy? (Insights Team, 2019)

Researchers led by a team at the Alan Turing Institute in London reviewed more than 7,000 studies suggesting that AI could outperform clinicians in reading X-rays or CT scans, and they concluded that many of the studies exaggerated their claims. The authors stated, “The danger is that public and commercial appetite for healthcare AI outpaces the development of a rigorous evidence base to support this comparatively young field” (Nagendran et al., 2020).

In addition, New York University professor Meredith Broussard expresses concern about “techno-chauvinism”—the flawed assumption that technology is always the superior, more effective solution to any problem. Author of Artificial Unintelligence: How Computers Misunderstand the World, she says “automated systems discriminate by default” (Broussard, 2018).

Although the AMA issued its first guidelines for developing, using, and regulating AI in June 2018, the content revealed how many questions remain unanswered, reflecting appropriate concerns about ensuring patient consent and preserving confidentiality.

Recommendations

At a minimum, next steps should include:

  1. Capitalize on the work of organizations that have been involved in the use of AI and robotics for years, particularly academic medical centers.
  2. Support clinical trials to build a more robust data set on the efficacy and general cost-effectiveness of AI and robotics.
  3. Engage patients and their family members by soliciting feedback on the advantages and shortcomings of these technologies. When I worked at Massachusetts General Hospital’s Laboratory of Computer Science in the late 1960s, where a computer application was first used to take initial patient histories, my colleagues and I were surprised to learn that patients were often less inhibited in disclosing sensitive information to the application than in speaking with a clinician.
  4. Conduct further studies on the preliminary use of AI-based algorithms, which may prove more effective in performing ethics consults by avoiding unconscious human biases.
  5. Benefit from the advice of professional groups, such as The Hastings Center, which published “How Bioethics Can Shape AI and Machine Learning” in The Hastings Center Report (Nabi, 2018), and the AMA, which devoted a major portion of a 2019 AMA Journal of Ethics issue to the “Ethical Dimensions of Using AI in Health Care” (Rigby, 2019).

Alan Cossitt, a board-certified hospital chaplain who spent 25 years developing various technologies, including one of the first commercial neural networks, proposes that the ethical analysis of technology be done by a technology ethics committee. In an article published this year by The Hastings Center, he states, “This committee would not replace the clinical ethics committees or [institutional review boards] but would work with them as needed” (Cossitt, 2020). Among the questions a technology ethics committee might be asked to address, he includes:

> Should we use this? This is the first question to ask in considering any new technology. In other words, is the new app or algorithm ultimately beneficial to patients and clinicians?

> For a predictive algorithm, what type of patient consent is ethical? How can consent be gathered? Should opt-in or opt-out be the default?

> Is an algorithm that measures patient health biased?

> Who should have access to AI-generated data and patient identities? When and under what conditions?

> Does a project designed to help patients change unhealthy behaviors—one that uses psychological targeting (for example, “extracting people’s psychological profiles from their digital footprints”)—respect patient autonomy?

> Does a project using iPhone® apps raise health equity concerns, as the advantages are not equally available to low-income and high-income patients? Do the phone’s surveillance capabilities, combined with the data the app gathers, put patients at risk?

> Under what circumstances is tracking a staff member’s location within a hospital ethical and not just legal?

> How does the hospital or health care system detect unintended consequences of a technology? How should the organization respond? (Cossitt, 2020)
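Cossitt’s question about bias in an algorithm that measures patient health can be made concrete. The minimal sketch below, written in Python with entirely hypothetical records and a hypothetical disparity threshold, shows one elementary form such an audit might take: comparing a risk model’s false-negative rate across patient groups and flagging a large gap for committee review. A real audit would rely on validated fairness metrics, proper statistics, and far larger samples.

```python
# Minimal sketch of one check a technology ethics committee might request:
# comparing a risk algorithm's false-negative rate across patient groups.
# The records and the 0.25 disparity threshold are hypothetical.

from collections import defaultdict

# (group, algorithm_flagged_high_risk, actually_high_risk) -- illustrative only
records = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", True, True),
    ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
    ("group_b", True, False),
]

missed = defaultdict(int)     # truly high-risk patients the algorithm missed
high_risk = defaultdict(int)  # all truly high-risk patients, per group

for group, flagged, actual in records:
    if actual:
        high_risk[group] += 1
        if not flagged:
            missed[group] += 1

rates = {g: missed[g] / high_risk[g] for g in high_risk}
for group in sorted(rates):
    print(f"{group}: false-negative rate = {rates[group]:.2f}")

# A marked gap between groups is evidence the algorithm may be biased.
if max(rates.values()) - min(rates.values()) > 0.25:
    print("Disparity exceeds threshold: refer to the technology ethics committee.")
```

Even a check this simple makes the committee’s task operational: choosing the metric, setting the threshold, and deciding who reviews a flagged result are ethical judgments as much as technical ones.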

Ruben Amarasingham, president and CEO of Pieces Technologies, who has been writing and speaking about AI for over 10 years, emphasizes that a committee should evaluate the performance of AI systems and not rely solely on vendors to oversee their products or services (Amarasingham, 2018).

Former FDA Commissioner Scott Gottlieb, whose agency has approved more than 40 AI products in the past five years, stated that “the potential of digital health is nothing short of revolutionary” (Gottlieb, 2017). According to a recent KPMG study of five industries, including healthcare, 90% of leaders “believe AI technology will improve the patient experience and have the greatest impact on diagnostics (47 percent), electronic health records management (41 percent) and robotic tasks (40 percent)” (KPMG, 2020).

This view is clearly shared by Eric Topol, former chief of cardiovascular medicine at the Cleveland Clinic and now executive vice president of Scripps Research, who has been described as perhaps the most articulate advocate of the benefits of AI. His latest book is titled Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Topol, 2019).

Nonetheless, an Accenture survey conducted last year found that about half of U.S. doctors say they are anxious about using AI-powered software (Landi, 2019). Most recently, in referencing Topol’s book, authors from Monash University, Australia’s largest, wrote, “Far from facilitating a return to the ‘golden age of doctoring,’ the use of medical AI seems likely to further erode therapeutic relationships and threaten professional and patient satisfaction” (Sparrow & Hatherley, 2020).

Conclusion

Irrefutably, the rate at which AI and robotics are being introduced in hospitals and health systems is accelerating rapidly. As the researchers in Britain noted: “However, at present, many arguably exaggerated claims exist about equivalence with or superiority over clinicians, which presents a risk for patient safety and population health at the societal level, with AI algorithms applied in some cases to millions of patients” (Nagendran et al., 2020). To avoid misuse and unintended consequences of these technologies, we are ethically obligated to provide comprehensive oversight of their costs and benefits for patients, staff, and organizations to ensure all stakeholders are well served.

Paul Hofmann is a California-based healthcare consultant with extensive experience in clinical and organizational ethics and a former CEO of Stanford and Emory University Hospitals.

References

Amarasingham, R. (2018, March 5–9). Why clinical augmentation is necessary for healthcare AI [Conference session]. HIMSS Machine Learning & AI for Healthcare Conference, Las Vegas, NV. https://www.healthcaremachinelearningai.com/las-vegas/2018/session/why-clinical-augmentation-necessary-healthcare-ai

Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.

Cossitt, A. (2020, February 5). Why healthcare organizations need technology ethics committees. The Hastings Center. https://www.thehastingscenter.org/why-health-care-organizations-need-technology-ethics-committees

Gottlieb, S. (2017, July 27). FDA announces new steps to empower consumers and advance digital healthcare. U.S. Food and Drug Administration. https://www.fda.gov/news-events/fda-voices-perspectives-fda-leadership-and-experts/fda-announces-new-steps-empower-consumers-and-advance-digital-healthcare

Insights Team. (2019, February 11). Rethinking medical ethics. Forbes. https://www.forbes.com/sites/insights-intelai/2019/02/11/rethinking-medical-ethics

KPMG. (2020). Living in an AI world: Achievements and challenges in artificial intelligence across five industries. https://advisory.kpmg.us/articles/2020/living-in-an-ai-world.html

Landi, H. (2019, April 25). Nearly half of U.S. doctors say they are anxious about using AI-powered software: Survey. FierceHealthcare. https://www.fiercehealthcare.com/practices/nearly-half-u-s-doctors-say-they-are-anxious-about-using-ai-powered-software-survey

Nabi, J. (2018). How bioethics can shape AI and machine learning. The Hastings Center Report, 48(5), 10–13. https://doi.org/10.1002/hast.895

Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368, m689. https://doi.org/10.1136/bmj.m689

Rigby, M. (2019). Ethical dimensions of using AI in health care. AMA Journal of Ethics, 21(2), E121–E124. https://doi.org/10.1001/amajethics.2019.121

Sparrow, R., & Hatherley, J. (2020). High hopes for “deep medicine”? AI, economics, and the future of care. The Hastings Center Report, 50(1), 14–17. https://doi.org/10.1002/hast.1079

Teladoc Health. (2019, October 30). Teladoc Health reports third quarter 2019 results [Press release]. https://teladochealth.com/newsroom/press/release/third-quarter-2019-results

Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.