
Artificial Intelligence Could Improve Medical Practice -- But Only If Done Right

-- Yes, AI can help, if clinicians are consulted during its creation
[Image: A computer rendering of an android physician looking at a tablet in an operating room]

    Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

I don't know about you, but right now I'm worried that there is someone out there figuring out how to use AI to make healthcare providers work harder -- not better.

And most of all, I'm worried that that "someone" is not us.

I'm worried that there are many forces out there, some of them aligned, some of them working independently, that are taking a look at artificial intelligence and saying, "This is the answer! This is how we are finally going to fix the healthcare system." But I'm concerned that it's not us.

We've seen this happen before, with seemingly well-intentioned people who feel they know the best way to do things telling those of us on the front lines how to do healthcare. Somebody tries to tell us what medicines we can prescribe, which treatments and procedures will be covered for our patients, how to write a note in the electronic medical record, or what we are mandated to screen for.

The Art of Medicine

All those years ago, when we learned how to write a progress note in medical school, we were taught by people who were never really worried about creating a billing-compliant document. My greatest mentors were those who were terrific clinicians, brilliant diagnosticians, and caring and compassionate people -- people who weren't interested in committing fraud but were interested in taking care of patients.

They saw this process that we went through -- the history, the physical exam, and the compilation of data -- as a work of art. It built on our stored knowledge, our collective memories, our knowledge of the literature, and so much more, and all of that went into actually taking care of people. It wasn't about just making sure that you had 10 organ systems reviewed, each of them with a large collection of itemized symptoms that were not really relevant to the patient's care on that day.

Sure, way back in medical school I remember being taught about what was involved in a "complete" review of systems, all the things you needed to go through to make sure you had not missed anything. But after a while, much of this falls by the wayside. The more experienced a clinician gets, the less reliant they are on these obscure and often irrelevant items.

The powers-that-be have insisted that we continue to include these, along with so much other ephemera and trivia to plump up our notes and keep someone other than ourselves, our colleagues, and our patients healthy and happy. Someone also decided we need to ask for a pain score at every office visit, screen for depression and suicidality at every office visit, ask about falls at every office visit, and click a bunch of boxes about social determinants of health, when no one has ever really shown that doing all this leads to fixing these problems in any direct way.

A Solution That Comes With Problems

Now something big and promising and actually very terrifying comes along: the looming promise of artificial intelligence in healthcare. Even as I write this, there are probably meetings taking place, conferences, think tanks, and lots of people coming up with bright ideas about how to use this to get more done in healthcare.

But those of us working in the day-to-day world of taking care of patients know that solutions like this, more often than not, create more problems than they solve. If an artificial intelligence system just generates a vast differential diagnosis and pushes an enormous number of suggestions at us for what to do next, we end up being obligated to cover those bases, to order those tests, to go down misleading paths in our patients' care.

Over the past few months, there have been a number of well-publicized examples of what I've heard described as "hallucitations" -- the creation of false data to support something that artificial intelligence says is true. There was even a case involving a lawsuit against an airline, where the lawyer for the plaintiff presented a written argument to the judge that was filled with false case references created by an artificial intelligence chatbot -- references the lawyer had allegedly asked the system to confirm were true, which it did with conviction.

But they weren't.

Seeing the Potential

As I've written before, I feel that there is truly amazing potential for these kinds of systems to do a lot of rote work and busywork and repetitive tasks to make our lives easier, not harder. And perhaps, as our radiologist colleagues are already doing in some fashion, it can serve as an aid to work alongside us, helping make sure we don't miss things, without overdoing it and creating excessive worry.

I can see a future where some intelligent systems like these work as our assistant, helping make sure that tasks get accomplished, that patients get reminded about things they are due for, that appropriate follow-up appointments are made and kept, that data are collected and collated appropriately, perhaps even with some helpful interpretation and suggestions thrown in for good measure.

But if we let the pharmaceutical companies, the insurance companies, the hospital systems, the creators of electronic health records and others, do this without the direct hands-on input and guidance of those who are ultimately going to use these tools, we risk plunging yet again into an even deeper morass of stuff that people think is a really good idea but ends up not actually helping anybody.

So, I wish that the folks who are working on this, at whatever tech companies out there, would be willing to reach out to primary care doctors, to surgeons, to radiologists, to ophthalmologists, to dermatologists, to every member of the healthcare team, to ask us how we think this stuff might help. Let them show us what it can do, and then let us suggest ways it could help, and let us warn them about what it shouldn't do.

If this were to happen, we would be more likely to end up with something that saves money, prevents burnout, and saves lives. As we learned in the first Terminator (1984) movie (and all subsequent ones), we always let things get out of hand. Skynet, the system they thought was going to fix everything, begins to learn rapidly and becomes self-aware at 2:14 a.m. on August 29, 1997. Who could have ever predicted that this would happen?

It's amazing to me how long ago that date looks from where we are now. And how worrisome it is that we still might not have learned anything.