How Artificial Intelligence Could Transform Health Care

Why UCSF’s Robert Wachter is optimistic the new technology will deliver on its promise.

By Victoria Colliver

Health care has historically been slow to adopt new technologies that involve wholesale changes to the nature of the work. Witness the slow and checkered roll-out of electronic health records and the utter failure of prior efforts to implement artificial intelligence tools, such as IBM’s vaunted but ultimately doomed experience with Watson Health.

But in a commentary released in JAMA on the one-year anniversary of the public launch of ChatGPT, Robert Wachter, MD, chair of the UC San Francisco Department of Medicine, is bullish on the potential of new generative artificial intelligence tools to transform the health care environment in a way previous technologies could not.

In the piece, published Nov. 30, 2023, Wachter and co-author Erik Brynjolfsson, PhD, director of the Digital Economy Lab and a senior fellow at the Institute for Human-Centered AI, both at Stanford University, argue that generative AI – AI that can produce high-quality text, images and other content distinct from the data it's been trained on – has unique properties that will likely shorten the usual lag time between promise and results, leading to productivity gains instead of logjams.

Wachter has long chronicled the challenges of health information technology and is the author of The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age.

What is it about the health care industry that has made it slow to embrace what you refer to in the article as “general purpose” technologies, or ones that can influence systems by performing a broad range of tasks?

In 1993, my co-author Erik Brynjolfsson coined the term “the productivity paradox of information technology,” referring to the nearly universal painful experience of industries as they try to adopt so-called general-purpose technologies, ones that broadly change the nature of the work across an entire organization. The paradox is that, despite the hype and best intentions, many years, sometimes decades, go by, with no significant gains in productivity. That’s the bad news. The good news is that, if the technologies are any good, eventually the paradox is overcome, with massive gains in productivity and often quality and customer experience. Examples include electricity, electric motors, automobiles, computers, and the Internet.

Health care has been a laggard in embracing general-purpose technologies until recently. In 2008, fewer than one in 10 U.S. hospitals had an electronic health record (EHR).

Why were we so late to the digital dance? Lots of reasons: misaligned incentives (the hospital or doctor would have to pay for the computer, but some of the economic benefit would go to the insurance company), complexity, privacy regulations, and a general resistance to change. Finally, beginning in about 2010, health care did begin to digitize the record. Now fewer than one in 10 hospitals lack an EHR, which sets the stage for today's AI.

Why do you think genAI has the power to overcome the curse of the “productivity paradox”?

The good news about the productivity paradox of IT is that, if the technology is any good, the paradox is ultimately resolved. So, first, the technology needs to get better. Second, the system has to change the way work is done to take advantage of these new tools.

While health care is a notoriously hard place for digital transformation, genAI has some unique attributes that should make it easier to deliver on its promise. First off, it's relatively easy to use. And unlike EHR adoption, it doesn't require a bunch of new hardware or wholesale changes in the way work is done, since doctors, nurses, and to some extent patients are already doing much of their health care-related work on the computer.

Probably most importantly, the health care ecosystem is better prepared for genAI than it would have been five or 10 years ago. We're all used to working with digital data and systems. It's easier to plug in third-party software tools than it used to be. Pressures on health care systems to deliver high-quality, safe, equitable care at a lower cost are accelerating, and there are shortages of nearly all kinds of clinical and non-clinical personnel. It's easy to see how genAI could help existing health care organizations meet their clinical and business needs.

Finally, those of us in leadership positions in health care are less naïve than we used to be about what it takes to integrate digital tools into our work, and good organizations, like UCSF Health, have trained leaders and created governance structures to help smooth the way to successful implementation.

How do you see genAI being used first in health care?

The early efforts in health care AI, from the 1960s through the 1980s, all failed miserably, in part because the systems weren't great, but mostly because the developers chose to tackle the hardest problem: replacing the doctor's brain as the engine for diagnosis.

Today, most of the players in the genAI field have learned that lesson. The early gains will be in areas of administrative friction — helping patients schedule appointments, refill medications, find a doctor, and get answers to some of their questions.

For doctors and health care systems, genAI will help create clinical notes, prior authorization requests to insurance companies, and letters to patients and other doctors. It will also summarize complex patient records. There will be some early work on diagnosis, but largely in the form of suggesting possible diagnoses rather than replacing physicians. The stakes are simply too high, as are the consequences of being wrong.

What are the roadblocks that may prevent this new technology from being adopted or used successfully?

Generative AI must continue to improve, particularly as the stakes grow. The good news is that, even in the past year, there have been significant improvements. While integrating AI into EHR systems is easier than it used to be, it’s still not as easy as it needs to be. AI will be expensive, and health care systems will need to find the money to invest, which they will do if they see a return on that investment down the road.

The potential labor-management tensions around AI, which recent strikes in the entertainment and automobile industries have highlighted, will also need to be navigated. But the labor shortages in health care and the high levels of burnout will dampen some of that pushback.

Finally, as AI enters more clinical realms, we need to figure out how to develop systems in which doctors and nurses can work collaboratively with the technology – trusting it when that trust is merited, but not falling asleep at the metaphorical wheel.

What should be done to help pave the way for success for this new technology?

Clearly, there need to be some regulations that establish guardrails for genAI, particularly in high-stakes endeavors like clinical medicine. How to do this effectively and efficiently is a daunting problem, particularly for general-purpose technologies.

It’s one thing to regulate a new drug, or even a specific AI algorithm for reading pathology slides. It’s another to regulate AI that provides advice or predictions being used by the entire system of care – particularly since the AI you approved yesterday might evolve to give different answers tomorrow.