This is the first episode in our series Smarter health. Read more about the series here.
American health care is complex. Expensive. Hard to access.
Could artificial intelligence change that?
In the first episode in our series Smarter health, we explore the potential of AI in health care — from predicting patient risk, to diagnostics, to just helping physicians make better decisions.
Today, On Point: We consider whether AI’s potential can be realized in our financially motivated health care system.
Dr. Ziad Obermeyer, associate professor of health policy and management at the University of California, Berkeley School of Public Health. Emergency medicine physician. (@oziadias)
Richard Sharp, director of the biomedical ethics research program at the Mayo Clinic. (@MayoClinic)
MEGHNA CHAKRABARTI: I’m Meghna Chakrabarti. Welcome to an On Point special series: Smarter health: Artificial intelligence and the future of American health care.
CHAKRABARTI: Episode one, the digital caduceus. In the not so distant future, artificial intelligence and machine learning technologies could transform the health care you receive, whether you’re aware of it or not. Here are just a couple of examples. Dr. Vindell Washington is chief clinical officer at Verily Life Sciences, which is owned by Google’s parent company, Alphabet. Washington oversees the development of Onduo.
It’s a virtual care model for chronic illness. Technology that weaves together multiple streams of complex, daily medical data in order to guide and personalize health care decisions across entire patient populations.
VINDELL WASHINGTON [Tape]: You might have a blood pressure cuff reading, you may have a blood sugar reading, you may have some logging that you’ve done. So there’s mood logging that you can do with sort of a voice diary, etc., and they would all be sort of analyzed.
And the kind of research and work we do is much more around predicting undesired outcomes and making the right interventions with the right individuals to drive them to their best state of health.
CHAKRABARTI: And what about the diagnostic potential of artificial intelligence? Finale Doshi-Velez, assistant professor of computer science at Harvard University, says: Imagine being able to take out your smartphone and, with bio-monitoring and imaging, get an accurate diagnosis wherever you are.
FINALE DOSHI-VELEZ [Tape]: Identification of common pathogens is an application that is really moving forward, especially in resource limited areas.
CHAKRABARTI: Doshi-Velez says that’s a potential game changer in places where the nearest hospital may be hours away.
Americans spend more on health care than any other nation in the world. In 2021, health care costs in this country topped $4.3 trillion, according to the Centers for Medicare and Medicaid Services. Five years from now, that number is projected to balloon to $6 trillion. That’s more than the entire economies of Germany, Great Britain or Canada.
We’re spending 20% of the nation’s GDP on health care. But we’re not getting healthier in return. Average life expectancy in the United States has dropped to 77 years, five years shorter than in comparable countries. Dr. Kedar Mate, CEO of the non-profit Institute for Health Care Improvement, says U.S. health care is a system in dire need of reform.
KEDAR MATE [Tape]: I think of sort of three primary ways in which people, the public, think of health care quality today: Is my care accessible? Is it convenient for me to get to? Do I receive what I need? Is my care affordable? Am I going to get hit with a giant medical bill at the end of this care process? And is it effective? And on all of those three, you know, there’s potential for it to improve the quality of care. And there’s also the risk.
CHAKRABARTI: But regardless of those risks, the global AI health market is expected to soar. One industry analysis says the market could top $60 billion, a tenfold increase in the next five years. AI is advancing. And what might happen as it moves closer to health care’s holy grail: harnessing the predictive power of artificial intelligence? That horizon is still far off, but the early work is tantalizing.
Dr. Isaac Kohane is director of the informatics program at Boston Children’s Hospital. He gave us an example. There’s research showing that AI can detect evidence of abuse.
DR. ISAAC KOHANE [Tape]: It’s crazy. In 2009, for example, we had already published that we could detect domestic abuse just from the discharge diagnosis of patients. With not only high accuracy, but on average, two years before the health care system was aware of it.
CHAKRABARTI: Could AI and machine learning go further still and predict an illness before it happens? Jonathan Berent is founder of Nextsense, a Silicon Valley company developing a specialized earbud to detect anomalous brain activity, including the activity associated with epilepsy.
JONATHAN BERENT [Tape]: You know, the ML and AI is really about seizure prediction. So as we measure the sleep data at night, we can start to give that forecast of, you know, what is my day going to look like? Is this a high-risk day? Should I be driving or not? Should I be taking extra medicine?
CHAKRABARTI: At Cedars-Sinai Medical Center in Los Angeles, Dr. Sumeet Chugh says multiple teams are well on their way to designing AI systems to answer a key question about heart attacks, one of the biggest killers in the United States.
DR. SUMEET CHUGH [Tape]: Can we find better ways of predicting patients who are at higher risk of cardiac arrest?
CHAKRABARTI: And in oncology, Stacy Hurt, patient advocate and cancer survivor herself, says AI’s prodigious capacity for pattern recognition could provide patients a lifeline before they know they need one.
STACY HURT [Tape]: I think it’s really promising. You know, they’re using AI technology to detect disease patterns that could be predictive of colon cancer.
CHAKRABARTI: That’s the hope anyway. Some would call it hype. We spent four months reporting on where the true impact might lie, between the hope and the hype of AI and machine learning’s rapid expansion into health care.
We spoke on the record with approximately 30 experts across the country, including physicians, computer scientists, patient advocates, bioethicists and federal regulators. So for the next four Fridays in this special series, we’re going to talk about what smarter health really means.
Our episodes will explore AI’s true potential in health care, its ethical implications, the race to create an entirely new body of regulation, and how it might change what it means to be a doctor and a patient in America.
So today we’re going to focus on that potential of AI and machine learning in medicine. Dr. Ziad Obermeyer is an emergency medicine physician and distinguished associate professor of health policy and management at the University of California, Berkeley School of Public Health. And he joins us. Doctor Obermeyer, welcome to On Point.
DR. ZIAD OBERMEYER: Thank you so much for having me.
CHAKRABARTI: I first want to know what it is about the practice of medicine or even your personal experience as an emergency physician that made you think that there’s a place for AI and machine learning in health care.
OBERMEYER: I think my interest in this field came exactly from that practice, because when you’re working in the E.R., there are just so many decisions and the stakes are so high, and those decisions are incredibly difficult. If a patient comes in with a little bit of nausea or trouble breathing, that’s most likely to be something innocent. But it could also be a heart attack. So, you know, what do I do? Do I test them? Well, I often did. And the test came back negative, meaning that I exposed that patient to risks and costs of testing without giving them any benefit.
But should I have just sent them home instead with, like, a prescription? You know, a missed heart attack is a huge problem. It’s not just the most common cause of death in the U.S., but also the most common reason for malpractice in the emergency setting. And so medicine is full of these kinds of terrible choices. And I think AI has huge potential to help because we don’t always make the right choices in those high stakes settings.
CHAKRABARTI: So choices, some mistakes, missed opportunities. I mean, even in your own life, your own personal health care, there was like a misdiagnosis. Can you tell us that story?
OBERMEYER: Oh, sure. Well, I had just come to Berkeley, and it was a couple of days before the first class I was teaching. So I was feeling a little bit off. But I, you know, just chalked it up to butterflies in my stomach. It turned out that it was not butterflies in my stomach. It was appendicitis. And I missed that appendicitis for about four days until it actually ruptured. And when you train in emergency medicine, there’s a couple of things that you’re really never supposed to miss.
One of them is appendicitis. And yet I had missed it in myself for four days before I was able to go to the emergency department and get it diagnosed. So even when you have all the information in the world and, you know, reasonably good training, it’s still hard to make these kinds of diagnostic judgments and decisions.
CHAKRABARTI: Okay. So, you know, over the four months of reporting this series, we learned that while there’s a lot of AI currently in development, and the amount of money going into the research is growing, we’re still very far away from the idealized horizon that some people believe is possible with AI. But before we have to take our first break, Dr. Obermeyer, could you just give us, in a nutshell, why you think it’s so important for patients, for people, to understand what AI could potentially do to American health care?
OBERMEYER: I think the potential for AI in health care is huge. I think it can improve a lot of decisions, but I think there are also a lot of risks. And I’ve studied some of those risks, including but not limited to racial bias, and other kinds of problems that can be scaled up by algorithms. So it’s an incredibly difficult area with tradeoffs. And I think we all need to understand them, and be informed so we can make those tradeoffs together.
CHAKRABARTI: Well, this is the first episode of our special series, Smarter health, and we’re talking about the potential, and why so many people see so much potential of AI in health care. So we’ll talk through some more examples when we come back. And we’ll further discuss those trade-offs that Dr. Obermeyer just talked about.
CHAKRABARTI: Welcome back. I’m Meghna Chakrabarti. And this is the first episode of On Point’s special series Smarter health. I’m joined today by Dr. Ziad Obermeyer.
He’s a distinguished associate professor of health policy and management at the University of California, Berkeley. He’s also an ER physician, and he helped launch Nightingale Open Science, which we’ll talk about a little bit later.
Now, today, we’re examining the realistic potential of AI in American health care. Dr. Steven Lin is at Stanford University. And he says there are prediction models being used in, say, detecting skin cancer, brain cancer, colorectal cancer and heart arrhythmias, across a whole range of specialties where they can already outperform doctors.
DR. STEVEN LIN [Tape]: For example, in dermatology, in primary care, we have many companies and vendors now with deep learning algorithms powered by AI that can take photos of dermatological lesions on the skin of patients. And generate, with increasingly sophisticated accuracy, comparable or sometimes even more than dermatologists to help primary care providers diagnose skin conditions. And also provide the management recommendations associated with those conditions.
CHAKRABARTI: That’s Dr. Steven Lin at Stanford University. Dr. Obermeyer, I think we need to sort of establish a common set of definitions here. When we’re talking about the health care context, what exactly do we mean when we say AI?
OBERMEYER: It’s a complicated question to answer, because AI is so broad. But in general, what AI does is take in a complex set of data. So it could be images of someone’s skin, as Dr. Lin mentioned, and then outputs a guess as to what is going on in that picture.
And that guess is based on looking at millions and millions of pixels in those pictures and trying to link the patterns that exist in those pixel matrices to the outcomes that we care about, like skin cancer. So it’s all about pattern recognition.
CHAKRABARTI: Pattern recognition. Okay. So then how does that differ from another term we’ve encountered frequently, which is machine learning?
OBERMEYER: I think machine learning is maybe what the purists would call it, at least in its current incarnation. That’s generally the more technical term for the set of algorithms that we use to do that job.
CHAKRABARTI: Okay. So then tell us more about what you’re specifically developing here. We heard Dr. Lin talk about basically imaging kinds of uses for AI. You’re at work on something quite interesting regarding the potential for cardiac arrest. Can you tell us about that?
OBERMEYER: Yeah. So we’ve got a number of projects that look at cardiovascular risk in general. So as I mentioned, one of the things that we are interested in is, based on my own experience in the E.R., is helping emergency doctors diagnose heart attack better. So that situation, when a patient comes in with some symptom, do I test her or not?
We’re building algorithms that learn from thousands and thousands of prior test results, and try to deliver that information to a doctor in a usable form, while she’s working in the emergency room, in a way that’s going to help her make that decision better.
We wrote a paper on that task, and the paper looks good, but ultimately the proof is in the pudding. So we’re trying to roll that out into a randomized trial in collaboration with a large health care system called Providence, which is all up and down the West Coast.
So I think much like any new technology in the health care system, we need to have a very rigorous standard for what we adopt, and what we don’t. And I think that randomized trials are going to play an important role in helping us do that.
CHAKRABARTI: Okay. I want to understand this in more detail, though. So if, say, I came into your E.R. with a set of conditions that might lead a physician to think, Meghna may be having a heart attack. Where would the algorithm be employed?
OBERMEYER: That’s a great question, because part of the problem is that when doctors make that judgment of, Okay, this type of person is more likely to have a heart attack, and this type of person isn’t. That’s the first place that errors can creep in.
And so one of the huge value adds of the algorithm that we developed, as we saw when we looked at the data, is that it could precisely find the kinds of people that doctors dismissed. They didn’t even get an electrocardiogram, or basic laboratory studies on them, because they were under the radar. Those are the kinds of patients where AI can make a huge difference.
We’re not saying we need to test all of those patients, but we can home in on those needles in that haystack, and help doctors see them better.
CHAKRABARTI: Okay. So sort of better pinpointing who really needs the actual sort of biological or monitoring test to see if there’s a heart attack going on. And what data is the algorithm actually sort of crawling over and looking at?
OBERMEYER: So we basically took data on every single emergency visit over a period of many, many years. And we plugged all of that into the algorithm. The algorithm looks at every test that doctors decided to do and looks at the test results, but it also looks at people that doctors decided not to test and looks in the days and weeks after that visit to see who has a heart attack later, that was missed by the doctor initially.
So we want to learn from both the cases where doctors suspect heart attack, and also the cases where doctors don’t, because those are just as important.
CHAKRABARTI: Okay. So at the end of the day, the vision is this. Someone could come into an emergency room and the algorithm would assist a physician in saying, Yes, this person probably needs to have follow-up testing, or not.
OBERMEYER: I think of it more like a little angel sitting on your shoulder that’s nudging you in the right direction. So I think, you know, I’m sure you’ve talked to many people who suggest that we should not be in the process of replacing physicians.
We want to help physicians do their job. And so I think this algorithm is very much in that line of work, which is nudging physicians to just think about heart attack or to say, Well, you might want to test this patient because I know they have chest pain and I know they have high blood pressure.
But look, their blood pressure is really well-controlled over the past three years and they see their primary care doctor regularly. So you might not need to test this person, but ultimately it’s up to you. So the algorithm is just providing this information and helping to focus the doctor on the things that matter, but ultimately letting that doctor make her own decisions about what she wants to do.
CHAKRABARTI: You are an emergency room physician. Walk us through for a second how you would use this very technology. I mean, at what point in your thought process as a human physician do you think, Well, I’m going to need to leave a little bit of room to question the algorithm, or to listen to that angel on your shoulder, as you said.
Because ultimately, you’re right. Everybody we talked to, no matter where they are in this big field, we’re saying that the algorithms aren’t meant to replace the judgment of human physicians, but enhance it. So how would you actually incorporate it in your practice?
OBERMEYER: First, I’ll tell you how we currently do it in medicine, which I think is the wrong way. So when I was working in the E.R. and I would see a patient and think, Oh, I’m worried about a blood clot in this patient. I would walk out of the room and I’d go to my computer and I’d type in the order. Because I’d already decided to do the CT scan to look for blood clots. And then an alert would pop up and it would say, You shouldn’t do this thing, but I’d already decided to do the thing.
So then I just checked whatever boxes I needed to check to make sure I could order the thing I had already decided to do. What we’re trying to do instead is to get the physician very early in her thought process. So, before she ever sees the patient, we want something to nudge her in the right direction. Whether that is towards thinking about testing, or towards thinking that she should be reassured that the patient is low risk. So before you see the patient, you want to present the information.
… Here is how you might be thinking about this patient. If you wanted to focus on the variables that really matter, or don’t matter, for making your judgment of risk. So shaping that thought process, rather than annoying the doctor or telling her what to do, is really where I think these algorithms should be heading. They should be helpful adjuncts to decision making, rather than enforcers or mandates.
CHAKRABARTI: Okay. You know, it’s interesting because the skeptic in me always tends towards, Well, will we produce brand new blind spots, with the added influence of technology? Could we produce new data blind spots? But we spoke also with Dr. Isaac Kohane, who’s the director of the informatics program at Boston Children’s Hospital.
And he said, Well, you know, that’s a possibility about those data blind spots. But take a deeper look at how AI tools should be evaluated in the context of what American health care looks like right now.
DR. ISAAC KOHANE [Tape]: We should always ask how these algorithms will behave, relative to the status quo. And there’s an argument to be made that for a certain class of physician performance, you may be better off with some of these programs, warts and all, just like you may be better off having Tesla switch on autopilot than having a drunken driver.
CHAKRABARTI: Dr. Obermeyer, what do you think about that? Is that realistic or too Pollyannaish?
OBERMEYER: I think it’s a very astute comment, and I think it highlights the importance of doing the rigorous evaluation that we apply to any other new technology in health care.
When a pharmaceutical company produces a new drug and wants to market it, we don’t just say, Sure, go ahead. We say, Well, why don’t you test it compared to some acceptable standard that we currently use. And that’s why we have big randomized trials that pharmaceutical companies do before that drug ever makes it to the market.
And I think similarly, when AI is being deployed in very high-stakes settings, we need to compare it to what we’re currently doing. And I think that can expose some of those data blind spots that you mentioned, which I think is a real concern.
But it can in general just tell us, are these technologies doing more good than harm? And should we be investing in them, or should we be applying a much more cautious approach, and not? It all needs to be judged on the basis of the costs and the benefits that these algorithms produce in the real world.
CHAKRABARTI: Well, you know, obviously, the far horizon of what AI could do in health care captures the mind. Helping better understand if a heart attack is actually happening. Some of the things we heard about a little earlier in the hour about pattern recognition in cancer and things like that. Very, very alluring possibilities.
But reality check, right? Dr. Obermeyer? Because those technologies are actually quite far away. What’s more probable in the near future is AI’s impact in, you know, what seems like a potentially mundane aspect of health care. Mundane, but critically important. Things like tracking when health care workers sanitize their hands before interacting with patients.
DR. ARNOLD MILSTEIN [Tape]: That tends to be about 20 to 30%, which is on the face of it, indefensible and crazy.
CHAKRABARTI: So that is Dr. Arnold Milstein, who was talking about the failure rate of health care professionals to actually sanitize their hands. It is about 20% or 30%. And so Dr. Milstein and his colleagues at Stanford University are developing an AI enabled system that reminds medical workers to sanitize their hands.
So algorithms are also proving to be unrivaled medical assistants, as well. Here’s another area. Natural language processing, which can crawl through patient records. Radiologist Dr. Ryan Lee at the Einstein Health Network told us that logistical AI systems can automatically send notifications to patients for follow-up care.
DR. RYAN LEE [Tape]: This is a real opportunity to close the loop, so to speak, in which we’re able to directly notify and know when a patient has actually done the appropriate follow up.
CHAKRABARTI: There’s also another example. Dr. Erich Huang, chief science officer at the company Onduo, says health care has a huge paperwork problem. By some estimates, the time doctors spend on clinical documentation costs anywhere from $90 billion to $140 billion in lost physician productivity every year.
DR. ERICH HUANG [Tape]: Algorithms can lift some of the sort of grunt work, documentary grunt work of clinical medicine off of the physician’s shoulders. So that he or she can actually spend more time taking care of the patients.
CHAKRABARTI: Dr. Obermeyer in Berkeley, California, tell me a little bit more about these, again, mundane but actually critically important aspects of health care that AI could have a really profound impact on.
OBERMEYER: I love these examples. Because when you look at where AI has had impacts in other fields besides medicine, it’s often these very similar things that are like back office functions or, you know, routing trucks a little bit more efficiently. But those kinds of things stack on top of each other, and make the whole system much more efficient.
So I love these examples because, you know, the health care system does a lot of things besides curing cancer. And I think AI can really help with those simple tasks. I think one of the challenges is trying to make sure that the things we think of as simple tasks are indeed simple tasks. If you think about the task that a physician is doing when she’s documenting, when she’s writing a note.
Part of that is mundane grunt work. Because you have to check a lot of boxes. But part of it is you have to put a lot of thought into summarizing, Okay, what is going on with this patient? What do I think? And those are things that algorithms are going to have a much harder time doing. Because those are things that rely very heavily on human intelligence in ways that we haven’t yet figured out how to automate.
CHAKRABARTI: Okay. So that’s a really, really interesting point. And it links back to this broad range of estimates in the impact that AI could have, even in something as seemingly simple as clinical documentation, right? That $90 to $140 billion annually in lost physician productivity.
Presuming that the truth falls somewhere in that range, I mean, how much of an impact could AI have in the delivery of health care overall, say, if physicians were freed up a little bit from the burdens of clinical documentation?
OBERMEYER: I think it’s a fantastic area of study because I do think that physicians are not only wasting time on doing a lot of mundane tasks, but it’s also almost certainly one of the big causes of burnout. You sign up to be a doctor, but then you get to your job.
And most of your job is doing paperwork, and making phone calls and being on hold with an insurance company trying to make sure that your patient is getting what they want.
And so I think that these kinds of technologies, by freeing up doctors to do the work that we’re trained to do, have huge potential. Just in the same way that the ATM, as a historical example, was very transformative: it freed up the bank teller to engage in much more sophisticated work with clients, rather than just dispensing cash.
CHAKRABARTI: It seems to me that one of the takeaways here is that however we want to judge the potential of AI in health care, that potential is proportional to the problem that any particular algorithm is asked to solve, or analyze. And the risks that come with applying an AI or machine learning tool to that problem. What do you think about that?
OBERMEYER: Absolutely. And I think, you know, clearly, the benefit is going to be proportional to the size of the problem. I do think that the examples you just mentioned also have this nice illustrative feel, that we also need to make sure we’re targeting the problems that machine learning can solve, the data problems.
Many problems in medicine are problems for which we don’t yet have data. And we need to be very careful to only aim AI at those questions where we have data that can help answer them.
CHAKRABARTI: Well, when we come back, we’re going to talk in detail about the tradeoffs. With all that potential that could come with artificial intelligence in American health care, what are the tradeoffs and what are the particular areas of concern?
CHAKRABARTI: Welcome back to the first episode of On Point’s special series ‘Smarter health.’ And today, in episode one, we are taking a look at the potential for artificial intelligence and machine learning to change, even transform medicine. Here’s Dr. Kedar Mate, CEO of the nonprofit Institute for Health Care Improvement.
DR. KEDAR MATE [Tape]: There is tremendous, tremendous potential in AI, machine learning that goes along with that AI, to augment and improve our capacity as clinicians and as humans, frankly, to be able to do the mountain of diagnostic work that we have to do to manage the information flow that’s coming at us at all times as clinicians.
And to be able to provide just in time absolutely critical, precise, personalized care to the people that we’re taking care of. But there’s also, like any technology, considerable risk. Unless we mitigate those risks with deliberate design, we won’t necessarily solve for those problems.
CHAKRABARTI: I’m joined today by Dr. Ziad Obermeyer. He is a distinguished associate professor of health policy and management at the University of California, Berkeley School of Public Health, and an ER physician as well. And Dr. Obermeyer, one of the areas of concern — and there are several, which we will be exploring over the course of this four-part series.
But one of them is, you know, how accurately people actually understand the current state of AI in health care. Do you think patient perception matches the current reality?
OBERMEYER: I think one of the things that’s probably underappreciated is how widespread these algorithms already are. In some work that we published a couple of years ago, we studied a set of algorithms that are used for what’s called population health management.
So this is the function of health systems where they try to get an overview of all of their patients and figure out which ones need help today so that we can prevent deteriorations in their health tomorrow.
So we studied one commercial product that was being used to make decisions for about 70 million people, every year. If you look at the industry estimates, those algorithms are being used for between 150 and 200 million people per year in the U.S. So essentially most of the population.
OBERMEYER: Already. And so the scale of these things already has gotten huge, and I don’t think that’s very well appreciated. Unfortunately, that study that we did also showed that these algorithms suffered from a large degree of racial bias. So I think that’s another thing that’s not very well appreciated. Is that there are both reasons to be incredibly optimistic about AI, as all of the examples you already mentioned convey. But there are also reasons to be very, very careful.
CHAKRABARTI: Can you just describe briefly what kind of decisions the algorithms that you just talked about were making or assisting with?
OBERMEYER: So what health systems have to decide is, well, you’ve got a bunch of patients in your population that you’re responsible for. Some of them are going to get sick tomorrow from things that we could have prevented, had we known about it today. So what algorithms are being used for, which is a very good use of algorithms, is looking into the future and trying to predict, OK which patients are going to get sick?
Which patients are going to have an exacerbation of some chronic condition that I can help them with today? And so the patients that are identified as high priority get a bunch of extra help from the health care system, extra primary care visits, extra visits from a nurse practitioner, a special phone number that they can call for help any time. So it’s very, very helpful. But we can’t do it for everybody. We have to prioritize. And that’s where the algorithms come in.
CHAKRABARTI: And those algorithms already, as you said, are being used on hundreds of millions of people.
CHAKRABARTI: Amazing. Okay. So I have to tell you that the next episode of our series goes into real depth on these ethical considerations. The concern about bias in the data that’s being used to train algorithms in health care. That’s the whole hour next week. So we will examine that closely.
But I wanted to just stick for a moment with, again, patient perception of what’s really going on in health care right now. So we spoke with Dr. Richard Sharp. He’s the director of the bioethics program at the Mayo Clinic. And he and his research team conducted 15 focus groups to try to understand current patient perceptions of AI in health care.
DR. RICHARD SHARP [Tape]: When most people hear about artificial intelligence, things that come to mind for them, are, you know, science fiction movies where computers somehow take on an aspect of our lives. The machines become sentient and rebel against humanity and those sorts of scenarios. In health care, though, those sorts of tools are a lot more mundane.
CHAKRABARTI: So Dr. Sharp says right now he sees a perception gap. The research team found, though, that they could narrow that gap by giving patients real world scenarios, using very neutral language about specific applications of AI in health care. And that did indeed help, but it didn’t completely allay patient concerns.
SHARP: The folks that we talked to mentioned self-driving cars multiple times. And what they told us again and again was that they were uncomfortable with a self-driving car, but they definitely did not want a self-driving clinician. They did not want a self-driving doctor. They wanted to be sure that they had the ability to talk to the real deal and make sure that there were appropriate safety checks in place.
CHAKRABARTI: So what patients really wanted? Transparency. Everything from how algorithms were being deployed, to who had access to the information used by the algorithm, to maintaining the ability to make decisions with their doctors, even if that decision defied an algorithm’s recommendation.
SHARP: They were worried that an AI algorithm might recommend a particular treatment or drug that would be more expensive than maybe a drug that they’re currently on. That’s really the promise of AI, is to be able to identify early on in the course of the disease, those treatments that are likely to be most effective.
With that capacity, though, it can create a situation where maybe that ideal treatment is simply too expensive for an individual patient, or not covered by a particular insurer. And patients were quick to point out that they saw that as one of the major downsides of these tools.
CHAKRABARTI: So Dr. Sharp says that successful treatment really hinges on patient compliance. But the patients in his focus groups were clearly saying that compliance hinges on having confidence in the new technologies used to treat them. So that leads Dr. Sharp to a clear conclusion. Patient education about AI, and addressing the concerns they have must be rolled out in parallel with the tools themselves.
SHARP: I think it would be a mistake for the future of health care if patients discovered after the fact that the care they were receiving had been influenced by AI algorithms.
CHAKRABARTI: That was Dr. Richard Sharp, director of the bioethics program at the Mayo Clinic. Dr. Obermeyer, what do you think about that? Do you think that what Dr. Sharp described is actually happening? Concurrent patient education, alongside the development of the tools used to treat patients?
OBERMEYER: I love the idea that Dr. Sharp proposed a concrete example. So let me try one from a completely different field, which is that I’ve been traveling a lot now that lockdowns are over.
And I was reflecting on the fact that when I get on an airplane, I actually have no idea how the autopilot was trained, evaluated, deployed. And I think that, you know, if I think about everything that happens inside the hospital today, there are algorithms that have been operating for decades that help MRI machines process the image, that help laboratory analyzers process the single cell measurements that they do.
So algorithms are actually being used all around us, and either we have no idea, or we don’t care. But I think that that’s because we have confidence in a set of practices, and procedures and regulations that guide the deployment of all of those algorithms in high stakes settings.
And so I think that a useful complement to the things that Dr. Sharp was proposing is developing that regulatory structure from the government, but also developing the procedures and practices that the health care system uses before it ever deploys an algorithm to test it and make sure that it’s safe.
CHAKRABARTI: Yeah, so the regulatory structure is going to be episode three of our series here. Now, in the last few minutes that I have with you, Dr. Obermeyer, look, we have to acknowledge that one of the screamingly unique things about anything regarding American health care is the fact that we are living in the country that spends more money on health care than any other nation in the world. I started off the hour by highlighting that.
And the numbers are actually just jaw-dropping, right? The Centers for Medicare and Medicaid Services says that within the next five years, the U.S. is going to be spending $6 trillion on health care. So it’s still going to be 20% of our economy. And that’s, I think, one of the things where, you know, the technology evangelists are really excited about the possibility of AI, because they say it could bring down costs.
You know, bringing in those algorithmically driven efficiencies into health care could bring down costs. But here’s what Dr. Kedar Mate, again, CEO of the nonprofit Institute for Health Care Improvement, says about whether we know anything at all about … AI [reducing] the cost of health care in America.
DR. KEDAR MATE [Tape]: Virtual care, just as an example, virtual care has likely done little to reduce total cost of care. In fact, during the pandemic, you’ll probably recall that we collectively argued for pay parity between virtual care and in-person care. And you can just imagine if we’re arguing for pay parity, then even if we have all of our care being virtual, it’s going to cost exactly the same.
This doesn’t necessarily lower the cost of care. I think a lot of AI enthusiasts, tech enthusiasts, more broadly believe that all of this will reduce the cost of care. But we haven’t seen substitution for in-person care. We haven’t seen reduced frequency. In fact, in some ways, technology enables increasing frequency of interaction with people, and it hasn’t lowered the cost basis necessarily of providing that care. So for all those reasons, I’m not sure yet. I don’t think anyone is sure yet whether or not AI and attending technologies will lower the cost basis of care.
CHAKRABARTI: That’s Dr. Kedar Mate at the Institute for Health Care Improvement. So, Dr. Obermeyer, I mean, even just increasing touch points in health care, well, it might feel good because you have more information, more access to the health care system. But every touch point is a billable moment. And overall, the United States has a for-profit health care system. Is there any possibility that the end result of AI in health care would be anything other than costs continuing to rise?
OBERMEYER: I think I’m more optimistic about this particular question. Because I think we’re just incredibly early in the curve of AI being applied to health. And so I don’t think we can generalize from anything that we’re seeing today.
Ultimately, you know, if you look at our paper on testing for heart attack, the potential of AI there is to take all of these tests that we do on people who come back negative, who didn’t need the test after all, and eliminate those. And take a portion of those tests and reassign them to people who are genuinely high risk, who should have been tested, but that currently aren’t.
And I think that’s a good general principle for AI: We do a lot of things today that don’t make sense, and that’s very wasteful. So we can reallocate some of that waste to the people who are losing out today, and everyone does better. We spend less money on testing, and we get tests to the people who need them more.
And I think that that’s going to be the playbook for AI in medicine over the next few decades. So I’m very optimistic that we’re going to be reducing costs for all of the things that we are doing today that we shouldn’t be doing.
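The reallocation Dr. Obermeyer describes can be made concrete with a toy sketch: hold the testing budget fixed, but assign tests by predicted risk rather than by current practice. Everything here is invented for illustration; it is not drawn from the heart-attack paper he mentions.

```python
# Hypothetical sketch of reallocating a fixed testing budget by predicted
# risk instead of by current practice. All risk scores are invented.

def reallocate_tests(patients, predict_risk, budget):
    """Test the `budget` patients with the highest predicted risk,
    regardless of who was tested under current practice."""
    ranked = sorted(patients, key=predict_risk, reverse=True)
    return {p["id"] for p in ranked[:budget]}

patients = [
    {"id": 1, "risk": 0.90, "tested_today": False},  # high-risk patient currently missed
    {"id": 2, "risk": 0.10, "tested_today": True},   # low-yield test
    {"id": 3, "risk": 0.70, "tested_today": True},
    {"id": 4, "risk": 0.05, "tested_today": True},   # low-yield test
]

new_plan = reallocate_tests(patients, lambda p: p["risk"], budget=2)
old_plan = {p["id"] for p in patients if p["tested_today"]}

print(sorted(new_plan))          # [1, 3] — the genuinely high-risk patients
print(len(old_plan - new_plan))  # 2 low-yield tests eliminated
```

Note the two effects in the toy output match the two halves of the argument: low-yield tests are dropped (lower spending), and a previously untested high-risk patient is picked up (better targeting).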
CHAKRABARTI: But haven’t we heard something similar for other technologies that have been introduced into health care? You know, electronic health records are supposed to make information sharing more efficient. Any other sort of big system that was talked about as a revolution in health care. And yet the costs still keep rising. We still keep spending more and more.
OBERMEYER: I think that’s right. But I think that’s because electronic health records haven’t fundamentally changed anything that anyone is doing in health. In many ways, it’s a lot like how the power plants that were electrified, but still fundamentally organized like steam-powered plants, saw no productivity gains from electricity.
And it was only the new factories that were reorganized around electric power. So I think medicine’s very similar. Once we have all of this electronic data, it doesn’t actually do as much good if we’re stuck in an old system. But now that we have the tools to build up a new system, I think things are going to get a lot better.
CHAKRABARTI: Well, Dr. Obermeyer, we have 30 seconds left here. Send our listeners off today with a thought or a tool that you would add to their toolkit for understanding how AI might have an impact on their health care. What do you want them to know?
OBERMEYER: I would like them to know that AI is not the solution for all problems in medicine, because so much of this is a human enterprise, where human doctors are doing really, really good things for patients. But there are some parts of medicine that are incredibly complicated from a data and statistical point of view. And I think for those parts of medicine, AI is going to be transformative.
CHAKRABARTI: Well, Dr. Ziad Obermeyer is an emergency medicine physician and the Blue Cross of California Distinguished Associate Professor of Health Policy and Management at the University of California, Berkeley School of Public Health.
He also helped launch Nightingale Open Science, which is taking a look at how to provide high quality data to AI systems. And again, we’re going to talk about data later on in the series. But Dr. Obermeyer, it’s been a great pleasure to have you on the show. Thank you so very much.
OBERMEYER: Thank you. It was such a pleasure.
DR. STEVEN LIN: As exciting as AI and machine learning are, there are many ethical and also health equity implications of artificial intelligence that we are now beginning to realize.
CHAKRABARTI: That’s Dr. Steven Lin, primary care physician and head of the Stanford Health Care Applied Research Team. So next week, we’re going to talk about AI, health care and ethics. And we’re going to do it through the story of what Lin calls the advance care planning model. But you and I might better understand it as the death predictor.
LIN: AI can actually pretty accurately predict when people are actually going to die. It raises the question of how accurate are these predictions? How do patients react when they are flagged by the model as being high risk of X, Y and Z, or being diagnosed with X, Y and Z?
How do human clinicians handle that? And then very, very importantly, what are the equity implications of data-driven tools like artificial intelligence, when we know that the data that we have is biased and discriminatory, because our health care systems are biased and discriminatory?
CHAKRABARTI: That’s next Friday in episode two of our special series ‘Smarter health.’
We want to hear from you
Got a question about how AI will impact how you receive health care? Or maybe you’re a scientist, doctor or patient with an AI story to share? Leave us a voicemail at 617-353-0683.
This series is supported in part by Vertex, The Science of Possibility.