Dr. John Halamka and Greg Corrado | Dialogues on Technology and Society | Ep 4: AI & Health

Today, I'm going to be talking to Dr. John Halamka from the Mayo Clinic. John has been an engineer.

He's been on the front lines of clinical care. And now at Mayo, he's trying to organize efforts to understand how we can bring artificial intelligence into healthcare and medicine in a way that's bold but responsible. It's a pleasure to meet you, John. I'm really delighted to have this conversation about AI and healthcare.

I'm a Distinguished Scientist at Google, and I've been working on the fundamentals of AI systems and how they work. In the last few years, I've been directing my team to focus on applications to healthcare because I believe that this is one of the greatest opportunities for the technology to deliver real benefits in the near term. Well, and wonderful to sit down with you, Greg, because as a physician, I have worked at the intersection of healthcare and information technology for almost 40 years, and I've come to realize that clinical practice and academia alone cannot solve some of these major societal issues. We have to work with distinguished scientists.

We have to work with industry if we're going to improve patient care worldwide, and that's ultimately what we want. I think we've both had unusual paths to this strange intersection of artificial intelligence and medicine. I was trained as a neuroscientist, and part of the reason that I got into AI at all was because I had worked for years trying to understand how the brain represented information, how it stored information, how it retrieved information, how it constructed information, and I thought that maybe we could learn something by trying to build it. If you can build it, you can understand a little bit more. We've both been interested in this space for a while, and I think that the promise that AI holds for medicine is something that people have been discussing for some time. But I feel like this change from predictive AI to generative AI is something that's really new.

Sometimes I say a perfect storm for innovation is when you've got industry and academia and government all aligned, and we're at a unique time in history. Yeah. You could say some of it's the tech; it's just got new possibilities. But the interest we're seeing in society, and the potential applications, is extraordinary. I've never seen this much attention paid to a new technology.

So, I agree with you. I see change in the last six months. And I think one of the things that seems very promising about these technologies is that opportunity to reduce burden, right? I feel like clinicians and the medical system overall have been weighed down by so many things. But historically, adding technology hasn't always helped. And you'd be completely correct.

In the Obama administration, I was part of this meaningful use movement, this notion of digitizing medicine. I was told, Well, the goal is to go from paper charts to fully digital. This way you can have full electronic recording of data. Oh, we need epidemiology. Oh, okay, good.

You know, we need safety. Okay, good. By the time we were done, 140 data elements that every clinician would have to type for every patient visit. Now you have 11 minutes to see the patient, enter 140 data elements. Oh, and be empathetic.

It's not possible. So we digitized medicine, but we lost the hearts and minds of doctors and nurses along the way. And I'm hopeful that AI can actually bring some of that humanity back, that the capabilities of assistive AI, of technologies that are really there to augment and help clinicians and physicians, can strip away some of that extra work and allow them to focus again on really caring for each other as people. And I know that to some folks, the idea that bringing in artificial intelligence is a way to get back to more humanity in medicine is surprising.

You're totally correct. Just yesterday I was talking to a GI leader who said, do you know, I could actually do four more procedures, and these are detailed procedures in the field of GI, if I had an assistant documenting what I was doing. Because right now the amount of typing I have to do is such an overwhelming administrative burden, I have less patient time.

It's all about workflow. So I tell my colleagues: I know, I'm going to create a brilliant app. It would just require you to pick up your phone, log in, use the app, and break your workflow.

How many clinicians will do that? None. It needs to be a side effect of the workflow you're already doing. So for example, if you had ambient listening, let's say the doctor and the patient had a conversation like we're having, and then generative AI produced a summary. It actually figured out what was salient and summarized it in a way that would be very useful, that you could edit and then file. The point about ambient listening, to me, brings up the question of privacy. Medicine is an area where we hold privacy extremely dear.

What's the right way to think about privacy in a space where, fundamentally, I want AI to help us care for future patients better by learning from past patients? How do we learn from past patients in a way that really preserves privacy? This is such an important question. I have worked on regulation and privacy for three decades.

You need to tell the patients what you're doing. It's exactly as you say: it's making sure that the patients and the providers, working together, have confidence in whatever solution you put in place, that it has credibility. We tell patients, as you sign a disclosure, here's a notice of patient privacy rights: we're going to deidentify your data.

If you don't want your data used to create future models, that's okay. Just tell us, and we will remove your de-identified data from model creation. In terms of how systems learn from data, have you at Mayo Clinic actually been able to measure the impact of systems trained on real-world data improving outcomes for future patients? We've developed 160 models and deployed about 40 of them. One of the things to be careful with is that there's a lot of model building out there, but not a huge amount of deployment and adoption.
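The consent handling John describes, removing opted-out patients and stripping identifiers before any model creation, can be sketched minimally. This is an illustrative sketch, not Mayo's actual pipeline; the record fields and function names are hypothetical.

```python
# Illustrative sketch: honor patient opt-outs and strip direct identifiers
# before records feed model creation. Fields and names are hypothetical.

def prepare_training_records(records, opted_out_ids):
    """Drop opted-out patients, then remove direct identifiers."""
    training = []
    for rec in records:
        if rec["patient_id"] in opted_out_ids:
            continue  # patient asked not to be used for future models
        deidentified = {k: v for k, v in rec.items()
                        if k not in ("patient_id", "name", "dob")}
        training.append(deidentified)
    return training

records = [
    {"patient_id": "p1", "name": "A", "dob": "1960-01-01", "note": "SVT"},
    {"patient_id": "p2", "name": "B", "dob": "1955-05-05", "note": "HTN"},
]
print(prepare_training_records(records, {"p2"}))  # [{'note': 'SVT'}]
```

Real de-identification is far more involved (dates, free text, quasi-identifiers); the point is only that consent filtering happens before any model sees the data.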

A lot of that has to do with safety and testing and such. What do we do? Randomized clinical trials. We'll put 500 docs with no AI, 500 docs with AI. We'll compare how they did. If I can diagnose your disease faster, make an intervention earlier, reduce burden of disease, pain, cost, that's a pretty good outcome. We've done this in the fields of cardiology, neurology, radiation oncology, and proven the efficacy.

There's no question this is a direction we all must travel. One of the things that really brought me to the idea of wanting to work on medicine and healthcare anyway in the AI space was that I'd been working on sequential prediction tasks, which is part of where these large language models came from, trying to predict the next word in a sentence, trying to predict what's a good translation, what's a good answer to a simple question? It seems like a crazy idea, but if you actually saw what had happened to hundreds and thousands of patients in their real care journey, real-world evidence, could you build systems that could forecast likely outcomes that could maybe actually suggest not just from trial data, but from real-world historical evidence, what's the right course of action? Do you think that's possible? And what stands between us and a future like that? I think it's possible. Let me give you a personal example.

I have what's called a supraventricular tachycardia. Sometimes my heart rate goes from 55 to 170. It's irritating, not life threatening. For 20 years, I've had this irritating condition. Oh, by the way, I also have primary hypertension, because my father had primary hypertension, and despite diet and exercise, I have primary hypertension.

I was put on an ACE inhibitor, a drug that helps my hypertension. It didn't really do much for my supraventricular tachycardia. I went to Mayo Clinic and said, Hey, could you evaluate me? They ran 14 algorithms on my 12-lead ECG, looked at various conditions I might have, and said, You don't have any of these bad conditions, but you do have an electrical conduction problem in your heart. Here, instead of that ACE inhibitor, try a calcium channel blocker. It'll cost 25 cents.

Do you know my SVT is gone? Why did I have a 20-year circuitous journey when all I needed to do was change the medication I was on? That's what the predictive models looking at millions of patients in the past and their journeys will do for patients in the future. I really believe that this is something that's going to elevate the quality of care for people broadly. But medicine isn't really the same everywhere and for every person and for every population.

It may even be different for different people's value systems about what good living is for them. How do we use AI without having it be something that is homogenizing? What a really good question. It has to start with data. All these systems, whether predictive or generative, they're all dependent on the training sets. Mayo has signed agreements with health systems around the world in a federated, decentralized, highly distributed model. Instead of sending all their data to one location, we leave it where it is at country scale and look at model creation and validation against global data sources with more heterogeneity.
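The federated arrangement John describes, where data stays at each site and only model parameters travel, is often implemented with some form of federated averaging. A minimal sketch under that assumption, with a toy two-weight "model" standing in for a real one:

```python
# Toy sketch of the federated pattern: each site computes an update on its
# own data behind its own firewall; only model parameters are shared and
# averaged, never patient records.

def local_update(weights, site_gradient, lr=0.1):
    # One gradient step, computed entirely inside the site.
    return [w - lr * g for w, g in zip(weights, site_gradient)]

def federated_average(site_weights):
    # Coordinator averages parameters from all sites.
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

global_weights = [0.0, 0.0]
site_a = local_update(global_weights, [1.0, -2.0])  # gradients stand in
site_b = local_update(global_weights, [3.0, 2.0])   # for local training
print(federated_average([site_a, site_b]))
```

Production systems add secure aggregation, weighting by site size, and repeated rounds, but the privacy property is the same: validation against heterogeneous global data without centralizing it.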

To your point, it's going to be different demographics, incomes, educational levels. That's the data set you need if we're going to do fair, appropriate, valid, equitable, and safe AI. Yeah, and medicine is always changing. We learn new things, there are new discoveries. Doctors are working all the time to try to stay current with what's the best practice. How does that interact with these AI systems? If they're learning from the past, how do they get new information about what's the right thing to do now if guidance has changed? I went to medical school in the '80s.

A few things have changed since then. My field is emergency medicine. There are 800 articles written every day in my field. I'm a little behind on my reading. You can imagine generative AI; it is pretty good at being able to take a body of credible information and summarize it.
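That kind of grounded summarization is often built as retrieval over a curated corpus followed by a generation step. A minimal sketch, where naive keyword overlap stands in for a real embedding-based retriever and the summarizer stub stands in for an actual model call:

```python
# Minimal retrieval-then-generate sketch: restrict generation to a curated,
# credible corpus. Keyword overlap and summarize() are placeholders for an
# embedding retriever and an LLM call, respectively.

def retrieve(query, corpus, k=2):
    q = set(query.lower().split())
    # Rank documents by how many query words they share.
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

def summarize(docs):
    # Placeholder for a generative model grounded only in retrieved docs.
    return " | ".join(docs)

corpus = [
    "sepsis triage protocol update for emergency medicine",
    "new anticoagulant guidance in cardiology",
    "emergency medicine airway management review",
]
top = retrieve("emergency medicine triage", corpus)
print(summarize(top))
```

The key design choice is that the model never answers from its own weights alone; every summary is anchored to sources a clinician chose as credible.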

Through techniques like retrieval-augmented generation, you could say, Actually, these are credible sources of emergency medicine information in a particular area of study; summarize them for me. Suddenly, I now have two pages: these are the articles from last week that you need to know about. That I can digest. When people think about AI in medicine and healthcare, I think a lot of times they jump to the idea of an AI that's trying to be a physician.

To me, that seems neither close nor desirable. It's not going to replace the human. It's really got to augment human decision making. When you get cancer treatment, and my wife was a cancer patient, she's doing fine today, you typically get radiation, and radiation turns out to be a pretty complicated thing.

You want to affect the tumor, but not the nerves, arteries, and veins. It takes a lot of human and physics time. I feel like there are often a lot of new opportunities for innovation when you can build a bridge between two fields where folks don't naturally speak the same language, where you can translate technologies. What has been your approach to trying to bring technology to medicine in new ways? We looked at what are some of the major clinical problems to solve in common diseases, and what is the data that is needed to solve those problems. Much of what we've done over these last couple of years is make available to all our innovators inside and outside Mayo this privacy-protected data set so we can create models really rapidly. Could we reduce cycle time from ideation to production to two weeks? We did.

Well, we established in the cloud a set of highly repeatable data sets and components that anyone could assemble like Lego blocks. And the legal agreements to do it? Templated. You have an idea? Here's a template. Check a couple of boxes, and within two weeks you've got a product.

We've done this now 150 times. This drive to democratize the technologies and to make it easier for people to innovate and build on each other's work matters particularly in medicine. Much of how we've thought about regulating medical devices and technologies was based on technologies that we had in the last century, which were evolving maybe more slowly than technology is evolving today.

Do we need to change our approach? We need guidance and guardrails. Let me give you a case example of generative AI. One of our cardiology leaders said, Boy, I have a really difficult case. I am going to go ask generative AI, and maybe that will give me some advice. The case is very simple.

A 59-year-old gentleman had a recent sternotomy, a big incision in the chest. He now needs a pacemaker. Is it safe to insert a pacemaker wire one inch from a fresh scar in the chest? Generative AI said, Oh, completely safe. In fact, there's a clinical trial looking at 5,000 patients. In fact, there are two papers in the Journal of Electrophysiology you should go read.

Here are the page numbers 411 and 412. Total fabrication. No study has ever been done. The journal doesn't even exist. There's a problem, which is, yes, generative AI is amazing in the reduction of human burden and the synthesis of credible information, but it fabricates and hallucinates.

And if you're not careful, you're going to make clinical decisions based on fiction. Yeah, I mean, because the systems are trained in some ways to produce what sounds true, they lend themselves to this natural trajectory of fabrication. In some domains, that's actually one of the most valuable things about them. When you think about taking generative AI and using it to expand people's ability to be creative, to write, to make new music, to write new poems, to write new kinds of poems, I think there's a lot of opportunity there. Part of that comes from its ability and its willingness to go beyond what has been exactly said before.

It's not just parroting the past. It can make little excursions. I'd really welcome your thoughts on what we do to get more credible generative AI, or at least to measure generative AI for credibility and repeatability.

Yeah. There are two aspects of it that I think are really important. One is that in the history of predictive AI, we've often thought of training data as the most important asset in terms of governing, making systems work well.

In the generative space, it feels like we need to have that same respect for the ability to evaluate. The new opportunities for how the technology can be applied are so numerous that I feel like there needs to be a way for us to measure its performance in the real-world situations where it's going to be used. I think that's going to take some rethinking. I don't think that we can build generative AIs, call them done, and just let the system work. I actually think that real-time continuous monitoring of how the systems behave, and review of whether they are making a positive change or not, is part of what we're going to need to do.

We're going to need to focus more on real-world evidence. I think the other thing is that sometimes people try to ascribe too much to the generative elements of AI. I think there's a lot of value in thinking about how you use generative AI as one part of a system that pulls in other tools that don't fabricate, in order to square things up. The problems that we think about right now are real problems. I do think they're things that we need to think about, but some of them are overblown by imagining, Well, you're just going to have a monolithic generative AI bot and ask it a question.

I think it's much more likely that, well, let's use generative AI to help me interact with this very reliable database. I think that something like that might be something we can build sooner rather than later. What do we have to do to help human clinicians feel like generative AI is their colleague, or their partner, or a tool that they can trust? I think generative AI has to have three characteristics: credibility; transparency, meaning how does it work and where can we use it; and reliability, a relative predictability that it's going to be a good thing the next time I try it.

I would say if you look at medicine, there are a variety of low risk areas to start. As a practicing clinician, I can tell you that the use of the electronic health record, although, of course, it digitized medicine, is very burdensome. There's something called an inbox, which is, Oh, review this lab result. Oh, here's a patient with a question.

Here's a colleague with a referral. Managing the inbox takes a lot of time. If you say, Hey, we're going to use generative AI to create draft, just draft answers to the thousand messages you receive every day, instead of having to author the message, you just read and edit and send. You could save hours a day.
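The draft-only inbox workflow described above can be sketched as a loop in which nothing is sent without clinician review. The function names and the draft stub are illustrative, not a real EHR API:

```python
# Illustrative draft-and-review loop: the model only produces drafts, and
# nothing goes out until a clinician approves or edits each one.

def draft_reply(message):
    # Stand-in for a generative model producing a draft answer.
    return f"Draft reply to: {message}"

def process_inbox(messages, clinician_review):
    sent = []
    for msg in messages:
        draft = draft_reply(msg)
        final = clinician_review(draft)  # clinician edits, approves, or rejects
        if final is not None:            # None means the clinician rejected it
            sent.append(final)
    return sent

# The clinician approves after a (simulated) edit.
approved = process_inbox(["question about a lab result"],
                         clinician_review=lambda d: d + " [reviewed]")
print(approved[0])
```

The design choice that matters is the human gate: the model reduces authoring time, but the clinician retains final say over every message.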

That may be a very good place to start. Or, as we chatted about, clinical documentation. Higher risk but worth exploring: summarization. Mrs. Smith has just arrived in the Emergency Department, and she has a 4,000-page chart here. If generative AI could say, Oh, I'll read the 4,000-page chart and show you the two or three pages of what's most salient, that's potentially going to be revolutionary. Harder: things like differential diagnosis.

Oh, a patient who has a fever and a cough. Here are the things that it could be. Well, I think, again, at this early stage of generative AI we have to be a little circumspect. I would be very careful right now about using generative AI for diagnosis. I think even when we bring AI into the clinic, thinking about the impact that it has on the patient is really important.

An example of a study that we did recently, in collaboration with some folks at Northwestern, was a randomized controlled trial of using AI in breast cancer screening, for the very simple purpose that if the AI thought a radiologist was eventually going to order someone in for a follow-up study, let's just flag that right away, so that maybe we can do the follow-up study before the patient even leaves. This actually affected somebody in my family who had an abnormal mammogram. It turned out everything was fine, but they had 10 or 12 days of terror, knowing they had an abnormal mammogram and trying to get a follow-up appointment. If you can collapse those times just by giving someone a heads-up, all of the studies can be done together for these higher-risk cases. I think that human impact is really, really important. I think there are going to be thousands of these little places in the workflow where either the clinician or the patient is dehumanized in the current system, and they can feel more seen, they can feel more respected, if there's a way to have these systems fill those gaps and do things that just make sense.

That triage and the care planning will be invaluable to patients. Yeah. There are a lot of opportunities to use AI on platforms like mobile phones to help people with the triage and screening or prescreening. Going back to the interdisciplinary nature of your journey into this field, in order for us to build a better future for AI in medicine, it's actually going to take genuine co-innovation, co-development.

I feel like you've been trying to facilitate that. What do you feel like are the current blockers to bringing engineering and medicine closer together? Do we need more people who are trained in both disciplines? The bulk of healthcare in this country does not have engineering talent at the bedside. So what is our collective responsibility? It's that those of us in industry and academia produce software systems and tooling, and we give them to frontline clinicians for clinical evaluation, to understand what's useful, what's helpful, and to look at outcomes. I think we have to, as a society, come together to do that. There's the old curse.

May you live in interesting times. We certainly do. But what are you most optimistic about in this moment of AI and its possible impact on medicine? I'm so optimistic that the tool sets we now have have democratized the production of predictive and generative AI, so that a high school student could create a model in a morning that might be highly useful. That's a pretty unique time in history.

I think these next five years will see changes in healthcare such that practice without augmentation will be malpractice. That's a fascinating statement. So you're saying that you see the tools of AI as such a critical support to being a better doctor that eventually they'll be a necessary part of being a good doctor? There'll be a cultural expectation. Patients will, as you point out, summarize information so they can navigate the system better.

Doctors will have the benefit of augmentation so they can look at rare disease and not miss interpretations in all this sea of data that is coming at them. That will just be considered standard of care. How is that going to change medical education? I mean, as we help people become doctors in new generations, how are we going to change what we emphasize in medical education? Since I was trained in the '70s and '80s, I have a traditional or classic medical education, which includes things like the Krebs cycle. Now, not that you've thought about that in a long time, but it's a biochemical cycle I've never used in life. Why are we training people in this stuff? Shouldn't we train them in data science? Shouldn't we teach critical thinking and evaluation, and not biochemistry that you could look up if you ever needed it? Education will have to fundamentally change to focus on being a knowledge navigator and not a memorizer.

I think we agree that this is a moment in history. What are you most concerned about? What are you most afraid of? My fear is that there will be a series of products offered by a series of companies that actually do harm, and that will affect the credibility of these systems for all of us. Yeah, and being able to assess the utility of these systems in the real world, as you say, entails some capabilities that aren't available today in many medical settings. I think there's going to need to be an exploration, but doing that safely is, I think, what we should all keep our eye on. I'll give us marching orders, which is: no one institution can do this on an island.

Academia cannot do it. Industry cannot do it. Collectively, we can do it.

Yeah, I completely agree. But I also feel like it's going to require the engagement of government and the public, the real populations that are going to be served by these technologies. To get good health care, you need to trust your doctor, right? And if your doctor is working with an AI, you need to trust your doctor's AI too. And I think that building that trust has to be something that we drive towards. One of the things about being at Google is that I get to be close to the science and developing the underlying pieces of technology, but we also have the opportunity to really bring it to people very democratically.

So my team has been... We've worked on tools that actually help developers locally build Android applications that are appropriate for the use cases that they want to deliver in the healthcare space because they know the community, they know what the problems are that they need to be solved. That shouldn't be dictated from the outside. It's about providing the tools that allow them to build these things so that you make it easy. You make it easy for people to build solutions that are stable and safe and trustworthy.

I feel like that's so much more useful than trying to build one perfect app that does all of these things. It's just not possible. I really think it's about enablement.

As we try to solve some of these problems around the world, you always ask the question: do you build, do you buy, or do you partner? Almost always the answer will be partner. Yeah, I completely agree that the technology of AI could widen the gap or it could narrow the gap. I think it's incumbent upon us to do everything that we can to use the technology to close those gaps and to make the world fairer and more equitable. The problems that we've solved over these last four decades have sometimes been technological and sometimes been policy problems. But some of the hardest issues have been cultural change. We're at a time in history where our patients are demanding these innovations.

Patients are asking for products and services that are different than their medical experience in the past. And we're at a unique time where working together, we can meet their expectations. It was a pleasure to meet you, John. Indeed. Great to see you in person.

2023-10-11 03:26
