Exploring the Implications of ChatGPT in Healthcare, Business, Research and Art

[MUSIC] It's my pleasure to introduce the panel on implications for healthcare, business, research, and art. We have Dr. Chris Longhurst joining us via Zoom. Dr. Longhurst is the Chief Medical Officer and Chief Digital Officer of UC San Diego Health.

We have Tiffany Amariuta Patel, who is a professor in HDSI and the School of Medicine; Robert Twomey, who is a professor at the Johnny Carson Center for Emerging Media Arts at the University of Nebraska; and Vincent Nijs, who is a Professor and Associate Dean of Academic Programs in the Rady School of Management.

I'd like to start with a question for you, Dr. Longhurst. The latest public version of GPT-4 has famously done very well on professional exams, including the bar exam and the US medical licensing exam. This has led to provocative headlines such as "Can ChatGPT be a doctor?" So my question is: can ChatGPT be a doctor? It's really an exciting time.

I'm a cautious optimist. We've been implementing electronic health records and other digital health tools for 20 years, so we really have a preponderance of digitally available data about both patient care and how we deliver patient care. It is exciting to think about how we can use these tools. At the same time, this is a high-risk environment.

We're caring for patients and sometimes hold their very lives in our hands. We have to be very thoughtful and measured in terms of how we implement these things. As an example, just this week we were announced as one of the first partners in the country with our electronic health record vendor, Epic, which is built, of course, on the Microsoft stack.

We have operationalized GPT inside of our electronic health record for a very specific use case. I'll just share the slide for a second. The use case is very specific: we have become overwhelmed with messages from patients. Particularly since the pandemic, as we delivered more and more virtual care, our patients contact us with asynchronous questions.

Of course, our clinicians don't have time set aside in their schedules to respond to all these messages. So the specific use case is that we are now using GPT to help draft messages that our clinicians can then edit before they send. That helps to address a couple of issues. One is the errors GPT can make when it hallucinates, which our previous speaker was talking about.

We don't want any hallucinations going directly to our patients, so the editing, of course, is critical. The second is that our clinicians know their patients in ways our systems won't always have the data to suggest. Our clinicians will be able to edit the tone of those messages to make sure it's appropriate for those individual patients.

Then finally, we always want to test things thoroughly, in a rigorous and disciplined way, before we turn them on writ large. We're working with a group of primary care physicians that are part of the pilot, and we're measuring both quantitative and qualitative feedback to make sure this is improving patient response times, clinician satisfaction, and patient satisfaction. We'll learn more over the coming days. Again, I'm bullish and cautiously optimistic, but we have to be careful and measured in our implementations. Fantastic, thanks. I want to go to Vincent now.

We've seen how ChatGPT is an excellent chameleon of sorts: you can ask it to write an email in the style of Shakespeare, and it'll do that. It can generate text tailored to a specific audience or purpose. How do you foresee this changing marketing and advertising? Well, David had a good example about the ad agency that was going to use ChatGPT to generate ad copy and then have the copywriters be literally editors. There's already a famous example of this: Stitch Fix.

Stitch Fix is a customized fashion company, and they are doing something very similar: their ad copy is generated by AI and then edited by the copy editors. They do the same thing with product descriptions. There are many thousands of product descriptions on their site, and they need to be carefully edited to be consistent with the brand message and the values of the company.

They now have those generated by AI and, again, edited by the editors. There will be significant implications for the number of people they need to employ as copy editors. And it's not just marketing, obviously. If you think about finance, or about business analysts and data scientists in companies, all of them have an opportunity to be much more effective and efficient. They also have opportunities to expand beyond the skills, technologies, and tools they were usually using. ChatGPT makes that possible. There are definitely going to be implications.

I don't think we're anywhere near the point where it could literally take over an entire person's job. But it can take over tasks, and it can enhance people's ability to do tasks more efficiently and more quickly. I think there's widespread concern about the implications for the job market, and there's going to be a lot of reshuffling of positions and changing of the scope of positions. Again, I don't think it will replace a specific person, but I like this quote from somebody I follow on Twitter, who said: AI will not replace you; somebody using AI will. I think that's incredibly likely. Robert, this next question is for you. We've mostly focused on GPT, which is a system for generating text.

But there are other models, such as DALL-E for generating images, and soon, as David mentioned, video. These models have rapidly grown in quality. Just this last weekend, I saw that the judges at the Sony World Photography Awards awarded a prize to an AI-generated image. They didn't know it was AI-generated.

This has raised concerns about the place of AI in the arts. Do you see generative AI and the creative arts as necessarily being at odds? Or is there a place for AI in art? Can AI-generated art be considered authentic in a sense? Yes. I'm speaking from the perspective of a practicing artist working with emerging technologies, and also a researcher who's building some of those technologies.

To the first part of that, are AI and art at odds? First, I just want to say that I think these contemporary large language model or deep learning-based generative techniques are just the latest point in a long arc of emerging technologies that artists have engaged with. Oil painting was at one point an emerging technology that allowed for a more persuasive representation of light, of flame, of transparency. Same with photography.

Photography, in some sense, was an emerging technology that displaced painting, drawing, and other kinds of traditional representation. We see this again and again, and I'd say the tools available now are in some ways just a continuation, the latest emerging technology that artists will readily and happily adopt. I think the more complicated question, though, is: in what way are these tools different in kind from the other ones I mentioned, like Photoshop or photography? To some extent we're getting better tools. Runway ML, for example, is a company in New York that makes web-based video editing tools that do things like background subtraction and depth of field at a level that would otherwise take a lot of human labor, so there's that idea of automating menial work, the things David mentioned earlier. Better tools are one thing we are getting. Another, I think, is really new tools, or new modes of interacting with tools. These language generation models, and also the text-to-image models, intervene at the level of concept or ideation or language, which is a different place for a computational tool to be for, say, a visual artist or a filmmaker than some of the traditional technologies used in those practices.

I think these things can also play new roles in giving visual form to ideas. Coming back to the menial labor part, text-to-image generation allows people who, say, can't draw or 3D render to produce images that represent concepts. These can then be used for storyboarding, ideation, and some phases of creative production across all fields.

To come back to this idea of authenticity, I think that's a pretty loaded term in the arts, but one that artists love to think about. Authenticity has to do with the ways in which these tools facilitate an authentic expression or exploration or critique. It hearkens back to Walter Benjamin and the idea of the aura, the handmade object that has some presence. So are the outputs of generative systems authentic? I think it depends on how you use them. It also raises this question of attribution, though. Whose data were these tools trained on? Whose style are we using as keywords to prompt images? And what biases exist in the distributions of the training data? Those things limit the space of what can be expressed with these tools, and they also raise all kinds of issues about copyright, attribution, and creative work.

I think I'll leave it there. But as a visual artist, I might criticize the outputs of some of these text-to-image systems, things like DALL-E. Actually, I want to see: who here has used DALL-E, Midjourney, or Stable Diffusion? Okay. Try them out; you can find them online. But what I'll say as an artist, and the criticism I might make, is to wonder whether they're a normative influence.

Are we just reproducing cliches, recognizable styles, trademark styles by artists? I don't know; that's an open question here. Do they expand the creative possibilities? Or are they reinforcing existing patterns, methods, and tools? Interesting. A quick follow-up: I've seen ChatGPT called a blur tool for the Internet, or a blurry JPEG of the Internet. Is DALL-E a blurry JPEG of art? Yeah, but of what art? There are so many different art worlds. A lot of the images were scraped from, I forget, DeviantArt or something, these online bulletin-board-style posting sites. That's one art world.

There's the art world of historic European painting; there's experimental video; there are so many art worlds. So, to that Ted Chiang article asking whether ChatGPT is a blurry JPEG of the Internet: it's hard to imagine a totally inclusive system that could capture all those nuances of style and origin. Thanks. Tiffany, next question for you. We've seen that large language models are, at the simplest level, models for predicting the next word in a sentence. You work in genomics, and genomic data is in a sense text.

It's a sequence of Gs, Cs, Ts, and As. I'm curious whether the underlying technology of large language models has been put to use in genomics, for maybe predicting the next gene in a sequence. Yeah, thanks, Justin. This is a great question. Actually, these LLMs have been used in genomics for quite a long time, and there are a few examples, at least one of which I'll give today, that a lot of you have probably heard of.

But first, I just want to lay the groundwork for why applying large language models to genomics is actually much more complicated than applying them to human text. In the sentences of human language, we have words; they are building-block units we can work with. DNA sequence, on the other hand, is three billion base pairs with no spaces in between. At the level of regulatory elements and enhancers, we as scientists still don't have a good idea of where these start and end. A lot of them are also context-specific, and we don't really know all of their functions.
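To make the "no spaces" problem concrete: one common workaround, used by DNA language models such as DNABERT, is to chop the sequence into overlapping k-mers that play the role of words. A minimal sketch in plain Python (the parameter choices are illustrative, not from any particular model):

```python
def kmer_tokenize(seq: str, k: int = 6, stride: int = 1) -> list[str]:
    """Split a DNA string into overlapping k-mer 'words'.

    DNA has no natural word boundaries, so overlapping k-mers are a
    common stand-in for tokens when feeding sequence to a language model.
    """
    seq = seq.upper()
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(kmer_tokenize("ACGTACGTAA"))
# ['ACGTAC', 'CGTACG', 'GTACGT', 'TACGTA', 'ACGTAA']
```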

Aside from that, there are all these higher-order interactions, not only between proximal positions in the genome but also very distal connections, and that makes the search space even harder to deal with. But here is one example. Possibly because the jump from DNA sequence to amino acid sequences gets you much closer to words in a sentence, LLMs have been used for protein folding. A lot of you have probably heard of OpenFold or DeepMind's AlphaFold. This is effectively a model where the input is an amino acid sequence and the prediction is a full 3D, atomic-resolution structure of the protein. This is incredibly useful because you need it for drug discovery: you have to have a very high-resolution picture, a notion of exactly where every single atom in that structure is located, in order to understand the binding dynamics, what that drug might do in vivo, and things like that.
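The panel doesn't walk through any particular tool, but as a hedged sketch of the sequence-in, structure-out interface described above, here is roughly how ESMFold, a language-model-based structure predictor from the open-source facebookresearch/esm package, is typically called (assuming that package's published API; AlphaFold and OpenFold expose a similar sequence-to-structure interface):

```python
import torch
import esm  # assumes the fair-esm package with ESMFold support installed

# Load ESMFold, a protein-language-model-based structure predictor.
model = esm.pretrained.esmfold_v1()
model = model.eval()  # inference only; no gradients needed

# A toy amino acid sequence, not a real drug target.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

with torch.no_grad():
    # The model returns atomic coordinates as a PDB-format string.
    pdb_string = model.infer_pdb(sequence)

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```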

Often, when you try to generate this structure experimentally, you can get a blurry image. So I actually like these models: trained on millions and millions of sequences, they have predicted many 3D protein folds with high accuracy, and they save all of that laborious experimental effort. Pointing back to what David was mentioning, this is a perfect example of where you can automate something, get near experimental accuracy, and also have a lot of healthcare implications. Aside from that, more recently there was a preprint on bioRxiv about predicting the next COVID-19 variant using large language models. This is just a preprint, so keep that in mind. [LAUGHTER] It was based on a collaboration between NVIDIA and researchers at the University of Chicago and Argonne. It's really interesting because you can also use large language models for modeling evolutionary trajectories in genomics, and you can use that to predict where the next mutation is going to be, and it might not just be in the spike protein.
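The preprint's method isn't described in detail here, but the general idea behind language-model scoring of mutations can be sketched: rank candidate single-site substitutions by how probable the model finds them in context. The sketch below is illustrative only; `token_probs` stands in for a real trained protein or genomic language model, and the uniform stand-in scorer exists just so the example runs:

```python
from typing import Callable

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def rank_substitutions(
    seq: str,
    token_probs: Callable[[str, int], dict[str, float]],
) -> list[tuple[int, str, float]]:
    """Rank every single-site substitution by model probability.

    token_probs(seq, pos) is assumed to return the model's probability
    for each residue at position pos given the rest of the sequence,
    a masked-language-model-style query. High-probability substitutions
    are the ones the model considers most plausible next mutations.
    """
    candidates = []
    for pos, wild_type in enumerate(seq):
        for residue, p in token_probs(seq, pos).items():
            if residue != wild_type:
                candidates.append((pos, residue, p))
    return sorted(candidates, key=lambda c: c[2], reverse=True)

# Uniform stand-in scorer so the sketch runs end to end; a real system
# would wrap a trained model here.
def uniform_probs(seq: str, pos: int) -> dict[str, float]:
    return {aa: 1.0 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}

print(rank_substitutions("MFVFLVLLPLVS", uniform_probs)[:3])
```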

This is really helpful information that can accelerate the development of vaccines, or just let us have them ready in our back pocket for when these new variants do arise. Looking toward the future, I'm excited to see whether large language models will be able to do a task that I think a lot of experimentalists would be really thankful for. For example, if I could ask ChatGPT, "Can you generate a DNA sequence that has a specific functional effect?", that would be excellent, because then you would have a very controlled experiment: you could design all of these sequences, transfect cells with them, and observe the functional effects that can be important for disease. I hope that's where the future is going and that we can automate that process. What unfortunately comes to mind, though: I think David mentioned the scale of bad actions that large language models have opened up.

We could imagine a large language model being used to develop new vaccines, or a DNA sequence that has a particular function, but those might not be good functions. So do you see a danger there? Absolutely, I definitely see a danger there, also from a privacy standpoint. I feel like there have been a lot of high-media-coverage stories about generating either viral sequences or other sequences that could essentially qualify as biological warfare. [LAUGHTER] Not to put too much emphasis on it, but there are examples out there, so there's a lot of careful regulation that has to be accounted for, because you could literally order whatever sequence you want and then unleash very damaging effects.

Dr. Longhurst, I'd like to go back to you for the next question. On one hand, ChatGPT seems to offer upside: it offers patients the ability to access low-cost medical advice at all hours of the day, which I think you were referring to earlier. But at the same time, there are clear and obvious dangers, especially given that it tends to hallucinate and make things up. Given that trade-off, what do you see as the role of ChatGPT between the doctor and the patient? That's a great question. We've seen a lot of examples of hallucination from GPT, even in healthcare settings, which is obviously concerning. So, as I mentioned previously, the way we're dipping our toes in the water at UC San Diego Health is by using GPT to draft responses to patients that are then reviewed and edited by clinicians and members of the care team before anything gets sent.

There is an article just published by Dr. Peter Lee from Microsoft showing examples of incorrect output by GPT. For example, when asked how it learned so much about metformin, a diabetes drug, GPT says: well, I received a master's degree in public health and volunteered in diabetes clinics. Interestingly, though, GPT-4 can then be used to validate itself, and it recognizes that GPT should not, in fact, be answering in that way.

This is something we had been thinking about even prior to the large language models. We've been working on a variety of different AI and machine learning algorithms in our healthcare system. Imagine speaking with your doctor: in a situation where they didn't have confidence, would you rather have them make something up, or say "I don't know, and I'll get back to you"? Certainly we felt that "I don't know" is the most appropriate approach, so our team has actually taught a sepsis prediction algorithm how to say "I don't know" in the roughly 8-10% of cases where it doesn't have enough information to make a confident prediction. We've been testing this in production for 6-10 months now, and we're getting very positive results.
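The talk doesn't say how the abstention is implemented; a minimal sketch of one common approach, thresholding the model's predictive confidence into an ambiguous band, might look like this (the threshold values are illustrative, not UC San Diego's):

```python
import numpy as np

def predict_with_abstention(probs: np.ndarray,
                            low: float = 0.35,
                            high: float = 0.65) -> list[str]:
    """Map predicted sepsis probabilities to a three-way output.

    Predictions whose probability falls in the ambiguous band between
    `low` and `high` are returned as "I don't know" rather than being
    forced into a confident yes/no call. A real system would calibrate
    the band on held-out data.
    """
    labels = []
    for p in probs:
        if p >= high:
            labels.append("sepsis risk")
        elif p <= low:
            labels.append("no sepsis risk")
        else:
            labels.append("I don't know")
    return labels

print(predict_with_abstention(np.array([0.92, 0.50, 0.12])))
# ['sepsis risk', "I don't know", 'no sepsis risk']
```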

One of the things we're hearing from our clinicians is that the algorithm's willingness to say "I don't know" is actually increasing their trust in it. Conversely, these large language models hallucinating and just making up answers obviously decreases trust as well. Thanks. I want to go to Robert now for the next question. GPT is very good at imitation, as we've seen, and one of the striking applications I've seen is that by fine-tuning a model on your text messages, maybe a text message conversation with someone, you can have it replicate that person and their way of speaking, or replicate yourself.

I've actually seen people use this to build an AI chatbot that imitates a deceased loved one, for instance, which I think has mind-bending implications. Your work is in this area, at the intersection of empathy and human-computer interaction. I'm curious what it can tell us about the good and bad of that application. Great. I'd group all of this under the realm of imitation, intimacy, or a desire for companionship, or even beyond that, maybe a desire for communion, some kind of deeper, meaningful connection to another human or non-human. There's such a history of this in computing, back to the early days of programming: Joseph Weizenbaum at MIT with ELIZA, this really simple therapist bot he wrote, which basically got his coworkers and friends hooked, and that totally threw him for a loop.

You mentioned the Jessica simulation, or more recently there's Eugenia Kuyda's Roman simulation, which recreated a deceased friend and has now turned into the company Replika, whose business model is really creating custom chatbots that you can have ongoing relationships with. My question with this is always: what are we interacting with when we interact with the model these things construct, what kinds of affect loops? Especially if you're talking about deceased loved ones, there's a self-comforting or psychological self-stimulation that you're looking for from that interaction. I think there's also an inherent theatricality to it, a kind of anthropomorphization: humans want to read intent, want to project a coherent subject onto the thing they're conversing with, but it's all theater. You've loosely skinned something as a certain character, and so it's really a space for projection. Thinking about risks and rewards: on the upside, there are possible applications. Loneliness is a real thing with real health consequences, so I think the idea of bots that could provide some kind of companionship can be a wonderful thing.

Maybe also, to some extent, this problem of hallucinations or truthfulness doesn't matter as much in some kinds of casual conversation or interaction, although you wouldn't want a companion bot to go off the rails. I definitely think these things can serve some psychological needs. The question for me, and with a clinician on the call, others might have more informed opinions: there's a company called Woebot making a therapist bot. It can maybe serve some need, but does it have a clinical or therapeutic use? In the world of the arts, we welcome the hallucinations; maybe it's just about the affect and the interaction. Maybe these things work best in a more performative or artistic frame.

Interesting. Thanks. I want to open up the next question to the committee, to the panel: academia. [LAUGHTER] It's likely that the introduction of AI is going to change the nature of work in a lot of fields. What skills do you foresee becoming less important because of these generative AI models, and which skills do you see becoming more important, perhaps?

Skills that are going to become more important: in some ways, maybe creativity and the ability to go outside of your usual box. I think of AI in some ways as access to coding. Where before you would go and do a Hello World, now we can do something like that, but on steroids. Anybody who isn't even familiar with coding, or maybe wants to try a new language, can get started and build real things very quickly. That also has lots of risks, if they open up their computer's hard drive or networks to things, for example. But I think there are lots of opportunities to expand outside of the box you're usually in, just because this makes it easier for you to take the first step, and it's also not judgmental.

If you make a mistake or you don't know something, you're maybe not as afraid to ask, whereas this will just tell you the information. As for skills we can probably lessen our attention on, aside from the easy ones like spell check and code checking, I was really trying to give ChatGPT the benefit of the doubt when thinking about this one. I realized that a lot of successful interdisciplinary science has been done by linking concepts; for example, computer science algorithms from the 2000s became really popular in genomics just in the last 10 years. I was thinking that across disciplines, where I as a geneticist am not necessarily exposed to concepts from other fields, this AI could make inferences about what could be useful methodology across disciplines and maybe accelerate our ability to apply other methods to new problems.

Maybe instead of waiting for time to pass to think of these natural connections, it could suggest them to us much quicker. Then, in terms of skills where we still need to make sure we're top-notch, critical thinking and interpretation are super important, because just as David was mentioning earlier, I've had the same problems with ChatGPT making up sources, making up citations, papers that don't exist. It was citing authors I had as previous coworkers, and I knew they hadn't researched those topics, yet ChatGPT said this was the conclusion from their paper, and I said, nope, they've never researched that. So it's about making sure we know how to use ChatGPT as a tool, not as ground truth: using it as a search tool, much like one learned how to use Wikipedia or Google, letting it be the tip of the iceberg in your search on whatever question you're asking, and following up with actual sources and making sure they're correct. [APPLAUSE]
