Practical Generative AI #152 | Embracing Digital Transformation | Intel Business

Hello, this is Darren Pulsipher, chief solutions architect of public sector at Intel, and welcome to Embracing Digital Transformation, where we investigate effective change leveraging people, process, and technology. On today's episode, Practical Generative AI,

with special guest Dr. Jeffrey Lancaster. Jeffrey, welcome back to the show.

Thanks for having me again. And we had such a great time talking last time about generative AI and what it is. Now let's talk brass tacks: what can I do with this new technology? You and I both agree this is a pivotal, watershed moment, or whatever the buzzword du jour is. It's going to change a lot of things.

How so? What are we going to use it for? So what do you think, should we go that direction? That sounds great. When I think about the question of how it's going to be used, you know, I mentioned last time we talked about the shift in mindset required to move from information retrieval, which is really the Google, Bing, whatever, the search-engine view of the world, to one where the tool is helping you generate something. It's in the name: generative AI,

because it is producing some output. And so then you think about, okay, what are the areas where people are producing output? Content creation is a huge area, and creativity and creative endeavors are a huge area. A lot of the processes that organizations have and use require a lot of content generation, or even content aggregation as well. And so there's a lot of opportunity there to expedite things, to make them more efficient. All of us who've ever had to fill out a stack of forms have probably thought, you know, there's got to be a better way to do this.

And beyond just having something take information from one database and populate another. So I think it's important, when we're talking about generative tools, to distinguish them from robotic process automation. There are a lot of great use cases for RPA as well, but when I think about RPA, I think a lot about the rote mechanism of things. Where generative AI gets really interesting is the fact that you can almost tune how creative you want it to be. There are going to be some use cases where you want zero creativity.

You want it to just... Like, yeah, no creativity. I was just talking to a vendor, and they may come on the show; they're going to actually control their infrastructure with generative AI. They don't want creativity there at all.

They don't want hallucination. So a lot of people have heard this term hallucination, and it means something a little bit different when you're talking about humans versus generative AI. But a hallucination is really kind of a metric of how creative you want the thing to be, and how much you want it to stick to the guidelines and the framework versus doing something that you, as a human, might not expect it to do. And so there are other times where you want something to be incredibly creative. If you're using it to, let's say, expand an image, you might not want to have to dictate exactly what goes into that expanded image. And there are some great use cases from Adobe.

The feature they have is called Generative Fill. I can take an image, and I know what the borders of that image are, but if I expand it on my canvas, I can actually get Generative Fill to fill in what goes around it. And that's a case where you might want to tune how creative it can be, because you might want it to not dream too big, right? Take the frame that I'm in right now. I'd want it to complete the window here and know that that's a ceiling. I wouldn't want it to add a unicorn, or put me in space or something like that.

You know, I'd want it to at least make sense. And so with each of those, that to me is still where the human plays a part, because the human is going to have to tell the generative AI: how creative do I want you to be, and what are the guardrails that I'm going to give you? Okay, let's touch on that a little bit more, because we've heard the term hallucination before, and I'm glad you differentiated it. It's not the same as hallucinations in people, right? But a hallucination in the AI world, and I've never heard it explained the way you said it, means being more creative, creating something that doesn't really exist.

I always saw it as just making stuff up, a lie, you know, like the generative AI is lying to you. It's not lying to you. It doesn't have any intent behind that.

But what it's giving you is information that may or may not be factual or truthful, which is fundamentally a creative exercise. I like that approach, I really do. Because you're saying we can now tune that creativity in the AI. So if I want fewer hallucinations, I turn that creativity down. That's right.
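The "creativity knob" being described here corresponds, in most LLM APIs, to a sampling parameter usually called temperature. As an editorial sketch (the code and numbers below are illustrative, not from the episode), here is that mechanism in miniature:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Softmax with a temperature knob, then sample one index.

    Low temperature sharpens the distribution (predictable, "creativity
    turned down"); high temperature flattens it (more surprising picks).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                         # draw an index from the distribution
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i, probs
    return len(probs) - 1, probs

toy_logits = [2.0, 1.0, 0.1]                 # made-up next-token scores
_, cold = sample_with_temperature(toy_logits, temperature=0.1)
_, hot = sample_with_temperature(toy_logits, temperature=10.0)
print([round(p, 3) for p in cold])           # sharply peaked on the top token
print([round(p, 3) for p in hot])            # close to uniform
```

Setting temperature near zero makes sampling nearly deterministic, which is what you'd want for the infrastructure-control case mentioned above; raising it spreads probability onto less likely tokens, which is where the surprising, "hallucination"-prone behavior comes from.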

And so when, let's say, somebody is trying to provide an information access point, and you see this a lot with government, you see this a lot with wayfinding, there are different use cases where you might want to provide information to somebody. You probably don't want it to take a lot of creative liberties in where it's directing somebody; they still want to get there. But you might still take creative liberties in the language that's used to describe how to get there. And so that's where, even within one single use case, you might still be tuning it. You're not really cut and dry, which is going to be the MapQuest or, you know, Waze style: this is what the direction is. You might want it to be a little bit flowery, to be able to say, okay, you're going to go down about two blocks, there's going to be a beautiful tulip tree on your left.

You know, that's where you're going to take a right. You're going to cross the street. You're going to go down into the park. That ability to recreate the human language of it requires some amount of creativity, because the prose that you get from most of the direction-giving apps is pretty cut and dry, right? It's take a right, there's a stop sign, turn left, things like that. That's not the way that you would give directions to somebody else.

No, no, not at all. You would have a much more flowery, prose-based way of doing it. Now, that's neat.

What I think is neat about generative AI as a whole is, you know, we're talking about text generation now, but you can start to think about creativity in other media too. Whether that's imagery, which I mentioned, but what does creativity mean for code? What does creativity mean in sound? What does creativity mean when you start talking about creating genetic sequences and things like that, which are fundamentally text in and of themselves? That's where I think having a background, the ability to understand what the right level of tuning ought to be given the outcome that you're trying to get to, is really, really important. The human is going to be kind of the mediator or the moderator of the generative AI. It's going to be the prompter.

You know, you see a lot of news about prompt engineering, but fundamentally, I think, whether it's the ringleader or the handler or whatever word or analogy you want to use, the human is still going to have a role in guiding the outcome of whatever tool we're using. Well, it sounds to me like the human has more than a role. It sounds like they're the expert, because ultimately I have the knowledge and I'm using the AI. And I love the term that came up: augmented intelligence. Yep, I love that term. I'm using that augmented intelligence to do the mundane things for me, right? Gathering information for me, putting it in a more descriptive language that I can't necessarily get out of my head.

But the expert knowledge, the subject matter expert, is still me. Yes and no. And this is why I say yes and no to that. Yes, you're absolutely right in terms of: I as a human know what outcome I'm looking for.

I know when I get there; I know what I want this thing to be. But I as a human have really bad recall over large amounts of information very quickly, right? And that's where the generative AI is going to be really useful. For a tool to be able to draw on the collected knowledge of the World Wide Web through a certain point in time, that's something that my teeny little human brain can't comprehend.

But that comprehensive knowledge also doesn't know what thing I'm looking for, either. And so I'll know it when I get to it. But you may not know it up front. I don't have an encyclopedic knowledge of every discipline that's out there. And so using that tool to say, okay, let me go out and pull together the right information, or what I think is the right information, for me, I guess is kind of this Pareto-principle-ish thing.

You know, the Pareto principle is the 80/20 rule, typically used for time management. I think about it as: if I can get the AI to do 80% of the work, I still have 20% left over to do, but that's now made me more efficient at whatever I'm trying to do, because it's gotten me most of the way there.

I like that. But do you think, then, that as humans in this symbiotic relationship with augmented intelligence, we become more knowledgeable, if not more creative, ourselves? I'm trying to see where we play in this. We're not doing the heavy lifting.

We're doing the strategic thought. It almost reminds me of Industrial Revolution 3.0, or even the first Industrial Revolution, where we started mechanizing things for the first time. And people said, oh, you're going to destroy people's jobs. No, it shifted their jobs, right? And people started living longer. Why?

Because they weren't getting killed in factories, or getting burned at a forge, or having chronic back problems from being a blacksmith their whole lives. Now you had machines doing things that humans were doing. This sounds a lot like the same thing, but for information workers. Is that similar? I think that's really fair. And you can even extend it beyond just information workers.

But this idea, and why I think a lot of people are scared, is because, well, one, change is hard, and it represents people maybe having to tap into their brains in a different way. But I do think what's going to happen is you're going to see the skills necessary to do things shift. Whereas once upon a time you might have needed somebody trained in technical writing, well, now if I feed in a lot of technical writing that's been written, I can get GPT to write in the style of a technical writer. So do I need somebody exactly for that? Maybe I don't need the technical writer anymore, but I still need the human to have the expertise about the topic of the writing, to make sure the writing is actually accurate and correct, and that what's being captured in that writing is applicable to the case that I need.

It may be in a style, and it may sound technical, and this is where people are getting in trouble nowadays, but it has to have real content, the meat of what you want it to say. You see this in the law, you see this in other disciplines, where people write in the style of something, but that only gets you so far. And so there are some other cases that I've seen be really interesting.

And I think one of the best ways of prompting, at least, is to tell it to act like a particular persona. So in my world, you might say: act like a CEO and generate a strategic plan that accomplishes this, this, and this. And I've talked to CEOs who are doing this, not to replace their job, but what used to take months and months to get to a starting point for a strategic plan is now ready in a matter of minutes, if not seconds. And so what it does is shorten that cycle up front, but it's going to extend the editorial cycle, because instead of starting with that knowledge, you're starting with a frame, a skeleton to hang something on.

And you just have to make sure that that skeleton matches. Let's talk about specific uses now, because I think we've said it's aggregation, it's generation, right? You brought up a great one, which is: get me unblocked, get me started, like, I need a strategic plan. Great. That's a good starting point, because I know a lot of times you're sitting there going, a strategic plan? What do I do? And then there's what you'd call information gathering, which we learned how to do in the late nineties and early 2000s with Google, Yahoo, AltaVista.

And for you new people, those were really good search engines back in the day. We went gathering information. So if we play the role of the CIO and I'm going to work on a strategic plan, I might go to Google and say: CIO strategic plans, scholarly articles, or best practices, whatever, and I'm going to get 150,000 hits. I'm going to search through all those things, find something that matches what I'm thinking, or I'm going to refine my search: all right, strategic plans for a midsize manufacturing business. Yes. Well, and that's what most people would do; they would go to their peers. So in higher ed, you'd go to your peer institutions, the other schools that you compare yourself against, find their strategic plans, and bring those back, because they're public knowledge.

Public knowledge, and I'll use that as a starting point. And that works really well for higher education. But you're still having to do that legwork to go and aggregate and make sense of it. Where I think a lot of the generative tools start to get really powerful is that I can still do that same thing,

but instead of me being the one to have to make sense of it, what if I could feed it into an engine that takes that content and spits out almost a synthesis of it? It's taking that kind of synthetic brain work and making it happen much, much more quickly. I don't even have to read all of it. I can just say: here are ten strategic plans from my peers, feed those into GPT, and ask, okay, what do they have in common and what's different about them? So I can get it to do a compare and contrast.

And that's a really popular way of using these tools. I can tell it: make me a table, and in the table give me the top ten ways that these are similar and the top ten ways that they're different.

What are the themes that are present in each of these? Is there anything that's, let's say, institutionally specific? So I can use it to begin to ask questions that I otherwise couldn't. You know, I can't Google that. I can't Google "how are these ten strategic plans the same or different?" That's not a Google-level question. But GPT can give me an answer that digests all of that for me. And that's really, I think, the crux of shifting people's thinking: it's doing that kind of knowledge-formation work on your behalf.
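The persona-plus-compare-and-contrast workflow described above boils down to assembling a prompt string before anything is sent to a model. As a hypothetical sketch (the function name and prompt wording here are illustrative, not a real product's API):

```python
def build_synthesis_prompt(persona, documents, top_n=10):
    """Assemble a persona-framed compare-and-contrast prompt.

    `documents` is a list of (title, text) pairs, such as peer
    institutions' public strategic plans. The returned string would be
    sent to whatever LLM you use; no API call happens here.
    """
    header = (
        f"Act like a {persona}. Compare and contrast the documents below. "
        f"Make a table with the top {top_n} ways they are similar and the "
        f"top {top_n} ways they differ, then list the themes present in each."
    )
    body = "\n\n".join(
        f"--- Document {i}: {title} ---\n{text}"
        for i, (title, text) in enumerate(documents, start=1)
    )
    return header + "\n\n" + body

plans = [
    ("Peer A strategic plan", "Grow enrollment and expand online programs..."),
    ("Peer B strategic plan", "Invest in research computing and partnerships..."),
]
prompt = build_synthesis_prompt("CIO of a midsize university", plans)
print(prompt.splitlines()[0])
```

The point of the sketch is that the synthesis question "how are these ten plans the same or different?" becomes a single structured request, the kind of question a search engine cannot answer.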

So I like that, because now you're still interacting with the tool. You don't just send it a paragraph and say, oh, whatever it spits out, I'm just going to take. That's where people get in trouble, not looking it over, because at that point you've given up your autonomy to a certain extent. That's the point where you don't need the human anymore: if you're just going to take whatever it gives you, you're cutting yourself out of the process.

Right, right. And that makes sense. So the difference, the shift, because I really want people to understand the difference, is that the aggregation of data and the comparison of data can now happen with the generative AI, where with Google I really can't do that. No. So you're moving yourself up the value chain in a lot of respects, right? Because now you're saying: you go do that mundane comparison work and give me the results, so I can choose what parts of it I think are going to be best for my specific situation and for who I am. And to me, this is a concern I actually have, that people just start taking whatever it gives them. We already have this problem with Google, right?

Ask your kids. Your kids are still small; my kids are adults now. Ask them where truth comes from, and they'll say Alexa, or they'll say Google.

Yeah. And I'll go, oh, well, but it's also not the Encyclopedia Britannica. I mean, an answer to that question is actually pretty hard. And so another framework that I really like to use is the DIKW pyramid.

I don't know if you've ever seen this, but it's data... No, I have not. Data, information, knowledge, and wisdom. Oh, yes, I do know it. And so I think what Google gives us is information, and what some of the generative tools give us is knowledge.

Now, whether that knowledge is wise or not still requires the human. So the human is still sitting up at wisdom, but it gets us to wisdom more quickly, because it does some of that synthesis of the data and the information to get us to knowledge more quickly. I really like that. I would have put Google down at data and information, and generative AI at information, but I like where you've put it. It makes more sense. And we're sitting on top with wisdom, you hope.

I mean, yeah, some are more wise than others. And that's where I... Think all you have to do is look at TikTok, you know? But I think, when people are thinking about use cases for this stuff, you don't have to get to the point where you're saying, let me give you wisdom, let me give you knowledge.

Even these tools can be used for the creation of information, or the communication of information. And the reason is that a lot of the generative algorithms have a perception of empathy; there's a human-like quality to the way that they're spitting information back at you, because of the way that they're built. And there are times where you might just want a more human interface to the knowledge base that you've already got sitting somewhere. Because to access that knowledge base, to search and query it, it's not particularly friendly, it's not particularly multilingual. It doesn't connect to humans in the way that humans need to connect.

So can you use some of these generative tools to provide that interface, so that it feels like you're talking to a human, it feels like you're getting knowledge, even if underlying that knowledge it's really just synthesized information? I like that, because I've seen a couple of cases now where they're using generative AI as the interface. That's right. Including a replacement for Alexa. I have Alexa in my house, so now my kids have another tool, and some people say, no way, I'm never putting that in my house.

I'm like, okay, whatever. Everyone already knows everything anyway, because you shop online. But the new interface is much more friendly. I don't have to be so prescriptive in the way that I say things. If I'm trying to get the exact album I want to listen to,

I have to say it exactly: the artist, the album, the year. I don't have to do that with generative AI; I can have more of a conversation. So I like that idea, not just in home automation, but in user interfaces generally. Yep. Another company I worked with, I think I mentioned it a little bit, is putting a generative AI front end on their infrastructure manager.

Yeah, what a great idea. Meaning: hey, reboot all the machines that have this version of the BIOS, or update the BIOS on all these machines to this version. Done. I mean, before, what would I have to do? I'd have to go run a query against everything and check it all. With this, it's more like the way that I interact with the world. That's great. I think it's cool.

And the question is, where does that end, right? Do I ever get to a point where I've trained this system-monitoring AI to act in the case of certain conditions, and then I say, okay, from here on, whenever you perceive those conditions, you know how I want you to act? That's not so far-fetched now. No. And where I think the value is in doing that, or at least setting up the building blocks to get there, is that it's not so you can get rid of the network manager, or the people who had been providing those instructions. But if you think about how often they had to provide those instructions, how often they had to write that query today, now you're saying: okay, that's going to free up your time to deal with edge cases, to make sure that everything is running the way that it's supposed to, to make sure that we're keeping up with the way the world is changing, to do the things that you really want to pay somebody to do. You don't really want to be paying somebody to do the rote checking that certain numbers of machines are up and decide what to do when they're down.

That's something that the machine can handle. Okay, so that sounds a little bit like robotic process automation. Well, and you can combine the two; it's not to say they're mutually exclusive. What you're describing to me is a generative AI

front end, maybe interfaced with some kind of an RPA layer on the back end, where in order to interpret what I'm saying, I need the large language model to convert the way that you and I would speak about it into the instructions that run on the back end. And so what you bring up is a really good point: these things don't need to stand alone, and they don't have to stand alone.
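The pipeline being described, a language model translating a spoken request into instructions an automation layer executes, can be sketched end to end. In this editorial sketch a regular expression stands in for the LLM step so the hand-off stays concrete and self-contained; every name and field below is hypothetical, not a real infrastructure manager's API:

```python
import re

def interpret(command):
    """Stand-in for the LLM: turn a natural-language ops request into a
    structured action. A production system would prompt a large language
    model for this; a regex keeps the sketch runnable offline."""
    m = re.search(r"reboot .*?bios (?:version )?([\w.]+)", command, re.I)
    if m:
        return {"action": "reboot", "filter": {"bios_version": m.group(1)}}
    return {"action": "unknown", "raw": command}

def execute(action, inventory):
    """Toy RPA/back-end layer: apply the action to an in-memory machine
    inventory instead of real infrastructure."""
    if action["action"] == "reboot":
        wanted = action["filter"]["bios_version"]
        return [name for name, bios in inventory.items() if bios == wanted]
    return []

inventory = {"node1": "1.2", "node2": "1.3", "node3": "1.2"}
act = interpret("Reboot all the machines that have BIOS version 1.2")
print(execute(act, inventory))  # the nodes whose BIOS matches 1.2
```

Swapping the regex for an actual model call is the only change needed to get the architecture described in the episode; notice that the execution layer itself stays deterministic, which is exactly the "zero creativity" requirement from earlier in the conversation.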

And in fact, if you think about the way that a lot of the systems I see are built, they're built from a composite of different AI models put together. We're talking about generative AI, but there are a lot of other models, too.

You know, there's a lot of natural language processing; there's a lot of sentiment analysis, which is a subfield of that; there's a lot of machine translation; there's a lot of entity extraction; there's dialog tracking. There are all of these pieces that go into something that looks like magic. And one of my favorite quotes is the Arthur C. Clarke quote: any sufficiently advanced technology is indistinguishable from magic. Well, a lot of these things are looking like they're magic.

And because of that, it's putting up a barrier to entry, because people are saying, well, that's magic, I can't possibly do that. But I think if you look under the hood, what you see is typically a handful, probably three or four or five, different algorithms working together to make this human experience really engaging and really compelling. And that's why I think it's sparking people's imagination, because for the first time, you're seeing how these pieces that previously existed fit together, and how they can do something that seems really magical. Okay.

Let's talk about some practical use cases that you see people using every day, everybody, not just sysadmins. The first one that pops into my mind is communication, written communication specifically, probably email or PowerPoint presentations or papers, memos, whatever. Even if you have to write a love letter to your significant other. Do you see that? To me, that's probably the number one case that I see people moving to first: I need to write better emails. And it's the tyranny of the blank page. Well, there are two sides of it.

One is the tyranny of the blank page, which is the classic picture looking over the author's shoulder at the blinking cursor, and they just don't know where to start.

So I think getting a kickstart for an email, a chapter, a white paper, a wedding speech, whatever it is, these tools are allowing people to do that much, much more efficiently than, again, going out and aggregating: hey, give me all of the wedding speeches that have ever been written or given, and I have to go do all of my research before I get started writing. The other end of it, which I think is really interesting and compelling too, is: I've written the thing. Can you make it better? Can you improve my writing? It's the old Microsoft Paperclip thing, but on steroids. Not just can you fix my spelling, but can you change the voice of what I've written? Can you make it longer? Can you make it shorter? Can you change the way that it's presented? The ability to modify after the fact is as powerful as the getting-started piece. And so, you know, I don't know about you,

but I know a lot of people who are very verbose with their emails when they maybe don't need to be. So this would be one way that you could start to say: okay, the email doesn't need to be three pages long, because no one's going to read all of that; cut it down to a paragraph. And this is now a tool which can do that in a way that is still very intelligent, so that it's not removing the intent of what you've written, but it provides you with a way of slicing it down to the very important, critical thing, when a lot of people have a really hard time doing that slicing.

Yeah. Okay, so here's probably the big question a lot of people are going to ask around all these use cases we've talked about: is this going to cause a loss of skill and knowledge that people rely on today? I mean, yeah, any crystal ball question is always hard.

I don't think so. I think it's a new skill that people need in order to be able to function in the business world, and a lot of people will use it in their personal lives too. So I don't think it's going to take anything away.

I think it's actually going to augment. And it'll be interesting to see how that augmentation bears out, whether employers expect people to be able to use tools like this. I definitely think you're going to see some of these tools change particular fields. It's going to change photography, for instance, because I can now get Midjourney or Stable Diffusion or DALL-E to create an image. So do I need the stock photography company to do that for me anymore? Maybe not, maybe not.

But then, if I can get GitHub Copilot to do a website layout for me, do I not need a developer anymore to build that for me? Or can I build it with something like Squarespace? Potentially. So in some ways you might say: instead of taking away specialists' jobs, does this now open things up where more people can do more things, and it turns people into generalists, kind of polymath people, as opposed to requiring the person with deep knowledge about a very narrow area to do that thing? So I'm more of an optimist, I think. I think it's going to open up possibilities rather than closing them down. Yeah. That's interesting,

because at the beginning of my career, I was a specialist in clerking, and I loved it, because I was good at it and there weren't a lot of people who were. I studied it really hard, and it did really well for my career, because I knew something that no one else knew; I'd spent the time to learn it. But what you're talking about is more generalist, which means now I, as an individual, could start a company. I could take an idea and take it to full product, with full e-commerce, with social media, the whole thing, as an individual, instead of needing a full team to do it.

So it is going to shift me into doing something different. And it's not going to get rid of the specialist. I don't know. I think there's always going to be less need for the specialists; there just won't be as many specialists. Right. Or, you know, it's almost a specialization of the specialists, to a certain extent. I consider myself a specialist in a couple of different areas. Am I going to stop doing those things that I'm interested in because other people might be able to do them too? No. I'm still going to be coding by hand.

I'm still going to be taking photos; I'm still going to be doing those things. But does it now give me potentially another outlet for that specialization? Absolutely. And so I think it's going to move.

I think everybody moves in, you know, whatever direction you want to call it. I like "up the value chain." So we all move up the value chain.

And so you might have a photographer who's really good at taking photographs but couldn't make a website to save their life. Does this now allow them to make the website that they want, to demonstrate or display their artwork? Maybe it does. Does it make it so that somebody who has a musical inclination, who's really interested in music but can't play the piano for anything, can now create what's in their head? Absolutely.

Does it mean you're also going to get a lot of bad things? Absolutely. This is the flip side of all of that: it's easier for everybody to write emails, so are we going to be getting more emails that are junky or spam, where it's harder to look through and say, this was written by a human, or this was automated, this is real, or this is something I should pay attention to or not? So the ability to expand all of that has a really positive side.

But there's the negative side too, which is that we're going to have to filter through more stuff to get to the things that we actually want or need. So it sounds like more work for us, in some respects. Well, if you play it this way, and you're letting the machine do 80% of the work that you're currently doing, you're almost just shifting that work to a different type of thing.

And so I think it's an opportunity, an opportunity that not everybody is going to take advantage of, and that's okay. But I think it is going to be something that in ten years is going to be a ubiquitous commodity.

I don't think it's going to be as special as it is now, because it's going to be built into everything, and it's going to be just about everywhere. Yeah, just like Google, right? Just like Google is. And so the big question I have, and we'll end on this: what verb is going to be used? Instead of "Google it," is it going to be "GPT it," or "gen-AI it"? Only time will tell on that one, who gets the verb named after them. You know, there are some interesting studies that have been done about the gender of personal assistants. In certain fields, like legal fields, very authoritative things, the gender of these AI assistants tends to be male.

And in others, where it's more of a subservient sort of role, people have tended to make them female. I think what'll happen, putting my sci-fi hat on, is we're going to end up with a name. I think it's going to be more along the lines of there being a personality associated with a lot of it, as opposed to people thinking about the underlying technology of it. And so, like you said, time will tell. So what's your prediction for the name of the generative AI that we're all going to be using ten years from now?

If I could tell you that, I think I'd probably be a rich man. But I don't know. I think the person who invents that is probably in middle school right now, and maybe hasn't even been exposed to some of these technologies yet. And that's, I think, where we're going with a lot of this. Think about other emerging technologies, quantum computing, other things.

The people who are going to be doing that are currently, you know, 8 to 15 years old, because by the time they get out into the workforce, the technology will be mature enough. And so the question is really: what would a ten-year-old name it? They're going to give you a better answer than I can, because I know some ten-year-olds. I think, when you think about who's going to be leading this stuff in ten years, that's who's going to be the right age.

That's it. That's it. Hey, Jeffrey, as always, it's so much fun talking to you, and I can't wait till we talk again. Thanks for having me.

Thank you for listening to Embracing Digital Transformation today. If you enjoyed our podcast, give us five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation. Until next time, go out and do something wonderful.

2023-08-26 00:23
