Assessing The Risks Of Emerging Technologies | Forrester Podcast

- Hi, I'm Jennifer Isabella, your host for Forrester's podcast "What It Means," where we explore the latest market dynamics impacting executives and their customers. In today's episode, we're going to hear a rather unique session from our Security & Risk event that took place last fall. In this session, principal analysts Renee Murphy and Sarah Watson act out a scenario where the CIO of a company has to go to the chief risk officer for advice on implementing an emerging technology. It's a great way for executives to think through their own conversations about technology risk in the future. Let's take a listen.

- Hi everyone. I'm Renee Murphy, principal analyst at Forrester Research. I cover governance, risk, and compliance, which makes me the fun one.

Actually, I'm here with Sarah, who's equally fun, I've come to learn. Sarah, tell us who you are. - Hi everyone.

My name is Sarah Watson, and I am a brand-new principal analyst on the tech exec team. I'm really focused on emerging tech and on questions about where tech executives should be focusing their energy to become future fit. So I'm excited to be here and excited to talk about emerging tech with you today.

- So what Sarah and I decided to do today was have a conversation about what it's like for a CIO to come talk to a chief risk officer about an emerging technology they wanna implement, and one of the things we're gonna do is sit down and talk about what that means to all of us and how we're gonna manage it going forward. So without further ado, we'll go live in our scenario. Sarah, welcome to my office, friendly CIO.

What can I do for you today? - Thanks, Renee, it's good to see you. I come to you with an exciting opportunity. We've been getting a little bit of pressure from the business to think about modernizing our chatbot experience. It's pretty rudimentary right now.

You know, we've already invested a lot in getting some really basic questions and answers into our chatbot, but we're running into issues with that being a very limiting customer experience. We're also trying to answer increasingly complex questions for our clients and customers, and there's still pressure on the call center side of things to reduce costs. So I've been reading all about GPT-3, NLP, and all kinds of AI-enabled chatbots, and it feels like that's maybe the next place we can start to modernize and differentiate our experience rather than just doing the basic out-of-the-box stuff. So that seems like a good opportunity.

However, I've also been reading the headlines about the concerns with bias in AI. I remember Microsoft's Tay, so I'm trying to figure out how to avoid that outcome. I don't wanna be in the headlines.

So I'm coming to you in research mode. What should we be thinking about? What should we be doing to make sure these kinds of emerging AI applications don't get us into trouble and actually do what we want them to do? Can you help me? - Yes, and first, thank you for coming to me. The first thing we're gonna do is figure out where the risk comes from, because there are two places: internal, from inside the organization, or external, from outside it. Outside the organization tends to be stuff like changing regulation.

That's the stuff I can't help with; I can't move us around it. But the stuff internal to the organization, we can do a lot with. If it's internal and accidental, it may be that we chose the wrong tools or that our own risk assessments didn't lead us where we needed to go, so we wanna make sure we do those appropriately. So I'm gonna start with the impact on our customers. If we get this stuff wrong, let's say we accidentally configure the bot to insult customers.

Is that a legitimate risk? Is that something that could really happen, and if so, how? - Yeah, certainly, and I think that was the concern with Tay, right? A system that was learning from user inputs started putting out hateful propaganda. - It did, and it didn't take very long. - Right, so we have to think about our inputs and how we train the AI. We've been thinking about whether we have transcripts from existing call centers that we could use as our training data. How do we contain what goes into this as an input? So much of what's exciting about GPT-3 is that it's a huge corpus of training data, but how is that blowing out the potential outputs? How do we make sure we know what's gonna be put in front of customers on the other end? - Right, that's always been the thing for me. Can we curate that data? Because if we can curate that data, we take away the risk that the bot will be insulting or can be driven to insult people, thereby giving us a lot of breathing room over in the call center. If getting breathing room in the call center is what we're after, you can see how that could pull it off. So we're gonna do a risk assessment on that risk in particular. I'll do the deep dive later, but it sounds like a legitimate one, and it is.

It does pose a big problem for our reputation, but if we're able to curate that data and find a way to have a conversation that's always polite, meaningful, and helpful and never deviates from that, then that's what we're after. We don't wanna be in the same situation as Amazon, whose AI recruiting tool was biased against women because it was trained entirely on the company's own historical hiring data. Once somebody else has made that mistake and we make it again, we're just boneheads, right? Can't you learn from someone else's mistake? Yes, actually, I did. So that one I definitely wanna take a look at. The other thing I wanna say is that clearly the business is coming at this from a need to update; there's an opportunity to get better here. Do we know where the old process was failing? Do we have any research on that? That's information we might wanna get back, so we can put the effort toward the business problem and not toward updating the chatbot for the sake of a chatbot update. That's never given anyone a good reason to spend money. But if we say we're gonna make this transaction faster, make it more secure, not less, and do it in a way that offloads work from the CX folks and the call center, then we did what we meant to do, all in a way where we're not insulting our customers and it doesn't get out of hand. The one I usually think of, the one I talk about in my presentations, is The Wall Street Journal, where you couldn't unsubscribe.

You literally couldn't. As you hit unsubscribe, the chatbot would say, pick your favorite news topics. No, I unsubscribed.

Pick your favorite news topics. No, I unsubscribed. I don't want our customers finding themselves in that situation. So, misconfiguration. - Yeah. - And what happens if frustrated customers start to hate it too? - Yeah, I think that's a great point, and talking to the business, I don't think we've done the kind of quantifiable study to say where in the decision tree customers are getting frustrated, or at what point they're calling the call center because they can't do what they're trying to do in the chatbot. So I think we should definitely go back to the business and ask for an assessment so that we actually have some benchmarks to understand where we've been able to improve those experiences. So I love that idea.
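A minimal sketch of the curation idea in this exchange: screening call-center transcripts before they become chatbot training data. The insult lexicon, PII patterns, and function names here are illustrative assumptions, not a production-grade filter.

```python
import re

# Hypothetical pre-training screen for call-center transcripts. The
# blocked-term list and PII patterns below are placeholders only.
BLOCKED_TERMS = {"idiot", "stupid", "shut up"}        # assumed insult lexicon
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g., US SSNs
ACCOUNT_PATTERN = re.compile(r"\b\d{10,16}\b")        # account-like numbers

def is_trainable(utterance: str) -> bool:
    """Keep an utterance only if it contains no insults and no obvious PII."""
    lowered = utterance.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    if SSN_PATTERN.search(utterance) or ACCOUNT_PATTERN.search(utterance):
        return False
    return True

def curate(transcripts: list[str]) -> list[str]:
    """Filter raw transcripts down to a curated training set."""
    kept = [u for u in transcripts if is_trainable(u)]
    print(f"Kept {len(kept)} of {len(transcripts)} utterances")
    return kept
```

In practice a real deployment would pair a screen like this with human review of a sample, since a keyword filter alone cannot catch every way a model can be steered into insulting output.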

- Okay, perfect. The other thing I keep thinking about is how we make sure we're taking full advantage of the data across the enterprise, but doing it in a way that doesn't lead us to misusing customer data. If we're gonna push out of the call center stuff like "What is my balance?" or "What's my next loan payment?" or "Can you show me my last transactions from this date to this date?", I'm all for it, but that's a lot of data that would be part of the customer experience that was never part of the chatbot before. I wanna make sure that, number one, we're leveraging the data we have across the enterprise, because we seriously should be, but two, that we're doing it in a way that is incredibly secure. And if that means interrupting the customer experience with multifactor authentication for the chatbot, then that's something we have to seriously consider. Have you guys thought about that? - Yeah, and that raises a question for me: how much do we need to be encrypting this if it's gonna be in the mobile experience, via the app? Those are great questions, and how does that actually impact the flow of the user experience?
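A sketch of the multifactor interruption Renee describes: account-data intents are gated behind a verified session, so the conversation pauses for a challenge before anything sensitive crosses the chat channel. The intent names and Session shape are hypothetical, not any particular chatbot framework.

```python
from dataclasses import dataclass

# Intents that expose account data; names are assumptions for illustration.
SENSITIVE_INTENTS = {"get_balance", "next_loan_payment", "list_transactions"}

@dataclass
class Session:
    user_id: str
    mfa_verified: bool = False  # flips to True after a successful MFA challenge

def handle_intent(session: Session, intent: str) -> str:
    """Route a chatbot intent, interrupting with MFA before account data flows."""
    if intent in SENSITIVE_INTENTS and not session.mfa_verified:
        return "Before I share account details, please enter the code we just sent you."
    if intent == "get_balance":
        return f"Your balance is {lookup_balance(session.user_id)}."
    return "How else can I help?"

def lookup_balance(user_id: str) -> str:
    return "$1,234.56"  # placeholder for a secure, encrypted back-end call
```

The design trade-off is exactly the one raised here: the challenge interrupts the flow of the user experience, but only on the intents where the downside of misusing customer data justifies it.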

- So in order to mitigate the stuff we've come across, do you think you'll have to make a substantial investment in the future? - I think so, and there's one last question for me: whether we're gonna build this stuff ourselves or partner with one of these plug-and-play providers, whether it's GPT-3 or some Google product or some upstart that's building a chatbot back end. I think we're at the point where we can make that decision, but we are a bank. We're pretty regulated, so do we need to think about doing this in-house or not? - Yeah, here's what I would say to that. We'll mitigate the risk whichever route you decide to go, and it shouldn't be risk management that forces you down one route or the other. Let it be the technology that speaks to you, and the future growth: will we have to keep investing going forward? Does chatbot technology advance at a rate that would make it really difficult for us to keep up? I'm gonna argue yes. Go look at any brand-new technology: where do we actually figure out how to make it viable? With chatbots, it's in that specialist space, the AI chatbots, machine learning chatbots, natural language processing chatbots. That's where the refinement happens; that's where that kind of work gets done.

So if we, as an organization, would like to take advantage of that kind of forward thinking, then yeah, we might wanna go with the third party who built the back end, because it's their job to take advantage of all that. If instead we're gonna slow down and say, "Every five years we'll go back and address the chatbot, but we're not gonna see it as a differentiator for our mobile experience or anything else; we're just trying to offload from one group to a bot,"

well, I get that, too. So that's what we'd have to make up our minds about. Don't let the risk drive the decision; decide on the technology first, and once you decide, then we'll go after who owns the risk. Because if we're bringing them in as a third party, the place we do the due diligence is in procurement, and procurement knows what to do with this kind of risk management stuff. We should be able to get a platform that meets your security requirements before it ever gets in here, so from that perspective, you just worry about that. I then have to figure out what this means from an intellectual property perspective: if we're gonna use data like that on the back end to inform data on our front end and thereby create new data, that new data should be ours, but who knows? So I probably have to talk to legal and others before I have a better understanding of that, but again, don't let what I'm doing interfere with your decision in any way, shape, or form.

You figure out the technology perspective and what's best for the bank, and I'll figure out what that means for the security piece of it, and when it all comes together, we'll have that talk. So yeah, I'd say go for it; figure out which one of those this is. And we have all the same concerns no matter who owns the back end, us or them. We still don't want it misaligned to the strategy.

We still don't want it insulting people. We still don't want it creating problems for CX that we didn't have before just because it doesn't work right. That's the kind of stuff that, whether it's in-house or somewhere else, we still have to deal with. So we'll set that aside and worry about it later, but no matter how we do this, we're gonna have to deal with it. So when I think about what our next steps are gonna be, here's what I want you to do.

I'm gonna send you a spreadsheet, because I'm gonna really think about this. I think I can come up with about 75 different risks we'll be looking at, and then you and I are gonna sit down, go through them, and see which ones are really a threat. I'll have it sitting right in front of me: how each one impacts us and everything like that.

Once we go through that list, you're gonna let me know which ones are actual threats and which ones are just my ridiculous criminal mind, because I swear, you do this long enough and you start to think you're a criminal genius. So tell me what's really real here. From there, we'll do big assessments, and what I mean by big assessments is: I'm gonna sit down and figure out what the downside of each risk is in a dollar amount, and I want you to figure out how much mitigating it will cost in a dollar amount, whether that's resources or software you need to buy. Then we're gonna come together, and let's hope upon hope that my number is like seven to nine times bigger than yours, because if it is, then we have a really good case for buying this software and getting rid of this risk: the downside is just too unpalatable for us to accept. We shouldn't be $15 million in debt because of a chatbot implementation.

That should not happen, right? So we're gonna figure that out. You're gonna come back to me and say, "It's gonna be a $400,000 investment to get this thing up and running and keep it running." Great, perfect, because if we did this wrong, didn't pay attention to any of this risk, and didn't try to mitigate any of this stuff, what we're really talking about is: we build a new chatbot, we're looking at $17 million of downside risk; we leave the old one the way it is, we're eventually looking at $10 million to $12 million of downside risk. So we really don't have any choice.

We're gonna have to change it. So do you wanna invest the $400,000 to get rid of the $17 million? I bet you would. That's the case you and I are gonna make, and we're making it purely on risk and making it a really easy financial decision for people. Pay me now or pay me later, and when you pay me later, it's gonna be millions, not hundreds of thousands. - That's really gonna help me build the case and articulate the business value of this initiative. We understand the pain point right now, but we need to put it into calculable, trackable metrics. So let's do it.
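The "pay me now or pay me later" arithmetic in this exchange, worked through with the figures quoted above; the seven-to-nine-times threshold is the rule of thumb Renee mentions, applied here as a simple ratio test.

```python
# Figures quoted in the conversation above.
mitigation_cost = 400_000        # invest to build and run it safely
downside_new_bot = 17_000_000    # new chatbot with risks unmitigated
downside_old_bot = 12_000_000    # keep the old bot (upper end of $10M-$12M)

ratio = downside_new_bot / mitigation_cost
print(f"Downside-to-cost ratio: {ratio:.1f}x")  # 42.5x, far past the 7-9x bar

if ratio >= 7:
    print("Easy case: spend $400K to retire up to $17M of downside.")
```

Since even the do-nothing option carries $10 million to $12 million of exposure, the ratio clears the bar either way, which is why the decision reads as "pay me now or pay me later."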

- Right, and when we're done, we're gonna come back together one more time. Your chatbot's deployed. Everybody's happy.

Our risk is mitigated, and I'm gonna have you come back one more time so I can show you what your dashboard looks like, because you will now be the proud owner of the chatbot risk dashboard. It's gonna tell you how the controls are doing, whether they're failing, and which controls are failing. That's the kind of stuff you'll be able to see, so that nobody comes to you next time saying, "Everything's falling apart and we need a forklift upgrade." This is the stuff that lets you be proactive and lean into the business and say, "I've noticed your chatbot's doing some interesting stuff, and customers are getting stuck here a lot, and that's turning into a high risk.

Do you guys wanna take a look at that now?" That's a conversation we can't have now, because it's not how we run anything, but we can definitely have it in the future. We'll talk about what metrics I can put on your dashboard to show you how it's going, so you can stay ahead of it and not behind the eight ball. That's how we'll do it once all of that stuff's put together: you'll be able to track it, all the way through the program.
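A minimal sketch of that control-status dashboard idea: each control reports pass or fail, and failures roll up into the proactive alert Renee describes. The control names, owners, and thresholds here are all assumptions for illustration.

```python
# Each control on the chatbot risk dashboard reports pass/fail; all
# names below are assumed, chosen to mirror the risks discussed above.
controls = {
    "training_data_curated":    {"passing": True,  "owner": "data team"},
    "mfa_on_sensitive_intents": {"passing": True,  "owner": "security"},
    "unsubscribe_flow_works":   {"passing": False, "owner": "CX"},
    "toxicity_monitor_active":  {"passing": True,  "owner": "ML ops"},
}

failing = [name for name, c in controls.items() if not c["passing"]]
print(f"{len(failing)} of {len(controls)} controls failing")

for name in failing:
    owner = controls[name]["owner"]
    print(f"ALERT: '{name}' is failing; flag it to {owner} before it becomes a headline")
```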

The other thing I'll tell you is that as we're deploying this, even if we decide to give it to a third party, we need to talk about risk at the software level. As we go through the configurations and the development, we're starting with a certain amount of risk, and that risk dollar might be $400,000, but maybe there are things we can do in our software development life cycle management process, even if it's agile, that'll let us take a $400,000 number and spit it out as a $312,000 or maybe a $212,000 argument, once we figure out what the team can do to help us secure that stuff before it goes live. So there's a lot to talk about there. We won't be done once we're live; we're gonna keep talking about it. But this is pretty exciting, and what a great opportunity, right? We should all look at it like that, and I'm not here to get in the way at all. I hope we can tell a compelling story, and I think we can.

I think we can show that the downside of not doing this is so great that it doesn't make any sense not to do it and not to invest in the security the way we need to, because we know there are about 75 risks that come out of this that we're gonna have to mitigate. - Well, thank you, Renee. This has been incredibly enlightening, and I think you're really helping me think about how to explore this opportunity from the research side, how to communicate it, and how to actually build the case for it. But also, talking about the dashboard, you're really setting me up to think about how I can prove the emerging tech's potential and keep track of it. All of these AI implementations are gonna require hands-on monitoring and feedback loops, so I think talking about what we can do as a partnership in that prove phase of emerging tech is gonna be hugely important for us. So thank you so much, and I'm really looking forward to continuing to have lots more conversations with you.

Chatbots today, but let's talk about, I don't know, 5G tomorrow? - Blockchain on the mainframe tomorrow. - Oh, God, yes. - I'm waiting for the blockchain conversation about the mainframe, because it's gonna be brutal. Yes, and thanks for coming to me.

Again, we're after the same thing in the end, right? And if I can help you get there from here by telling the value story like that, I'm all for it. So, okay, good luck, Godspeed. I will see you again; we'll set up regular calls so we can get through all of this and I can help you make the case and keep going. And when we're done, I'll have all the risks captured, which is all I really wanted in the end: for the risk registry to be right.

So yay, we all win. - Thanks to Renee and Sarah for walking through that scenario in such an engaging way. If you wanna learn about this year's Security & Risk event coming up in the fall, visit forr.com/sr22. That's F-O-R-R.com/sr22. If you like what you heard today, subscribe to Forrester's "What It Means" podcast on Apple Podcasts, Google Podcasts, or your favorite podcast player.

To continue the conversation, follow Forrester on Twitter, Instagram, and LinkedIn, or drop us a note at podcast@forrester.com. Thanks for listening.
