Security as Part of Responsible AI: At Home or At Odds?

>> ANNOUNCER: Please welcome panel moderator Ram Shankar Siva Kumar. >> RAM SHANKAR SIVA KUMAR: Hey. Hey. Hello, everyone, and welcome to the keynote panel. I'm really excited for this because we have been planning this for a little while.

And one of the interesting things is, to give you a taste of how fast things are moving in the AI space, when we were all planning this panel, ChatGPT did not exist. GPT-4 did not exist. And if you said like, oh, I'm interested in LLMs, people would have thought you're talking about a law degree or something. So, everything has changed since then and with AI taking center stage, I want to set the table really fast before we jump into the conversation. And we have three experts with us from three very different vantage points. Firstly, we have DeepMind, which is really at the forefront of AI research.

And to talk about how all of this has to do with safety, security, and privacy, we have Vijay Bolina right over there. He is a CISO for Google DeepMind. Very interesting news from yesterday, so we're going to ask him all about that as well. And then, at a time when everybody is talking about responsibility, if you go to Twitter and you open responsible AI, you've got like experts talking about it and thinking about it. But Rumman Chowdhury is the OG responsible AI person.

She has been talking and thinking about this longer than anybody else. Dr. Rumman Chowdhury is a responsible AI fellow at the Berkman Klein Center at Harvard, and she holds affiliations at the University of Cambridge and NYU, and she's a perfect nexus of responsible AI and security. We're going to talk to her about the industry-first bug bounty, actually, focused on AI. And then we have with us, bringing a very different point of view, Daniel Rohrer, the VP of Security at NVIDIA. You know, NVIDIA is really powering the AI revolution.

Forget Lamborghini. You really want A100s if you are in the AI space. And really, Daniel has the amazing responsibility for coordinating security activities across a full range of NVIDIA offerings, from the cloud to embedded systems, of course now to AI. So, with that framing and that kind of lay of the land, I want to give you a quick note on logistics. So, the panel has really three sections.

At the end of each section, we will take audience questions. And there are three microphones if you see in the middle of the room and I will kind of like cue you all up so you know if you have any questions, I can kind of line them up. And you can also tweet your questions @ram_ssk, and I will also get them over here and we'll keep a running tally of that as well. So, with that, I'm going to like stop talking, hopefully for the rest of the panel.

And I want to start with our first part, which is of course going to be about ChatGPT and the whole large language model space. So, Vijay, I want to start with you. You're the CISO for arguably one of the most influential AI labs. So, what are your guiding principles, Vijay? Like, how do you think about releasing these models responsibly? And what has really changed, you know, since large language models have come out? >> VIJAY BOLINA: Yeah, sure. Thanks, Ram - sorry, Ram.

So, Google and DeepMind have had AI principles and operating principles to govern the way that we think about responsibly researching and developing breakthroughs in artificial intelligence. And we have a set of principles that we've defined and we've published that highlight some commitments that we have of things that we will do and things that we won't do. And at a high level, some of these things are ensuring that the research and development that we do provides societal benefit. As an example, they align with scientific excellence and integrity. They are safe and ethical, with dimensions of security and privacy included in that.

We are thinking about ensuring that these emerging technologies are aligned to people. We are thinking deeply about the timing and the releasing of the information, and particularly around publishing norms. And then also, aspects of diversity, equity and inclusion as well, and broad aspects of sociotechnical impacts that some of these technologies could have. We also highlight in these principles, some of the things that we won't do. And these are areas that can cause downstream impact and harm, with examples like technology that could be applied for and/or from a weapons development standpoint or a surveillance standpoint, technology that can be used to violate international laws and/or human rights, as an example.

Now, so what has changed? So, it's been really exciting the last few months especially to see the adoption of technologies like large language models. These are a class of models that both Google and DeepMind have been working with for quite some time. So, it has been really exciting to see the adoption in industry and the exciting applications that people have been using these technologies for. But it's also highlighted, I think, a lot about some of the limitations that these technologies have too. And we've seen things like distributional bias, which I'm sure Rumman will touch on a little bit.

We have also seen things like hallucinations as well. So, it highlights the limitations of some of these technologies. So it's been interesting to see how industry has adapted to those, had a dialogue and discussion and a conversation around these types of limitations as well. >> RAM SHANKAR SIVA KUMAR: Yeah, that's kind of interesting because you know, obviously as a CISO, I would think like, oh, you know, you are going to talk about like remote code execution. I love how you led with like bias and hallucinations. And just to create some like artificial tension here, Rumman, I'm going to kind of like, you know, ask you, because you have been thinking about this.

You have heard this, like, story multiple times. How do you see the space? And just to decouple, you know, I heard AI safety, securing AI systems, responsible AI. What is all of this? >> RUMMAN CHOWDHURY: And let's not forget alignment. >> RAM SHANKAR SIVA KUMAR: Alignment, yes. Align on alignment.

>> RUMMAN CHOWDHURY: Aligning on alignment. I actually don't even know what the appropriate term to use. Don't forget trustworthy. >> RAM SHANKAR SIVA KUMAR: Trustworthy, got to have it.

>> RUMMAN CHOWDHURY: I genuinely don't know what word to use anymore. And it's interesting because I think many different communities are coming to the same problems with their own unique perspectives. And that, I actually think, is a good thing, right.

So, you know, in this panel, we are here to kind of talk about this merging of ethics and security, which are similar but not the same. We're kind of like cousins. There is one very specific distinction that I often - that I actually use as my starting point. So, just as a bit of a preface, some people in the audience know me from my time at Twitter, leading the Machine Learning Ethics, Transparency, and Accountability Team.

As part of that team, one of the last things I was doing was starting to build out an AI red team, which is different from an infosec red team. And here's - they overlap, so here is the branch. Most of cybersecurity thinks about malicious actors, right? So, you are thinking about bad people trying to do bad things. A lot of responsible AI is actually around unintended consequences.

Well-meaning people who accidentally implement bad things. And why that matters is your approach to addressing it changes. So, in one case you are maybe looking for people trying to create bots or spread misinformation intentionally. Misinformation is actually a really good one to talk about. There are the nodes of misinformation, for example, people intentionally making bad things.

And then there are the people who spread misinformation unintentionally because they believed it, they were duped into it. And both of them need to be resolved. People can make deep fakes that are malicious but if nobody shares them, it actually doesn't have a big impact. But if you just address that problem, you are not addressing what we saw, for example, in the Christchurch shooting where they were taking down videos but new videos kept popping up and people just kept sharing them.

And they are very different approaches. But again, they are cousins. So, to solve the problem of, for example, misinformation, you have to address both problems. >> RAM SHANKAR SIVA KUMAR: Yeah, and that's kind of interesting because when I look at the, at least the landscape, and Daniel, you should correct me, there is like a proliferation of these models that are coming out. NVIDIA just came out with NeMo and Picasso.

Everybody here is thinking about like, ChatGPT. I want to bring it into my organization. But there's all of these problems that Rumman just laid out. You know, you're the VP of Security of NVIDIA.

I am sure a lot of, you know, people in NVIDIA want to use these models. >> DANIEL ROHRER: For sure. >> RAM SHANKAR SIVA KUMAR: How do you think about this? >> DANIEL ROHRER: Well, I suppose where there are problems, there are also opportunities. I think for us certainly looking at it, I think we clearly see there is going to be a broad proliferation, which I think is great. It actually has huge capabilities for sort of partnering with humans.

But sort of in the democratization of AI, it brings it a lot closer to individuals, you know. When we think about autonomous AI vehicles, right, there is sort of a narrow scope. But as these move closer and closer to everyone, we need to be a lot more sensitive to these unintended consequences of these biases and things and building, I think, not just the models but the tools to manage and control for that.

I think there is sometimes an overemphasis on the model itself and only what the model can do, as opposed to the systems they're embedded in, right? Because there's a lot of opportunity for processing in the front and back of those models and building systems such that, even if the model may have some challenges, especially in sort of the early release of research models which aren't quite products, right, you can buffer some of those consequences with just really good engineering. And I think that's consistent with any really complex system, taking those steps and being deliberate and asking the questions in a design phase. >> RUMMAN CHOWDHURY: Right. I love what you're talking about with the system based approach.

Here is where ethics and technology really start to come together because so much of an ethics approach is understanding the context in which something is used, who is using it, what is it being used for, and even analyzing output based on, again, social/cultural context, versus saying, oh, we are just going to flag malicious material. Any social media company is a really good example of the challenges of having a pure sociotechnical system and trying to address these problems, thinking both of how to scale from an engineering perspective, because you have to address these problems at scale, but also trying to marry that with this kind of context specific aspect of appropriate use. And I feel like that's going to kind of forever be a tension, and it's fine that it is. This is a journey. >> RAM SHANKAR SIVA KUMAR: Just to kind of like pull Vijay into the conversation, so, Vijay, I'm sure, like, Google DeepMind employees are also like looking at these conversations. LLMs are part of day-to-day use.

I liked what Daniel touched on with tools and processes. What kind of like guidance would you give to an aspiring CISO over here? Like when their org is kind of like bringing this new technology in? >> VIJAY BOLINA: I think for internal usage of these technologies, there is a wide array of concerns that we have, whether it is, you know, like PII or user data that may be sensitive to certain components within the organization. There are certain departments within an organization and there's, you know, a need to know when it comes to access to that information. And developing these systems, employing access control and what these systems may be able to disclose to you from an individual usage standpoint, is a hard problem. And a lot of folks are thinking deeply about what does access control look like when it comes to a large language model, as an example. So internally, I think these systems can be quite useful.

And it's important to kind of understand what those limitations are too when it comes to the data access aspect of it. I think a lot of folks worry not only from third party usage of these types of models but also internal use of these models; Where does the line get drawn when it comes to access of that data? >> RAM SHANKAR SIVA KUMAR: Yeah, it is also very interesting because like, obviously, there are certain companies that have like completely banned these sort of technologies. There could be even schools and governments. So, as companies are kind of like thinking and investing in this space, I like how we can like point out access controls.

Rumman, like, with all these problems that are still like proliferating in this like - especially in this conversational LLM space, what kind of tactical guidance would you give consumers who are kind of using this technology? >> RUMMAN CHOWDHURY: I think a lot of these technologies are really putting to the forefront the need for critical thinking and analysis of the output you are seeing. We have all kind of been aware of this. Don't believe everything that you read on the Internet. But now it's, you know, you actually have to be more critical.

The thing I worry about a lot are hallucinations. Very realistic sounding information that's completely false. In some cases, it is shown to be actually malicious.

Will Oremus of the Washington Post had a really great article about large language models hallucinating sexual harassment allegations against a law professor. So, like, kind of malicious things that if somebody were searching something, and let's say, for example, it gets incorporated into a search engine, then this may actually be the search result that happens. And something similar but a little bit more comical happened with me when my friend Jaho was asking ChatGPT - and this is an older version, this is like the original ChatGPT - about me. It said I was like a social media influencer and I had a lot of shoes and I had a $5 million net worth, to which my answer was, if I had a $5 million net worth, you would literally not see me again. >> RAM SHANKAR SIVA KUMAR: Yeah. Give me a million more please, yeah.

>> RUMMAN CHOWDHURY: Like, no, I would just be gone. I wouldn't be sitting on this stage. Like, I love you all but I have an island to buy. But it was funny but also in a sense, like speaking of harms, it was a little malicious in the sense that because I'm a woman, it was hyper fixated on things like my appearance. It kept talking about my appearance.

And it positioned me as a social media influencer, like an Instagram influencer. And I have a pretty significant online presence; literally none of it is about my appearance. So, where did that come from? And these are the kinds of things that are actually hard to figure out, what it's hallucinating and why it's going to hallucinate.

And actually, from the space I think all of us are in, we try to be proactive about identifying harms. I think the thing that we should be thinking about and I don't know how to address it is, okay, we know hallucinations are real, et cetera. We are still in the space of identifying a harm and then retroactively fixing it. What does it mean to move into proactive harms identification? I literally have no idea why an old version of ChatGPT thought I was an online influencer.

>> RAM SHANKAR SIVA KUMAR: It is also very interesting because it cannot like go back and try to - and we are going to touch upon the transparency aspects of this. And I want to ask Daniel the last question. And if folks have questions, they can start like, you know, we've got to go to the next section. If you have questions, you can go to the mics or you can also tweet at us and it's going to come over here. So, I'll ask Daniel; one of the interesting things that - many things that Rumman said is the word malicious and harm.

And for most of the folks in this audience, malicious means attacker in a hoodie, you know, and harms means like data exfiltration. As you are hearing these sort of things that are being spoken about, a very different sort of malicious, different sort of harms, how do you as a security person wrap your head around it? >> DANIEL ROHRER: Great question. You know, I think, first, meeting folks like you all - we have been talking for years. If you had, you know, told me ten years ago that I would be spending a lot of time with legal and ethics folks, I would have looked at you a little funny. But I really do spend a lot of my time doing that.

And I think what I have learned in those conversations is a lot of the things that we do and certainly the way we think in security actually apply very well and very directly. You know, the exploratory, the anticipatory, looking at risk versus harms and impact analysis. A lot of the patterns we look at in autonomous vehicles, in terms of getting to ISO 26262, the way you break those problems down, are exactly the same things you want to do here. I think, you know, getting to your comment, I think aligning on alignment, there is actually a lot of art and social sciences and safety and others that can all be brought to bear if we can get to a common codex. I think a lot of what I do at NVIDIA is I'm like sort of the code switcher of like okay, legal person said this.

Engineering, this is what they mean. Security guy said this. Legal, this is what this means. And really just helping to bridge those divides to bring sort of the whole team together to really go at these problems because they're very complex.

>> RAM SHANKAR SIVA KUMAR: Well, I want to, like, go over to some audience questions that's coming up here. First off, Rumman, I think this is very interesting for you. Should we be worried about AGI taking over the world? Like, is a terminator scenario actually real? >> RUMMAN CHOWDHURY: Oh no. >> RAM SHANKAR SIVA KUMAR: Yes. >> RUMMAN CHOWDHURY: My least favorite question.

This is also a bit of inside joke. >> RAM SHANKAR SIVA KUMAR: By the way, a senator did like tweet about this, right? So. >> RUMMAN CHOWDHURY: Of course.

Of course a senator tweeted about it. >> RAM SHANKAR SIVA KUMAR: So, this is a real question. >> RUMMAN CHOWDHURY: I am - okay. So, I will preface this by saying when Ram was joking about how I'm one of the like OG people in algorithmic ethics, actually it's true. I'm one of the first people in the world working on applied algorithmic ethics and moving it from research into practice.

And I used to start my talks six years ago, seven years ago at this point, I used to start it with a slide that said there are three things I don't talk about. Terminator, HAL, and Silicon Valley entrepreneurs saving the world. And it was a picture of Elon Musk in an Iron Man suit which is as if I like predicted my own future. So, it's funny that we're back here and I'm like, is Terminator going to be sitting on a panel, having déjà vu of six years ago? So, the short answer is no. Because the longer answer is there are many, many things to worry about before we have to worry about AGI.

And like let's just - >> RAM SHANKAR SIVA KUMAR: Can you give me an example of a thing that we need to worry about before AGI? >> RUMMAN CHOWDHURY: Oh goodness. Well, you know, I thought the OpenAI paper on economic harms was really good. And it is a call for something to be done about the potential for joblessness in particular fields.

And they very clearly lay out these are the industries that should be concerned. But great, who is going to pick that up? Somebody needs to pick that up and do something about it. Hallucinations are a great example. The spread of mass misinformation. At Twitter, we worried a lot about election misinformation. Great, now take that and amp it up an insane amount, and add to that the fact that in the upcoming U.S. presidential election, Donald Trump is back in the game. So, now we have a politically contentious situation in a world in which it is very, very simple to make and spread misinformation at scale. Before we worry about AGI, there are so, so many things that I think we should be concerned about.

But actually, to plug something I am working on, I'm curating a journal issue for the Journal of Social Computing on the concept of sentience, because I do find the idea very fascinating. The thing I find really fascinating is that everybody talks about it but nobody knows what it is. So, what? Like we're just going to like know when it's sentient or not? And interestingly, in other fields, like - >> RAM SHANKAR SIVA KUMAR: Can I interrupt? You said ascension. >> RUMMAN CHOWDHURY: No, sentience.

>> RAM SHANKAR SIVA KUMAR: Sentience. Okay, gotcha. Okay, sorry.

>> RUMMAN CHOWDHURY: And in fields like animal intelligence, there are ways in which scientists measure the cognition of animals to determine if they are first alive versus conscious versus sentient, et cetera. And I think it would be very fascinating to draw from those fields and other individuals who have been thinking about insects, mycelial networks, animals, and even exobiologists who think about the potential of finding life in outer space, as we try to make this roadmap of what sentience means. I mean, it's, for me, a purely intellectual exercise. But I also do think, kind of related to what Daniel was saying earlier, we need these milestones to meet if the goal is, let's say, making AGI. I don't think that's actually a smart goal, but that's just my personal opinion.

>> RAM SHANKAR SIVA KUMAR: So, we are not going to have Terminators take over anytime soon. >> RUMMAN CHOWDHURY: No, that's the short answer. The long answer is it's intellectually interesting to think about it. >> RAM SHANKAR SIVA KUMAR: Totally. And I like the job loss aspect that she kind of like pointed out.

I just want to like ask the audience, like raise your hands if you think generative AI is going to take your job as a security responder in the next five years? Let's put a time box. Wow. Nobody in this room. >> VIJAY BOLINA: No one. Wow. >> RAM SHANKAR SIVA KUMAR: Nobody in this room thinks about that. Vijay, sorry. >> VIJAY BOLINA: As a former incident responder, I'm closely aligned to that.

>> RAM SHANKAR SIVA KUMAR: You're closely aligned. >> VIJAY BOLINA: I think if anything, it will make our jobs a little bit more efficient. And a little bit more creative. I think we'll be able to do things much faster. And yeah, I think - >> RAM SHANKAR SIVA KUMAR: That tracks. Because you were at Mandiant before this.

So, I don't know how many cups of coffee we are going to feed - sorry, go ahead. >> DANIEL ROHRER: No, there are thousands of notables a day. Our responders cannot go through all of them.

Please bring me the AI partner that can go through that for me. Yeah, absolutely. >> RAM SHANKAR SIVA KUMAR: Okay, there is also one last spicy question and then I think we can kind of like move on.

And so, are LLMs all powerful? Are they just stochastic parrots? Who wants to take that one? Daniel, why don't you kick us off? Like, do you see this as an all-powerful foundation models, iPhone moment, or like, oh no, you know, just a word repeater? >> DANIEL ROHRER: I think certainly the very, very large models are interesting because of sort of the emergent properties, and I think we have seen, as the progression of chat models has gone, sort of new capabilities empirically emerging from these things. But I think for the vast majority of use cases, you actually don't need, or even want, these huge models, because they are very costly to run. Right? You can have a much more narrowly constrained, specific model that I think will perform very well for a lot of use cases and is actually more appropriate. Because there is this dynamic, and we have talked about hallucinations up here, of like, hey, if I ask the model the same thing three days apart, do I get the same answer? Like, if I've got an AI copilot helping me in the SOC, I really want it to be consistent in its evaluation of how it is behaving in the system. Likewise, accuracy, lack of toxicity, these sort of other things where you have to really curate it down with RLHF and other sort of methods to get you there. So, while yes, I think that's very interesting, I think there is going to be this big shift to sort of more task-specific models.

>> RAM SHANKAR SIVA KUMAR: Yeah, and clearly like the audience kind of like - nobody even raised their hands thinking that parts of their job is going to be - nobody here seems to be worried about it. So, clearly, there is some part of it being like - >> DANIEL ROHRER: I think it is a great workforce amplifier for a lot of disciplines, right? Just finding the right way and, you know, being conscious as we design these systems to be additive and to minimize the harms that go with it, right, because that's just system design. >> RAM SHANKAR SIVA KUMAR: We're going to move to the next part. I want to start with Vijay, and probably like, actually all of you, but I know that all of you, you know, have AI red teams.

Combining two of the hottest buzz words, security and machine learning. I'm part of one. I see like our talented team member from my team sitting in the audience here as well. But really, how do you go about constructing an AI red team? Is an AI red team even needed for every organization? Vijay, start us off. >> VIJAY BOLINA: Yeah, I think if you are working in the space of large foundational models or general purpose AI systems that will be able to do a multitude of things and have downstream access to other types of applications or integrations, I think it is extremely important to stress test those.

One way to go about doing that is using a red team mindset. So, DeepMind has an AI red team and we collaborate quite closely with the Google machine learning red team. And we focus on the outputs that come out of DeepMind. And it is an important way that we challenge some of the safety and security controls that we have, using an adversarial mindset to really stress test what we are building, not just on an algorithmic level but also a system and a technical infrastructure level as well.

We know that a motivated adversary is probably not going to find a novel machine learning attack to gain access to an AGI-like system. They will probably employ a multitude of methods and attacks to gain access to the underlying system. So, it's important to put together a mix of folks with computer security backgrounds but also machine learning backgrounds in the same room to really put their heads together to think about what the vulnerabilities could look like. And for us, we try to optimize to identify novel attacks and/or effectively 0-days that are not being identified in the public domain for the emerging models that we work on.

>> RAM SHANKAR SIVA KUMAR: I like novel attacks that you mentioned there. And I know Rumman, you kind of have a very similar take on this with your bias bounty, where you are kind of curating a community of like red teamers. How did the first ever bug bounty come about? >> RUMMAN CHOWDHURY: The first ever bias bounty happened at Twitter. So, we - some of the folks in this room may be familiar with our image cropping model, and my team did an audit of it, identifying that there was actually bias in how the model was cropping, and it was fundamentally due to the fact that the original model was trained on human eye tracking data.

So, it incorporated human biases of tending to look at women more than individuals who did not appear as women, and to look at lighter skinned individuals. And it wasn't actually necessarily a byproduct of lighting, et cetera. And that's because it's based on, you know, human biases and human eye tracking data. And overwhelmingly, white women are used to sell things and as icons of beauty.

So, that's what we gravitate towards. At the end of it, we decided to just eliminate image cropping, which actually, interestingly enough, not just the ethics community but a lot of the photography community and design community were very happy about, because it was an auto crop. So people, you know, they were like, I spend hours perfectly composing a photo and Twitter just chops it into parts.

But that aside, we recognized as we did the bias inspection that taking these blunt instruments of gender, race, blah, blah, blah is hard. And frankly, all of us, even if we curate a team of experts, will have biases even in the teams we curate. To have the privilege to be on the team at DeepMind or NVIDIA or Twitter means you probably have a PhD. You speak English. You've gone to an accredited, very Western university.

And that already makes you like a one-percenter, even within the community that we consider to be very small and fringe, our little 0.1% world of people. So, how do we curate a larger audience? We opened up the model for essentially public scrutiny and we said, you know, do your worst. Like, find problems with this model. And we were so impressed by what we got.

I mean, people thought about religious head coverings. My favorite was actually somebody looked at military camouflage and how camouflage was being - people in camouflage are being cropped out. Disabilities, people with disabilities were often cropped out of photos because they're just at a different level or a different height or they are composed differently from able bodied people. These are not things that my little team would have thought of. So, that's evolved into a nonprofit.

And my latest iteration, this group called Humane Intelligence, what we're looking at actually is how do you publicly crowdsource useful and novel hacks and biases on context specific issues that are not within the realm of information security and AI expertise but actually require subject matter expertise and lived expertise? Like, how do we get a panel of doctors hacking large language models for medical misinformation? >> RAM SHANKAR SIVA KUMAR: Which makes sense because like, you know, as we are releasing these models to, say, specific verticals, not every organization is going to have these sort of verticals. And that also like, again, maybe Daniel and Vijay, I would love for you to weigh in, have you found it easier to take security people and teach them this like responsible AI mindset? Or is it the other way around, more like what Rumman is doing? So, what is the composition of your AI red teams? What do they look like? >> DANIEL ROHRER: I think it is certainly both. And again, you know, talking about aligning on alignment, how do we get all the right people in the room? And I think the most successful teams are the sort of composed teams and I think that's where the magic often happens. I think my concern at the industry level is there are never enough security practitioners.

There are certainly not enough AI security folks, and the intersection of those two is even worse. So, for me, you know, where I'm deploying my red team - and I'm very privileged to have a fantastic red team who are amazing at what they do - there are a lot of organizations that really say, I do not have those capabilities. And for me, the one lesson I hate learning is the one that I learn more than once, right? Every time you find a new method or novel thing, you know, my question to the team is how do you encode this? How do you share this? And we're releasing some training and tooling later in the year, and really, how do we elevate the ecosystem in this space? Because I think all of us, this is sort of a - you know, maybe this is a security thing.

You know, it's always been sort of a shared defense mentality. And I think the same can very much apply to those ethics and bias spaces too, where we can all lift together and really amplify the impact that each of these individual institutions is having, which is why we hang out all the time. How do we go about this type of work? I think that's really where the focus should be. >> RAM SHANKAR SIVA KUMAR: Vijay, would you want to weigh in? >> VIJAY BOLINA: Plus one to all of that.

>> RAM SHANKAR SIVA KUMAR: Plus one. I love that you're doing Reddit style now. >> RUMMAN CHOWDHURY: Vijay is hitting the like button.

>> RAM SHANKAR SIVA KUMAR: Yeah. Okay. We'll - I'm seeing one more audience question kind of flash in. Oh, please go ahead.

>> AUDIENCE: Thank you for your insights. I have a couple of questions. >> RAM SHANKAR SIVA KUMAR: Go for it. >> AUDIENCE: The first one is that red teaming is absolutely essential. But there's also a different type of checking: checking what it should not do, which is different from trying to break what it should do.

So, I wanted to get your opinion on that. And then the second question is, akin to a drug recall: if you discover a problem way later, how would you recall either the data or the processing that may have been poisoned in some way and has been factored into a product or a service built with an AI component? >> RAM SHANKAR SIVA KUMAR: That's a great question. Rumman? >> RUMMAN CHOWDHURY: That second question is actually really great.

I love the parallel also, right, between drug discovery and kind of pulling it back. I think that is one of the dangers of generative AI models, where it is very hard to decipher where something came from. With frankly most AI models, you can't very easily trace back to, oh, this is clearly the data that led to this adverse outcome.

And kind of the silly example I gave earlier: I genuinely don't know why ChatGPT was hallucinating about me and where it got that information from, so I don't know if rollback is possible. And what companies are doing today is maybe a more abstract thing I worry about. On one end, of course, you worry about no safeguards and just, you know, mass misinformation or hallucinations and other harms. On the other end, you know, the best instrument we have today is just blocking output on a case-by-case basis.

So, for example, if you try to ask most large language models for medical information, they will say, you know, I am not able to give you medical information; please go talk to a doctor. And that makes sense. But here's another example.

You can ask ChatGPT about heterosexual marriages and it will tell you about wedding practices. If you ask it about homosexual marriages, it actually clams up. And I understand why, because often people try to do this to incite or say malicious things about queer marriage or homosexual couples, but in some sense, that's weirdly erasing an entire subset of society that exists, by being almost overcareful. So, I think there is a bit of a pendulum swinging, and we haven't figured out what the answer is yet. And back to the very original point Ram made: literally, when we came up with this panel, ChatGPT did not exist. We live in a totally different world and it's actually, like, six months old.

>> RAM SHANKAR SIVA KUMAR: Yeah, this was in November, y'all. >> RUMMAN CHOWDHURY: I know. >> DANIEL ROHRER: If I could maybe also respond a little bit. I think it is important for anyone who is delivering models or systems to the world to have a response mechanism. Like, you know, NVIDIA has stood up a security portal. We can take security questions.

We also take AI questions. If you have an ethics or AI or bias concern, you can submit it to NVIDIA and we will go investigate it, right? So, build those mechanisms, those feedback loops, so that the teams can go and exercise judgment, right? If the systems are misbehaving, we certainly want to know about it and we want to be responsive to it. And again, this gets to transparency. For models, there are Model Cards that say, hey, this model is good for this, not good for that, and has never been tested for this.

So, to your question about the negative cases: like, hey, we don't know how it's going to behave in this case. And being very clear about that. So, if someone chooses to deploy a model in that context, they know they have some work to do to close those gaps.

>> RAM SHANKAR SIVA KUMAR: Also, it is quite interesting because I feel all organizations, including the one I'm part of, have responsible AI practices. Interesting question, and maybe Vijay, I would love for you to weigh in: how do you operationalize this and enforce it from a security standpoint? Do you see any role as a CISO for you to enforce it? >> VIJAY BOLINA: Yeah, the way that the question was constructed, or just framed, the way that I think about this is from a preventive mindset. From a computer security standpoint, we think a lot about detection and prevention.

And if we know, as an example, that large language models could disclose private information if prompted correctly, well, maybe we should think about how we prevent these large language models from being trained on private data, or data that's regulated, like PHI or other types of PII. So, you know, when you think about that from a preventative mindset, then it's like, well, what can I do across my data sets and my data pipeline to ensure that type of protected information is not present, or that we have a good understanding of whether sensitive information is part of that foundational layer the model would be trained on? And so, things like that I think are helpful when you think about it from a traditional computer security standpoint, and especially a preventative standpoint.
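As a rough illustration of the preventative data-pipeline check Vijay describes, here is a minimal sketch. The regex patterns and function names are illustrative assumptions, not any particular company's pipeline; a production system would rely on vetted PII-detection tooling plus human review rather than a handful of regexes:

```python
import re

# Hypothetical patterns for a few common PII types (illustrative only).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the PII categories detected in a single training record."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def filter_dataset(records: list[str]) -> tuple[list[str], list[tuple[int, list[str]]]]:
    """Split records into a clean set and a flagged set for human review."""
    clean, flagged = [], []
    for i, text in enumerate(records):
        hits = scan_record(text)
        if hits:
            flagged.append((i, hits))  # route to review; do not train on it
        else:
            clean.append(text)
    return clean, flagged
```

The point is the placement of the control: flagged records are caught before they reach the foundational training layer, rather than trying to scrub private data out of a model after the fact.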

>> RAM SHANKAR SIVA KUMAR: Sorry. Go ahead. >> DANIEL ROHRER: I was just going to say, just riffing, you know, that I think there are two models in security too: be preventative, defend, but assume breach. There is this language where we understand that we will never have a perfect defense in all cases, and we build the systems to respond to that. And we were having the discussion that the AI version of that is assume drift, right? Your models, even if they were perfect today, and I see this in my teenager who is constantly bringing me new words that I didn't know; an LLM, right, trained today, will need to learn new things.

It will be wrong, inevitably, at some point. And if you shift your mindset to that, you start building the monitoring, the feedback loops, the human interventions into the system, which makes it much more robust and resilient in the face of what might otherwise be inconclusive. Because it can't be perfect. It is always moving. >> RUMMAN CHOWDHURY: Actually, there's an interesting aspect of that that's an even more fundamental question, which is: when do you even know a model is ready? That is not a solved problem, especially for generative AI models. We don't really have benchmarks to say, okay, it passes.
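A hedged sketch of what Daniel's "assume drift" stance could look like operationally. The class name, window size, and tolerance below are hypothetical choices, but the shape is: track the rate of flagged model outputs against a launch-time baseline, and surface an alert to a human when the rolling rate moves past tolerance:

```python
from collections import deque

class DriftMonitor:
    """Minimal sketch: watch the rolling rate of flagged model outputs
    and signal when it drifts past a multiple of the launch baseline."""

    def __init__(self, baseline_rate: float, window: int = 100, tolerance: float = 2.0):
        self.baseline = baseline_rate       # expected flag rate at launch
        self.tolerance = tolerance          # how many times baseline we accept
        self.events = deque(maxlen=window)  # 1 = flagged output, 0 = fine

    def record(self, flagged: bool) -> None:
        """Log one model output as flagged or acceptable."""
        self.events.append(1 if flagged else 0)

    def drifted(self) -> bool:
        """True when the rolling flag rate exceeds baseline * tolerance."""
        if not self.events:
            return False
        rate = sum(self.events) / len(self.events)
        return rate > self.baseline * self.tolerance
```

The alert itself is deliberately not automated remediation; in the "human in the loop" spirit of the discussion, it is the trigger for a person to exercise judgment.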

And it goes back to a point you made earlier about like having a codex or some sort of way of - you know. And this is the world we're exploring right now. >> RAM SHANKAR SIVA KUMAR: And I want to kind of like - >> RUMMAN CHOWDHURY: He has been very patient. He's got a question. >> RAM SHANKAR SIVA KUMAR: Oh, sorry. Please go on.

>> AUDIENCE: Thank you. So, my question is kind of related to vendor onboarding. As we are onboarding all of these large language models, a lot of them just don't have any security reviews at all. A lot of them are open source and they're not really, they're not safe. But most of the time, people in security will say either go ahead and pass it, or they don't know how to review these things that are coming in. And most companies say: we're going to withdraw our data consent, so you can't use any of our data, and then just onboard.

Just, it's free, just onboard it right now. But what's the balance between letting everything in, or trying to be a blocker, I guess, as a security person, while making sure that we are only onboarding safe and secure large language models?

>> VIJAY BOLINA: So many ways to answer that. >> RUMMAN CHOWDHURY: I can start from one perspective, which is this slippery slope between research and product that I worry about quite a bit, right? Research can be put out into the world very incomplete, and that is the nature of research. But now, because of the rapid pace of AI adoption, literally things that are called research are productized overnight and pushed out as if they are product, but still called research. And to be very cynical, it's because someone sat with their legal team and their legal team said: absolutely do not call this a product, because then you have product liability. Once you call it a product, then you have to worry. You know, back to your joke earlier, but agreed, you have to talk with a lot of legal people because they will tell you these things. Once you call it a product - so just call it research.

It's fine. Then it can screw up as much as you want it to screw up, and you're like, oh, great, thank you. Give me your feedback and we'll fix it. I think that question is really great because it touches on a fundamental unresolved issue. And again, it actually does predate large language models.

We're just seeing it explode with this adoption. It's open source code being out in the world, being pushed into product with insufficient review, oversight, and understanding, and it is allowed to be out there and pushed under the hood into product because it is called research. >> RAM SHANKAR SIVA KUMAR: I mean, I feel like we cannot pass on this question without hearing from the company at the forefront of AGI research on our panel. Vijay, how do you think about this? Do you concur? >> VIJAY BOLINA: Yeah, I think the question kind of transcends vendor risk management or third-party risk management. I am just looking across the crowd and thinking about previous roles that I have had.

I can imagine the questionnaires that you all have to deal with have gotten larger because of AI, with the vendors that you all use employing some of these technologies in their products and services. And you all being concerned, maybe, about what they're doing from a data processing standpoint, a retention standpoint, and how they're using your data across various different types of use cases, or maybe even across other clients, and the potential for leakage and things like that. And so these are, you know, concerns that I think most security practitioners have to deal with across a wide array of technologies, whether it is a cloud service provider or a very specific type of vendor.

So, I think, you know, everything's a business decision. Taking a risk-based approach to assessing whether or not it makes sense for your organization is usually what we do as practitioners, right? We have to gauge whether or not the risk is worth the reward, and, you know, whether or not we're comfortable with having one of our vendors use data in a specific way, whether it is for AI or not. >> RAM SHANKAR SIVA KUMAR: I mean, I cannot imagine what calculations you have to do as a CISO when you are putting out these big models. >> DANIEL ROHRER: Honestly, I don't feel like it's different. I think the OSS comparison is an apt one.

It's supply chain, right? It's, you know, how are you evaluating what you are bringing in, what are the risks. And I mentioned Model Cards. I think NVIDIA released Model Card ++, which is an expanded Model Card view that talks about ethics, bias, and security, right? If you have an opportunity to bring in a model that you know a lot about, or a model that you don't, I mean, for many, the answer is obvious.

And certainly I imagine there are vendors in this space who do a lot in model supply, you know, OSS scanning and other things. That will emerge for AI as well, with tools to help you scale these fundamental questions as you are importing these technologies. >> RUMMAN CHOWDHURY: Can I ask a question of the panel? >> RAM SHANKAR SIVA KUMAR: Yeah, oh, please. Absolutely. >> RUMMAN CHOWDHURY: Actually, you should answer this too. >> RAM SHANKAR SIVA KUMAR: I don't want to play Oprah all the time.
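For intuition, an expanded model card like the one Daniel describes can be thought of as structured data a deployer can query before bringing a model in. This is a simplified, hypothetical sketch; the field names are illustrative and are not NVIDIA's actual Model Card ++ schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical, simplified model card covering use, bias, and security."""
    name: str
    intended_uses: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)  # "never tested for this"
    known_biases: list[str] = field(default_factory=list)
    security_notes: list[str] = field(default_factory=list)

    def gaps_for(self, use_case: str) -> bool:
        """True if a deployer would need extra evaluation for this use case."""
        return use_case not in self.intended_uses
```

The design point is the one made on stage: a deployer who checks the card and sees a gap for their use case knows, before shipping, that they have evaluation work to do to close it.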

>> RUMMAN CHOWDHURY: Yes you do. So, it's really interesting because there is this line between sort of cultural practices and enforceability. So, I like the parallel to the OSS world because in the OSS world, there isn't regulation. A lot of good behavior is community enforced. Right?

So, there is one world in which, yes, we are getting the Digital Services Act and the Digital Markets Act in the EU, and in the U.S., something will pass at some point. We'll get some sort of regulation. But there are a lot of interested parties talking.

So, that's one end. That's the sticks, right? And then, until then, there is the responsibility of all of us as a community, right? So, putting out good documentation is a really good example, but no company is obligated to use Model Card ++ or actually care about filling out the bias section. That is what a lot of responsible AI people say: we're just not given that weight.

So, the bigger question to all of you is: what are the practices that can be enforced in the security community culturally, rather than through an enforcement mechanism? >> RAM SHANKAR SIVA KUMAR: That is a great question for Daniel. >> DANIEL ROHRER: Do it culturally. I think, you know, certainly my experience at the organization is that a lot of people are willing to do the thing. It's: teach me how. How can I do this easily, quickly, efficiently with the stuff that I'm already doing? Because inevitably, you know, we're going to add two more things to your list of things to do before product launch. So, for me, the emphasis has always been, and this is true for security because it is the same dynamic there, of like, hey, you know, how much security hardening do I need to do? It's: how do I scale you? How do I make it fast? How do I make it efficient? Automate it where we can.

Human in the loop where we must. So, the emphasis for me is really: how do I encode red team learnings into tools and scale that out? If I can ship that with the product out to the field, for the partner, even better, right? Because now I'm not just scaling my organization, I'm scaling my ecosystem. >> RAM SHANKAR SIVA KUMAR: Just to ask a similar question of you, Rumman, because I feel you are the figurehead for all of responsible AI, representing all of them now.

Obviously, there was a huge letter saying: stop developing ML systems for the next six months, this is the responsible thing to do. Do you think - I mean, I don't think that any security person would think that way, to the best of my knowledge. Because again, to Vijay's point about risk-based systems.

So, the question back to you is: how do you bring this red team mindset, this assume-breach mindset, this assume-drift mindset to responsible AI folks? >> RUMMAN CHOWDHURY: I think this is how we already think anyway, right? There is no perfect model. There is no platonic ideal of any system, period. We don't have that expectation of software or, you know, toothbrushes or whatever. But there is a reasonable expectation that we have, and beyond that is kind of above and beyond.

But we know that nothing is perfect. But yeah, I mean, so you've referenced the letter. The thing about the letter, first, is that it was signed by some people who literally two months later turned around and said they are going to make their own large language models, so there's that. The other half is: six months is not a long time.

So, I looked at the letter. I jokingly was like, man, imagine - and everybody here works at some company where you're under an immense amount of pressure to build products and ship them. Imagine you had six months to deal with your tech debt and your org debt. Man, you'd come out of those six months blazing fast, a wholly efficient team. So, jokingly.

Not jokingly, I don't know what a pure moratorium does, because there was nothing in that letter that said here is what we will do in those six months. So, what? We are just going to stop, take a breather, and then Sam Altman gets a six-month vacation, and then he comes back, and then business as usual? So, the thing that was missing for me is: what is the remediation? What is happening in those six months that solves the problem? >> RAM SHANKAR SIVA KUMAR: That's a great point. And some of the top folks in the AI space signed off on those six months, so you've got even more fuel to that fire.

>> RUMMAN CHOWDHURY: I think it is appealing just as a concept. It's kneejerk, it's very appealing. But then if you think about it, what is it doing? >> RAM SHANKAR SIVA KUMAR: Perfect. We will take one last audience question before we wrap up.

>> AUDIENCE: Yes, thank you. As decision making has become optimized, usually using a human plus some sort of AI technology, do you think that as the technology behind AI evolves, we can continue to use it in security? Because then we'd have to actually trust the technology behind the AI to tell us the truth. Or do you think we'll actually become more human-centric, at least in the security of AI? >> RAM SHANKAR SIVA KUMAR: That was a great - Vijay, do you want to take that? Quick thirty seconds. >> VIJAY BOLINA: Yeah, I think there is always going to be a human in the loop, and it's going to be an expectation for the foreseeable future for us to ensure that we are validating what comes from these systems, whether it is a recommendation or a summarization. I think it's something that we should all make sure that we're doing correctly, if I were to just do a rapid-fire response to that. >> RAM SHANKAR SIVA KUMAR: I know we have three minutes.

Thank you so much for your question. As we wrap up, I wanted to do an RSA style wrap up. I'm going to go down the aisle. Rumman, I'm going to start with you. What can the audience do one week from now to build AI systems securely and responsibly? You've got ten seconds. >> RUMMAN CHOWDHURY: Oh, uh.

>> RAM SHANKAR SIVA KUMAR: Yeah. Time starts now. I'm kidding. >> RUMMAN CHOWDHURY: I will go back to my cultural question. Absent regulation,

what can you do culturally on your team to cultivate this sense of responsibility? >> RAM SHANKAR SIVA KUMAR: Thank you. Daniel, one month from now, what can people do? >> DANIEL ROHRER: Yeah, I think: identify your team. Know who the people are who have the ability to help you solve these hard problems. You know, if you're not talking to your legal folks - please, if you have an ethics person, be talking to them. But also, assemble the feedback loops in your organization and outside of your organization. Like I mentioned before, NVIDIA has the portal.

If you are deploying these systems, find ways to take feedback. You build trust by having interactions, and if you are not interacting with the audience impacted by the systems you are building, you are missing half the conversation. >> RAM SHANKAR SIVA KUMAR: Totally.

Vijay, as the person who traveled the furthest for this panel, from London: what about one quarter from now? >> VIJAY BOLINA: Yeah. Plus one to what everyone just said. Again, I think championing responsibility is extremely important. AI is something that will require cross-collaboration across various different kinds of expertise. And I think that's going to be extremely important.

Within a quarter's timeframe, I think it is extremely important for all the folks that may be exploring these technologies, or developing them from a product and service standpoint, to understand what their threat model is and take an appropriate risk-based approach, or a risk management approach, to addressing what could be considered near-term risks that need to be addressed now, versus long-term emerging risks that maybe could be tabled for a bit. >> RAM SHANKAR SIVA KUMAR: Well, thank you all so much for joining. If you want to hear more of this, we are going to have a follow-up at eleven o'clock at the RSA bookstore for -- and I. So, if you are interested, please do join us.

Really, a big round of applause to our amazing panelists who did this. Thank you all so much. Really appreciate it. Thank you.

2023-06-12 07:23
