MITRE Grand Challenges Power Hour: Sensible Regulation for AI Security

(futuristic music) - Good afternoon and welcome everyone to MITRE's Grand Challenge Power Hour. Today we'll be addressing Sensible Regulation for AI Security. Over the past 10 years and more visibly, over the past six months, AI has gone through tremendous technology advancement, leading many to believe that artificial general intelligence or super intelligent AI is just around the corner. With any new disruptive technology, humanity looks at how to shape its development and its application. From the geopolitical to the existential, a debate about how to regulate AI has captured the nation's attention, and there is no shortage of approaches attempting to address concerns. Among other topics, today we will explore potential options for AI regulation, and explore if or how to place guardrails for development and use of AI.

Before we begin, just a few housekeeping notes. This event is open to the public and to the press, so please keep that in mind when commenting in the chat or posing questions for our speakers. If you're interested in helping MITRE solve some of these big challenges, we'd love to hear from you. MITRE is hiring; you can reach out to recruiting help at for more information. Also, a link with more information about each of our speakers will be put in the chat as we progress through the agenda. We look forward to a robust and engaging conversation today.

Please put any questions that you have for the speakers in the Q&A section, and the moderator for each section will make sure to weave those questions into the conversation to make sure we have a rich discussion. Also, I wanna highlight that MITRE recently released a paper entitled A Sensible Regulatory Framework for AI Security. That paper is now available on MITRE's website, and a link to it will be posted in the chat as well. But to kick off our event, we are honored to have introductory remarks from US Senator Gary Peters.

As Chairman of the Homeland Security and Governmental Affairs Committee, Senator Peters has recently introduced bipartisan legislation to create an AI training program for federal supervisors and management officials. The training program would help improve the federal workforce's understanding of AI applications and ensure that leaders who oversee the use of these tools understand AI's potential benefits and risks. Many thanks to Senator Peters for opening up our event today. - Hello, I'm US Senator Gary Peters, chairman of the Homeland Security and Governmental Affairs Committee. And I'd like to thank the MITRE Corporation for holding today's very important discussion on how we can ensure that our nation is benefiting from artificial intelligence systems that are quickly being adopted all across industry and government.

These technologies have the potential to be transformative, helping to develop lifesaving drugs, driving advanced manufacturing capabilities, and helping businesses and governments better serve the public. There are also potential risks with AI that could impact our safety, our privacy, and our economic and national security. And that is why I've been working to make sure the United States is a leader in developing artificial intelligence, to not only maintain our global economic competitiveness, but to also ensure that these technologies are being used in a way that aligns with our American values. I'm proud to have authored several laws to ensure the federal government is using AI systems responsibly. And I have convened an ongoing series of hearings to examine the potential benefits and risks associated with these technologies and what guardrails need to be put in place to both encourage American development of AI and to ensure it is being used appropriately.

Today's discussion is a critical part of that effort, and I look forward to working with each and every one of you to ensure that we are prepared for this technological revolution. Thank you, and enjoy today's event. - Excellent. Thank you to Senator Peters for helping advance the work of sensible AI regulation through legislation and his comments today. It is now my pleasure to introduce our next guest, the Honorable Susan M. Gordon. In addition to serving on MITRE's board of trustees, Susan's the former Principal Deputy Director of National Intelligence, a longstanding and respected leader within the nation's intelligence community.

From 2017 to 2019, she advised the president on intelligence matters and provided operational leadership of the agencies and organizations composing the US intelligence community. She focused on advancing intelligence integration, expanding outreach and partnerships, and driving innovation. Welcome Sue, and thank you so much for being with us today for this timely conversation. - Thanks, Charles.

Great topic, great timing. - Excellent. So let's, I guess, get into it.

I'm curious your thoughts on all of the enthusiasm over artificial intelligence. We've seen some pretty significant technological breakthroughs, but accompanying those has really been a much broader public appreciation of, and engagement in, the advances that we're seeing. And I'm just kind of curious, from your perspective, how do you see the current spot we are in with AI, both from a technological perspective, but also sort of a societal perspective? - Yeah, great question. So it's funny, the other day someone asked me for my reflections on 2022, and I said, "I think 2022 will be the last year we talked about AI in the future."

And I've been at this a long time in technical analysis. The first time I attempted to engage what was called AI was in 1982. So we've been waiting a long time, and I think most people would say that, while there are some advances, and we can talk about those, the promise has fallen short of what we thought its potential would be. And I would say that's true until now.

I think the enthusiasm is warranted. The concern is, one, we certainly understand more today than we have previously, maybe because we've seen uncontrolled advances have negative effects. If you just think about social media and communications, we see what happens if things just advance without regulation. But it is going to advance, it is going to have an impact, and if we humans stay engaged, it can have incredible positive impact.

And that's on everything from arcane business practices to imagining how we can have totally different insight and ability to address new challenges. So I'm bullish on it, and I'm bullish on this moment, and I think that we've been waiting a long time and whether you're in national security or in commerce, this is great stuff. That being said, I do think that we have to attend to its potential misuse and its potential limitations, as kind of a governor on some of that enthusiasm.

So I love the paper you wrote, and I love this topic because as great as the moment is, and we shouldn't retard the moment as if we could, we really do need to think about what it needs to be in order for us to realize the benefit and not so much of the deleterious consequence. - And of course, this is happening at a time where we as a nation are thinking a lot about strategic competition and the advances of technology in the United States as compared to the advances within China. And so is this moment of Cambrian explosion, if you will, in AI-- - Good word. - Different because of the tensions we have with China and the technology competitiveness that we are facing today? - I think it's different, certainly because of technological ubiquity.

I mean really and truly, for so long we had such an advantage that things could happen, and we were in kind of control of them. I think you now can see that almost any technology is available to anybody, and certainly to China, certainly in compute. So I think that's one thing that makes it different. The other thing is just data abundance. So it's not just that we have these large language models, but there's a lot of data upon which to work.

And then you throw in adversaries and competitors who don't have the same kind of governor that our laws provide on privacy and access to massive amounts of data, and the opportunity to apply compute and models against data that we would not necessarily use. I'm thinking of the 2014 OPM breach as just one example. I think it is concerning, because they are at least equivalent in terms of capability, with potentially a bigger data set and without the natural governors on kind of what those lines are between public and private. - Yeah, no, that's a good point. And just a reminder to the audience, feel free to ask questions in the Q&A box, not the chat box. If you put it in the chat box, I don't see it.

If you put it in the Q&A box, it will show up, and we can filter your questions in throughout the course of the conversation. So Sue, in the paper that MITRE put out recently, we kind of broke the AI regulatory conversation down into a couple different bins. And the first is kinda the more traditional application of AI where you have, let's say, an autonomous vehicle that uses AI for computer vision and perception.

It's got image recognition algorithms identifying signs and cars and trees and roadways. And then it may also have another system that is being used for traffic planning and route management and for the higher level orchestration. And I think we have a lot of experience in the research community, and now increasingly in industry, of how you really do the necessary test and evaluation to make sure that we can trust the AI that we're deploying into these systems.

But I wonder how we should think about some of the regulatory or legal frameworks around essentially just very sophisticated software that is able to do more than it used to be able to do, increasingly in safety critical systems. - Yeah, so I think it's a great question, Charles, and I'll try and wander into it with a little bit of alacrity. I do think that every time there's an advance, we think that new regulation needs to be established. Sometimes the new advance is the same thing, just done differently, and so you can take the framework that we had, the control framework, and apply it there.

So if I think about your examples, one, just safety and assurance of the software and the models themselves, depending on what systems they go into, well, we know how to do that. We've done that. And so you have to have some way to know what is within each of your systems. Your subsystem example is a great one.

So that's one. Number two is, can it do things that are, for want of a better term, not desired, or I'm gonna say illegal is another term. Well, you can put rules into systems, but ultimately the people responsible for that are the people who are responsible to follow the laws and regulations.

So I don't know that I can take the people outta the system. So I ought to assure the system, make sure that it's safe, make sure that it meets criteria and standards of performance. Then when I put it into play, does it do, or could it be, misused? Misuse, I think, ultimately and disproportionately falls on humans. I think we probably have to come up with mechanisms by which they have assistance in knowing whether guardrails have been breached or not. And then I think the last one is, and you talk about this a bit in the paper, and I think this goes back to your China question as well, and that is we are not the only ones advancing. So if we decide that we are not going to allow advance in certain domains, we say, we're not gonna pursue that.

It will be pursued, and it will be pursued outside of our view, and then we'll be playing catch up to do it. So I think one of the things that our kind of regulatory community, and the way we think about it, has to be recognizing is that we don't have the power to stop the way we used to. And so our responsibility now is to almost be leaders, leaders in standards. Something that we don't like doing in general from a governmental perspective, but I think we may have to in order to be the ones driving the advance, and in driving that advance, you actually have the best opportunity to regulate use. - Interesting. I wonder if there are some interesting analogies to how we've seen the regulatory landscape for cybersecurity evolve. Just as an example, the National Cybersecurity Strategy that the White House recently published includes sort of this new element of holding software developers accountable for vulnerabilities in their software.

Historically, it's been the people deploying the software who have potential legal liability. If they get hacked, people can sue them or go after them. But the White House was looking to try and extend that legal liability to the people who developed the software, to try and motivate more secure software development.

Are there similarities in the AI space as we think about the different entities involved and the way we might try and align accountability and incentives? - So I think there are certainly similarities between cyber and AI in terms of not only how you assure it, but whose responsibility it is to assure it, and the kind of multi-pronged way you need to address it. And that multi-pronged way includes, I mentioned humans before, in cybersecurity, we know, we've learned this: you cannot make technology do all the work.

You just can't. You need to have insider threat programs, you need to have physical barriers, you need to do all those things in order to have your best chance. You need to make risk decisions at the top about what you're exposing and what you're not. So that model that we have learned about how you actually best protect yourself from a cyber perspective all the way up to what risk are you managing is something that we need to apply here.

I do think it's a really interesting question about whether manufacturers are responsible for the safety of their products. I like that a bit, and I like it a bit because my view is that national security is no longer the sole purview of the government, that business leaders and citizens are making national security decisions every single day. And so I think that the idea that even someone developing code has a responsibility for what that is begetting isn't a terrible idea. But I say that with some caution, because I think you can easily end up in a circumstance where it's unclear who's responsible for how much in a chain, and whether they can afford that cost, and you're pretending that you have that responsibility when in fact no one could actually carry it out in your business system.

So, one, I think cyber's a good model, and we've learned some things about that in terms of risk management, not absolute protection, and understanding where the risk is. I do think responsibility of developers is a good idea. Maybe that looks like standards. Back in '32 or '33, right after the stock market crash, we figured out that fraud was a national security problem.

And so we introduced generally accepted accounting practices. Maybe it's time for that kind of standard so that everyone has to meet some things, but making someone responsible for every single thing that happens, I think, is an unreasonable standard and is likely to create less adherence to those standards than more. - Yeah, no, there's this whole discussion around this idea of duty of care and the responsible development of a lot of these digital systems. And you could imagine in the autonomous vehicle scenario, is it, I dunno, the people who curated the training data? Is it the people who designed the model? Is it the people who trained the model? Is it the people who integrated it into a bigger system? Is it the driver who's using it? There's a lot of different pieces and certainly no significant case law yet on who's really responsible for which piece of a very complex chain. - Yeah, but let's continue on this GAAP thing for a minute. So just because we have audit committees and accounting standards doesn't mean there's not fraud, and fraud is a different issue than a company not being responsible because it didn't have policies and practices in place.

So I do think there is probably a way to thread this needle where we set a standard for what is responsible, versus saying you are responsible for every accident or everything that could ever happen in the world. But I do remember Bran Ferren, who was the head of Disney Imagineering, talking at the CIA probably 30 years ago now, it feels like yesterday. And he was talking about the problem with assurance, and he said, "No one knows what's in a computer." (Charles laughing) Right? It's built on so many things that existed so many times before. And so if you think about that explosive problem, I think there's probably a good argument to say this is a really good area for government research, because how you assure these things is probably not something best left to the commercial standard, but actually something we need some real research in.

- Yeah, yeah. Well, I think much of this conversation is the conversation that we've been having for several years now, really since the deep learning AI summer that kicked off in 2012, though I think in the last year, the tenor of the conversation has changed because of the explosion of large language models. And suddenly we no longer see AI as this discrete, testable component in a system, but rather this amorphous thing that can live on the internet and has its own agency. And in the paper, we treated this as sort of a second bucket of the sorts of things that you might wanna regulate and think about.

I mean, two of the big concerns, at least near term, are that these large language models will expand the scope and scale of mis- and disinformation and offensive cyber operations. And I'm curious, I mean, do you see that as a logical outcome for the growth of these models? - Yeah, I think one of the unfortunate realities is, misuse tends to accelerate faster than proper use. It just does. If you wanna stay with our cyber example, most of the really difficult cyber tools originated with cyber criminals. And so I think one of the problems is that these emergent capabilities, on uncontrolled data sets, in a data economy that we don't control, in environments that are largely outside the government, can just explode and find purchase without any oversight. And weirdly, potentially without even recognition that it's happened, because it just changes the world around us in ways we don't see.

So I think that's a valid concern. And I think it's one of the reasons why the government, or entities like MITRE that have this really lovely space between the private sector and the government, and have the ability to advise but also are living in part of the open environment, need to engage. I think the government is afraid of this, and doesn't leap into the fray in terms of how it can be used, because it's in use that you discover misuse. I am so worried that it will move beyond our ability to do reasonable things with it before it becomes a commodity. I mean, right now we're talking about it as a commodity, but it isn't yet, it isn't at that place. So to me, one of the reasons why your paper and this conversation, and much smarter people than I, you're one of them, it's important to have this conversation, because this moment may pass, given the speed at which things are advancing, given what's happening. I was talking to someone in the valley, and they said the rate of advance is more than anything they've ever seen.

And if you launch a hackathon with LLMs, you will have it full of participants within the first minute of its announcement. So to me, if the government doesn't get itself involved in this space and in the advance of it because they are so concerned about misuse, they're almost guaranteeing that it'll move faster than we'll have the ability to come back and clean it up. And this is something where we can probably build a greenhouse, but I don't think we can make this house green if we let it go without it.

- So another interesting thread: one of the questions that has come up in the Q&A box is really asking about, at least with the models we have today, things like ChatGPT being sort of accidental insider threats. So for example, here at MITRE, I held a big event earlier this year, probably in February or March, where we were talking about it with the workforce. And one of the questions that we got was, "Well, I spend my days putting together complex acquisition documents for big Department of Defense programs, wouldn't it be great to have ChatGPT be able to automatically generate this sort of boilerplate language?" And my guest for the call immediately chimed in and said, "Well, please do not put any corporate information into ChatGPT. It's using that as source data for training. Anything you put in has to have been approved for public release." Otherwise there's a concern for data leakage.

I wonder, is this kind of a whole new area around training, where people need to understand appropriate uses of even the tools that we have today? - So for sure. I was at a university talking to a professor, and the conversation came up, I can't remember exactly how, and I said, "Do you allow use of ChatGPT on exams?" And they're like, "You can't disallow it, because if we say no, they'll cheat." Like, it is there, it will be used. So I think this is a moment, so we'll put aside all the things that probably need to be done in terms of assurance, even internally, and long-term research, and say this is a culture and training moment. Who are we, what are our standards? What is your responsibility? Do you understand what's gonna happen? Do you understand the ecosystem enough to know the ramifications of your choices of what you put in there? My belief is 93.8% would be doing it for all good reasons and could create absolutely horrific effects just because they didn't understand it.

I think the Morris worm wasn't intended to become the problem it was; the guy just wanted to see what was on this cool new internet thing. So I think training, but I also think culture, going back to your cyber point, it's making sure people understand their responsibility for the collective and how the collective is affected. I do say, though, to my professor friend, I am not sure that it's gonna be effective to just put out a blanket statement and say you may not use these models at work. I think that will be problematic, because in a minute it will be so common that it will be almost difficult not to use it. And so that kind of telling somebody that they can't do something, and not being able to monitor whether they have, I think is probably not gonna be an effective moment. So it's more culture, more training, more getting people to understand the limits of it, and then getting them to advance it within a safe environment. And having the wherewithal to advance it, I think, is an equal part of protection against misuse.

- Yeah, I think the example would be a corporate policy requiring you to use a typewriter because Microsoft Office could get hacked or something. - But this, I think there's a lot of literature on this, is that you actually get more safety if you allow more use. - Right. - Right. So I think that's something that has to be part of this.

- So one of the audience members asked a question really around the role of open source AI. We've seen, really since Facebook, I'm sorry, Meta, open sourced their model back in March, and the trained weights for that network were subsequently leaked, the open source community finally got their hands on a trained foundation model and really has kind of gone wild with extending it and innovating on top of it. They don't have the compute resources of the big hyperscalers.

And as a result they figured out how to be much more energy efficient in training. And we've seen now really this growth of open source, sort of domain specific, models proliferate. - Right. - And so how should we think about this? It was sort of easy to regulate three or four big companies and their big compute, but if you've got this massive open source explosion, how do we wrap our heads around just the breadth of the challenge, or the breadth of the opportunity perhaps, to put it in a positive light? - So on your last statement, I always think of breadth of opportunity, and whether I'm right or wrong, this is my general message, especially now: think about prosecuting the opportunity rightly. Not about protecting yourself from bad things happening, because it's just moving too fast, and you'll find more as you proceed to try and do things.

I don't know what to do about the opportunity to misuse these models on the huge open data sets there are. I think it is a big problem in terms of information and disinformation. I would love it if we could get people to turn these tools on the adverse of the misuse, say on detecting misinformation, detecting vulnerabilities. And if it were in Sue world, I'd probably come up with a giant cat and mouse game where I'm playing the tools against each other to find best offense against best defense, force the offense, and so I'm gonna get ready for the defense. But I think the biggest opportunity I see is what you said: yeah, they're these big generalized tools for big generalized data sets, and they have big generalized problems with hallucination and a whole bunch of stuff just because of how general they are. I think there is really cool opportunity in the specialized data sets that we tend to have in abundance in our community, where we can understand and develop models to work against specialized data sets, and try and suss out some of the problems of hallucination and other problems on those specialized data sets that maybe we can give to the more generalized population.

Because I think tackling it at the big level is gonna be really difficult for us. But as we use our data sets and develop models against them, and tools against them, I think we might find some of the better approaches to assurance with the large sets. - Yeah, I think some of the hallucination issues are gonna continue to be managed down. I think as we saw from GPT-3 to GPT-4, dramatically better performance, with Bing, an expectation of it using citable internet references over its own sort of assessments. And I mean, the next generation of large language models are gonna have a whole bunch of new features within them that I think will make some of these concerns unique to 2023, I think.

- I think you're right. - But it'll be interesting to see how it all evolves over time. Some of the questions here in the chat, and some of the things we also have talked about recently, is really what happens when we take one of these advanced large language models and put it into an environment where it's able to write its own code and execute its own code. It perhaps has access to a wide range of open source hacking tools, and it's made available to, maybe not even nation states, but I don't know, ransomware groups. How should we think about being able to put safeguards in place so that AI, large language model orchestrated cyber operations don't just, I don't know, completely overrun the internet? - I dunno, but it's a great question.

You know what popped to mind as you were talking? I'm old, so I have conversations with a lot of people. I was talking to someone from Cisco, and they said, "You know, if we had known how cyber was gonna advance, we would've developed routers differently." Right, if we had thought about how people would go from place to place, how they would move horizontally within the ecosystem, we would've done it. I'm wondering, just as you were talking, we do know a lot about, or we could imagine a lot about, how it can be misused.

One of the interesting things would be not just about how the models are developed, and how you control the models, but how we control what they operate against and what you see and what you monitor, and what you identify as misuse. I don't know how close we are to opening the pod bay doors, HAL. And if you're my age group, you understood that allusion and you're like, "Oh, '2001: A Space Odyssey'."

I don't know how close we are to that. I tend to believe that for now, I wouldn't let humans off the hook because I think we're the ones that are gonna think about the great uses, and I also think we're gonna think about how you would not want to use them, or how you would spot their use. So to me, I think it's a real concern. I think we're a bit away from it.

And if I were gonna try and counter it, in addition to the advance of responsible use issues like you articulate, I would spend some work being a little red cell and say, "Okay, what would it look like? How would it move? And how do we need to think about our ecosystem to, in a way, protect itself against those?" 'Cause if we've learned anything from cyber, one of the things we have learned is misuse will happen. And so how do we learn from that, and how do we spend some resources now on the ecosystem design, rather than just trying to keep people from misusing tools? - Yeah, no, great point. And I think obviously there are opportunities for a lot more research here as we think about some of the assurance aspects. But I wonder, as we kind of wrap up, when you were the Principal Deputy DNI, one of the big sort of IC-wide initiatives over which you presided was the Augmenting Intelligence using Machines initiative.

And I wonder if you might reflect a little bit on the opportunity for the intelligence community to build on a lot of the capabilities developed with sort of the older generation of new AI, and how should the intelligence community be thinking about large language models for their mission? - Yeah, so I think one of the things we did right with AIM was name it Augmenting Intelligence using Machines, because that community's focus should be on the mission it needs to perform in the world in which we now find ourselves, where we're increasingly gatherers, not hunters. And we can get anywhere, and so can someone else. So this was really about looking at what's coming and saying, "How do we augment our mission?" So I think we had that right. Gosh, I wish I was still there, 'cause I think now we have so much more clarity on it. From an intelligence perspective, straight up, there is nothing more important to them than being able to know what the world knows. And if it's important, they know it.

And that goes through data, and it goes through the tools to be able to use and understand a little more, a little faster. I don't know any mission space that more requires, and more should drive, advance in this area. I think this is a really good domain for investment of research dollars, significant research dollars. Half of the questions that you asked today, Charles, really need answers and aren't going to be commercially addressed as well as we need. And they are big questions that luckily aren't here today. They're five minutes from now, so let's get some research against them. But from an intelligence community perspective, they are perfectly positioned to say, "Man, if I'm gonna be successful at advantage in the future, I've got to be masters of the data domain.

And in order to do that, I've gotta be able to command these tools." And because we're intelligence officers, we can think about how they'd be used against us. So we might be on the forefront of using it. So there is nothing about the past four years that has diminished my belief that the intelligence community has got to be a driver of what we need of these things.

There are people that say the government should never lead, that government should always be a first follower. I think this is one where it maybe is a first follower in terms of the tools that become available, but it ought to be a driver of what tools we're gonna have tomorrow, just because it has got the kind of space that should allow a pretty clear view on both sides of the equation. And if I take off the spy hat and put on the social good hat, clarity and truth and light and all the things that potentially can come when you understand what's happening in the data world make the world a safer place. - Yeah.

- It doesn't make it less safe, it makes it more safe. And so I think there's a lot of goodness from that too. - Excellent.

Well, I really appreciate your optimism, and this is gonna be a fascinating area to watch as these tools just continue to exponentially take off. So thank you so much for your time. With that, I would like to introduce Doug Robbins, MITRE's VP of Engineering. In this role, Doug leads MITRE's innovation centers and the research, engineering, and prototyping of new disruptive solutions in the public interest, using world-class technical capabilities, solutions, platforms, and workforce. Doug will moderate today's panel discussion. - Great, thank you Charles.

And thanks to everybody for attending. It's an honor to moderate the panel with such an esteemed group of folks. We have joining us today Taka Ariga, Chief Data Scientist and Director of the Innovation Lab, Science, Technology Assessment, and Analytics at the US GAO; Dr. Terry Sejnowski, Francis Crick Professor at the Salk Institute and president of the Neural Information Processing Systems Foundation; Dr. Peter Stone, the Truchard Foundation Chair

in Computer Science and University Distinguished Teaching Professor at UT Austin and the director of Texas Robotics. And last, but certainly not least, Dr. Benjamin Harvey, CEO and founder of AI Squared. I thought what we would do is start our conversation where Charles and the Honorable Sue Gordon started. And that is, I think the way Sue said it was, there's certainly enthusiasm that's warranted, but also concern about misuse and limitations. So I thought we would maybe start with Taka, and if you could introduce yourself, say a little bit more, and then just your thoughts about the general balance between the enthusiasm and the great good versus the misuse and risks that we're introducing with AI. So Taka.

- Yeah, thanks Doug for having me. And it's always such a pleasure to be speaking at a MITRE event. My name is Taka Ariga, I'm GAO's Chief Data Scientist. But the other hat that I wear is directing the work of our innovation lab, and that reflects the dual mandate that GAO has: trying to figure out how we might conduct evaluations of something like AI implementation as a matter of our oversight function, but also like any other entity.

We aspire to use artificial intelligence as a capability to augment our own oversight capacity. So there's a very specific methodological approach that we apply to try to modernize an agency full of professional skeptics. Happy to talk more about that in a second, but as I testified before Senator Peters last month, I mentioned we're certainly in an era of algorithmic renaissance.

As a data scientist, I could not be more excited about the possibility, not only the computing power, but the availability of the data and what we can do with that information. It's certainly marvelous, but at the same time, I'm also an avid reader of a lot of the coverage out there. Some of it is borderline hysterical in terms of what AI could do. Words like "extinction" have been tossed around, things of that nature. In terms of that risk, I don't necessarily view AI as different from any other technology.

When Charles was talking about it, the notion of mandating people to use a typewriter doesn't make any sense. I think it's the same with AI. Sure, extinction could be part of the problem if we left it alone and did nothing about it, just like nuclear weapons could potentially pose an extinction risk. But there's certainly a lot of conversation around how we promote not only regulatory guardrails, but also, at least within the federal government, a lot of effort on how we promote digital literacy, so that the users of these tools can recognize the characteristics of the output and actually make sound judgments from it. If we blindly trust output, not only from generative AI but from AI in general, there's always a risk of getting caught up in hallucination, caught up in misinformation. So I think that's an important element.

But certainly on the governance structure, there's a lot of talk that Congress perhaps needs to take action around putting up regulatory boundaries, but the notion of putting in a sound governance structure is nothing new, and you don't really need a regulatory regime for that. It really gets into the idea of how you make your AI decisions: do you buy, do you develop, what kind of variable selection process do you use, what kind of monitoring regime do you put in place? And so there's a gamut of conversations that could be had, not only on the technical side, but also on some of these equity and societal impacts of AI as well. So within GAO's AI accountability framework, which is something that we published back in '21 as a form of evaluative blueprint, we highly emphasize that having a sound governance structure in place is important, like with any other complex technology system. And as a matter of trust but verify, a third-party entity like GAO would come in to make sure that evidence of governance and evidence of deliberation exists, not only from a federal government perspective, but from an industry service and product provider's point of view as well. So as long as we're able to do that, I'm more optimistic than not that we can keep the humans in the loop, if you will. One of the concerns that I very recently raised with a congressional staff member is that there's a danger of over-indexing based on 2023's concerns.

So, it's all about generative AI right now, but certainly there are risks associated with general AI, if you will. From my perspective, I think the next frontier challenge is really around instantiating the different types of risk profiles. An autonomous vehicle has a different risk profile than a medical diagnostic, than financial services, and we shouldn't treat them all the same. Sue was mentioning some of the intelligence ramifications.

So I think that's another area of substantiation. We've been talking about general AI as a sort of high-level construct. I think the next frontier is to be more specific about what those risks look like, and how we then, individually within those domains, systematically address those risks. - Great, thank you. Let's see, Terry, why don't we jump to you, and you're muted. (Doug laughing) - Sorry, yeah. This is an incredibly important moment in history.

I've lived through it. I was one of the pioneers in the 1980s developing learning algorithms for multi-layer networks. And one of the projects from my lab (I was at Johns Hopkins at the time) was what might now be called a small language model. It was a tiny network by today's standards, but it did something which was shocking for that time.

So text to speech was an unsolved problem, in the sense that English is incredibly irregular. There are rules, but they're broken, so there are a lot of exceptions. But I was able to teach a very small network, with one layer of hidden units, to pronounce English text to the point where you could understand it. It was real, and it was a counterpoint to the AI at the time, which was based on logic and rules, and it was able to handle both the rules and the irregularities, the exceptions.
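For readers curious what such a network looks like, here is a minimal toy sketch in the spirit of that text-to-speech work: a single hidden layer mapping a sliding window of characters to a phoneme class. Everything here is invented for illustration, including the character set, the tiny phoneme list, and the four training examples; the real system used a much larger window, phoneme inventory, and corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

CHARS = "abcdefghijklmnopqrstuvwxyz_"      # '_' pads the window edges
PHONEMES = ["k", "s", "ae", "t"]           # tiny illustrative phoneme set

def encode_window(word, i, half=1):
    """One-hot encode the characters in a window centered on position i."""
    padded = "_" * half + word + "_" * half
    window = padded[i : i + 2 * half + 1]
    vec = np.zeros(len(window) * len(CHARS))
    for j, ch in enumerate(window):
        vec[j * len(CHARS) + CHARS.index(ch)] = 1.0
    return vec

# Toy data: "cat" -> /k ae t/, plus the first letter of "cent" -> /s/,
# the kind of 'c' rule-with-exceptions that motivates learning over rules.
examples = [("cat", 0, "k"), ("cat", 1, "ae"), ("cat", 2, "t"), ("cent", 0, "s")]
X = np.array([encode_window(w, i) for w, i, _ in examples])
Y = np.eye(len(PHONEMES))[[PHONEMES.index(p) for _, _, p in examples]]

# One hidden layer of sigmoid units, softmax output, trained by backprop.
W1 = rng.normal(0, 0.1, (X.shape[1], 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, len(PHONEMES))); b2 = np.zeros(len(PHONEMES))

def forward(x):
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))           # hidden activations
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    return h, p / p.sum(axis=-1, keepdims=True)

for _ in range(2000):
    h, p = forward(X)
    d2 = (p - Y) / len(X)                  # softmax cross-entropy gradient
    d1 = (d2 @ W2.T) * h * (1 - h)         # backprop through the sigmoid
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

def pronounce(word):
    return [PHONEMES[int(np.argmax(forward(encode_window(word, i))[1]))]
            for i in range(len(word))]

print(pronounce("cat"))
```

With these four examples the net recovers /k ae t/ for "cat" while still mapping the initial "c" of "cent" to /s/, which is the point: the same weights carry both the rule and the exception.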

At the same time it was a harbinger, and it actually suggested that these networks are really good at language, which is not too surprising, because so is the human brain, by which neural networks were inspired. So that is part of the history, but where are we today? What we have learned is that as you scale up these networks, they get better and better, and you reach these thresholds where suddenly you can solve a problem that before you couldn't. And what's a little bit even more striking, now that we have large language models, is that we're taken aback because we often don't know what the capabilities are.

In other words, nobody expected that they could write computer programs. That was something that emerged. And there are a lot of other things like that.

So we're still in this very nascent period of exploration. It's going in many different directions. Someone mentioned that, well, it's going to be adopted regardless of whether you try to prevent it by saying that you're not allowed to do it. Well, the cat's out of the bag. Everybody's using it in industry and school and probably even in your own company, where you may not be aware of what it's being used for. I'm using it, by the way, to write a book about large language models.

I'm writing it with about three times the efficiency with which I wrote my first book, The Deep Learning Revolution. So, this is here to stay, and the only question, as we're discussing today, is how to keep it from going in directions that we didn't intend. And I would just like to say that we don't know where it's going, so it's hard to steer, hard to anticipate. And if you try to, for example, slow it down in some directions, you can be sure, as Sue said, someone else is gonna do it for you. So you better be ahead of the game. Now, okay, just to give a sense for where I think things are going in the near future, in the next couple of years.

First of all, multimodal is already starting. In other words, it's not just words and language. It's gonna be images, it's gonna be audio, where people are already doing speech recognition, and the language translation is unbelievably good. But there are many, many other ways of integrating information coming in from the real world in real time that will be transformative, because it'll be more than just a talking box. It'll be a moving target, so to speak. Now I'll give you just one concrete example I know a lot about, which has to do with the brain. So large language models are primarily modeling what's going on in the cerebral cortex.

The cerebral cortex is our knowledge store. And so large language models are very good at extracting information, just like our brains are, from what you know, except they can know a lot more than a single person. Now, Demis Hassabis, who's the CEO of DeepMind, which is now part of Google, recently announced that he was going to use reinforcement learning.

Reinforcement learning together with deep learning was used; it actually taught itself, it learned on its own, to beat the world Go champion back in 2017. That was a wake-up call, by the way, for the Chinese. In Asia, especially in China and Japan and Korea, Go is almost like an art form.

And it's a very, very deep part of the culture. And the fact that this program beat the best human was shocking. For Americans, what's Go? It wasn't that impressive. But what that means is that the Chinese really made it a national initiative. But to go back to DeepMind: DeepMind has now taken over Google AI in Mountain View and the Google Brain group, and they're putting together a deep learning model which is gonna be another quantum leap beyond GPT-4.

It's gonna be called Gemini. And it's going to incorporate the reinforcement learning that was the magic sauce for the AlphaGo program. Right now, these large language models don't have any goals. They just answer questions. But what it means is that it'll be able to explore the world by creating goals. In AlphaGo's case, the goal was learning to play Go, but if you can give it another goal, it will be able to learn to accomplish that goal.
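The learn-from-a-goal idea Terry is describing can be illustrated with the most basic form of reinforcement learning. This is a hypothetical miniature, tabular Q-learning on an invented five-cell corridor, nothing like the deep RL in AlphaGo: the agent is told only that the rightmost cell is rewarding, and it teaches itself by trial and error which actions accomplish that goal.

```python
import random

N_STATES = 5             # corridor cells 0..4; reaching cell 4 gives reward
ACTIONS = [-1, +1]       # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def step(s, a):
    """Environment dynamics: move, clamp to the corridor, reward at the end."""
    s2 = max(0, min(N_STATES - 1, s + a))
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(500):                         # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what's known, sometimes explore
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * max(Q[(s2, act)] for act in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy: which way to step from each non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nothing tells the agent "go right"; only the reward does, and the Q-table accumulates that knowledge until stepping right everywhere becomes the greedy policy. That is the sense in which a goal, rather than a labeled dataset, drives the learning.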

And it's another part of the brain, called the basal ganglia, that's responsible for reinforcement learning. There's just so much more that we know about how brains work that's gonna be incorporated: long-term memory, for example, and real-time sensory input. All of the things that we know are gonna be important for creating autonomous AI are gonna happen within the next 10 years. - Well, I think that's a good cue. I'm gonna come back to that, and how we do responsible research in the area, in a moment, but thank you. Let me pass it now to Peter. - Sure, thank you.

Thanks Doug, and thanks everybody. It's an honor to be on the panel. A word about myself: I'm faculty at the University of Texas at Austin and director of Texas Robotics. My research is in reinforcement learning, the area Terry was just talking about, and in robotics. I also, until recently, was the chair of the standing committee of the One Hundred Year Study on Artificial Intelligence and the first author of its report in 2016, which touched on many of the issues that we're discussing today. And I'm also executive director of Sony AI America. My history doesn't go back quite as far as yours, Terry, but my undergraduate thesis was implementing a neural network to do digit recognition, advised by Jack Cohen at the University of Chicago. And then my career has taken me through many of the artificial intelligence paradigms.

As Taka said, right now it's all about generative AI, but I've done research in symbolic artificial intelligence, AI for classical planning, in probabilistic modeling, in Bayesian reasoning, which was sort of the next wave. And now, of course, everybody's working on these generative methods. But there have been a lot of great points made already, so I'll try to keep it brief and just emphasize some of the ones that I think are really important.

I think Taka said that AI technologies are in many ways just like many others in the past. And I think that's spot on: there are many technologies in the past that have had positive uses and negative uses, and we have a history of having conversations like this one, where we think about ways to put regulations or controls in place that maximize the chances that the positives will outweigh the negatives. There's no stopping people from still trying to use technologies for bad purposes.

And that has happened all throughout history. I recently went back to the Franck Report, which was written by nuclear physicists advising President Truman not to drop the bomb. And they wrote in the 1940s that it was without doubt, at that time, that the invention of the airplane had caused more harm in the world than good. And there have been continued negative uses and negative implications of air flight, but there have also been a lot of good uses, and there have been a lot of regulations in place to make that happen. One thing that's a little bit different is the pace. The Model T Ford was introduced in 1908, which was when cars became widely accessible, and it took 50 years before there were a hundred million cars in the world, which gave time to come up with traffic signals and stop signs and road networks and parking garages and seat belts. It took even longer to come up with airbags.

That was not 'til 70 years or so. But regulations were put into place, and they still remain. Nobody would claim that it's all good, but there are positive implications of the technology and negative ones. ChatGPT got to a hundred million users worldwide not in 50 years, but within a month or two, which doesn't leave much time to get all of that regulatory framework in place.

And that's what I think everybody is scrambling to do right now: to envision and to understand what the good uses are, and what the possible bad uses are, and how we can tilt as much as possible towards the good uses. And I do share the optimism of Sue. I think it would be a huge opportunity cost to try to tamp it down; there are too many good effects of artificial intelligence. And a parallel was also drawn to nuclear. That's another example, where we've been living under the threat of nuclear annihilation for years.

There is an argument that all the regulation in place has diminished one of our best shots at saving ourselves from climate change, because this is a source of clean energy. So with all of these technologies, there are positives and negatives. As Taka mentioned, the one thing other than the pace of innovation that's different with AI is the way people are throwing around words like extinction, the long-term existential threats.

And if that is of real concern, then the equation changes from just positives versus negatives to this chance of complete annihilation. Personally, I mean, it's an ongoing debate, and I think there are interesting arguments on both sides. Personally, I think right now that, at least for the foreseeable future, not just 10 years but a much longer period than that, that's not one of the realistic risks, and it's a little bit of a distraction to be thinking about it. We should be having the conversation about the human uses and misuses of the technology. It's very easy to get caught up in the moment and say, oh, if all of this progress has happened in the last five years, then everything's gonna be solved within the next 10 years. But we've heard those kinds of prognostications many times. It happened when Deep Blue beat Garry Kasparov, it happened, as was mentioned before, when AlphaGo beat Lee Sedol, it happened when Watson won at Jeopardy. Often when there are breakthroughs in technologies, people quickly overestimate how quickly the near-term problems are gonna be solved.

I think we're still a very, very long way away. The other thing I'd like to emphasize: Taka also mentioned the need for AI literacy. And that I think is what we really need to emphasize. We need people to be able to understand what is realistically possible with AI tools right now, and what is not realistically possible.

We need to get people who are literate in AI at all levels of government, at all levels of industry, whether they're high-tech industries or not. And here at UT Austin, we did just launch an online master's in AI for that purpose, to scale up AI education at a price point of $10,000, which people can do remotely while they're doing other jobs. So we're hoping that this will contribute to the greater pool of people with AI literacy who are available to be in policy positions, in government positions, and also available to industry. So, I think there's still a lot to do. Terry mentioned multimodal aspects.

I agree with that entirely. I think there's still a lot to do with figuring out where generative AI can impact robotics, and sort of real world kinds of implementations. But it is a really exciting and crazy time right now.

The capabilities of the models that exist are phenomenal, and I'm really looking forward to the ongoing, community-wide understanding of what the full scope of positive uses is, what the things we need to protect against are, and how to do that from a regulatory standpoint. And I do also commend MITRE for putting out that paper. I think there's some really sensible thinking in there as well.

- Thank you. And I see some head nodding. Maybe we'll get back to whether there are near-term concerns versus killer robots and Skynet.

So, but with that, Benjamin, over to you. - All right. Hello everyone.

My name is Benjamin Harvey. I am the founder and CEO of AI Squared. I'm also a professor and director of the Trustworthy AI Systems program at George Washington University.

I'm also a research scientist at Johns Hopkins University, in the biostatistics department, as well. So, a little bit about my background. I worked at NSA for over a decade. My last two positions at the National Security Agency: I was the chief of operations data science, so I ran all data science for the operations directorate, focused on how you scale AI and machine learning across the entire organization. In my second-to-last position, I was the head of data science for the Edward Snowden leaks.

In that position, I won the ODNI award for the top counterintelligence professional in 2017. I left NSA and went to a Silicon Valley startup called Databricks. Databricks, as you may have heard in the news, recently purchased MosaicML, an OpenAI competitor, for $1.3 billion. So there's this kind of race to see which of these startups is gonna be able to create these large language models, these foundational models, at scale within enterprise organizations.

So there was a race for that, and Databricks was a company focused in that area that I worked for. And I left Databricks, being in proximity to Silicon Valley, and started AI Squared. So when you think about where my research focus is at George Washington University, being the director of the program on designing trustworthy AI systems, that particular program focuses on how you provide trustworthiness in AI.

AI that is trusted, AI that you trust as an individual or within an organization. What we're teaching as best practices in the program itself is how, at every step, you provide that human in the loop: testing, evaluation, validation, and verification. And from an industry perspective, as well as in the federal government, what I've seen at NSA and at some of the Fortune 100 financial services companies that we work with is really the ability to run these very well fine-tuned observational studies where you start to insert that human in the loop so that they can provide feedback on the accuracy and the performance. But also, when I was at the NSA, analysts would not report on the results of AI and machine learning unless they had four really core attributes.

Is it actionable? Is it timely? Is it relevant? Does the underlying result of the machine learning algorithm provide additional context, so that when they're writing their reports, they can add in the why behind the KPI, disseminate that, and have full trust in their ability, if there's attribution back to the reporter, to describe exactly how they came to that particular decision? One of the challenges during my time at the National Security Agency was that 80% of the models being created from a data science perspective (think about data scientists building world-class models in a sandbox) never made it into a mission production application.

The traditional timeline for that integration is about eight months. So there's a plethora of world-class models created by these thousands of data scientists that never make it into a mission production application. And ultimately, when you think about the organization, some of the highest-level leadership on the AI side is really all about AI adoption. So how do you start to increase the adoption of AI across the organization? At NSA, there were four fundamental criteria: it had to be actionable, relevant, timely from a performance perspective, and contextualized. So what is this actually causing in an organization? It's really a balance between innovation and speed to market on one side, and, in a sense, handicapping the mission on the other.

So how do you strike that balance? At NSA, from an autonomous AI perspective, when you think about things like real-time indications and warnings, the NSA really relied on rules engines. And the main reason why is the attribution piece. When you have these large language models, for example, that are extremely generalized, it's hard to understand the context behind how they're making decisions. If you're in front of the office of general counsel at the National Security Agency, you don't wanna have to tell a story where you don't really understand who to attribute for a machine learning algorithm making a terribly wrong decision that could lead to catastrophic second-order effects.

So one of the things at the NSA is they had really fine-grained rules and guidelines for transparency, explainability, and governance when you're in a training setting, but also for continuous testing, evaluation, validation, and verification in a mission production application. So when you're leveraging AI and machine learning, and the analysts and the warfighters are actually using it, you need a feedback loop where you're continuously testing with the human in the loop: not testing from a sandbox perspective, but testing with the human in the loop to provide you with that real-time information that can help you understand how the model is performing in a live production environment.
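As a rough illustration of the pattern Benjamin describes, a model output gated on the four attributes (actionable, timely, relevant, contextualized) with reviewer verdicts logged so live performance can be tracked, here is a hypothetical sketch. All class and field names are ours, not NSA's or AI Squared's.

```python
from dataclasses import dataclass, field

@dataclass
class ModelOutput:
    prediction: str
    actionable: bool
    timely: bool
    relevant: bool
    context: str          # the "why behind the KPI" for the analyst's report

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def review(self, output: ModelOutput, analyst_agrees: bool) -> bool:
        """Gate the output on the four attributes, then record the verdict."""
        reportable = (output.actionable and output.timely and
                      output.relevant and bool(output.context))
        self.records.append((output.prediction, reportable, analyst_agrees))
        return reportable

    def live_accuracy(self) -> float:
        """Share of reportable outputs the human reviewer agreed with."""
        scored = [agree for _, reportable, agree in self.records if reportable]
        return sum(scored) / len(scored) if scored else 0.0

log = FeedbackLog()
out = ModelOutput("flag account X", actionable=True, timely=True,
                  relevant=True, context="matched pattern Y in feed Z")
print(log.review(out, analyst_agrees=True))   # True: passes all four gates
print(log.live_accuracy())                    # 1.0 with this single record
```

The point of the sketch is the placement of the human: outputs missing any of the four attributes never reach a report, and every reviewed output feeds a running accuracy signal from the live environment rather than from a sandbox.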

And then the last thing: the future is in AI augmentation. One of the challenges with the large language model construct is that right now there are a lot of applications that support providing a conversational capability within an application. But when you're in these organizations where analysts are making time-sensitive decisions associated with disseminating information to other intelligence analysts and the military warfighter, they need technologies that are integrated into their current workflow. So what the

