Day 3 | Panel | Artificial Intelligence and International Security

Hi, welcome to this morning's panel of the Pearson Global Forum on AI and international security. That's obviously an incredibly broad topic. We've got four speakers here; I'll have each of them introduce themselves in turn. They bring a range of views and viewpoints, and we're hoping to have a fairly interesting, engaging, and hopefully informative discussion. Gregory, why don't you start us off with a quick rundown of who you are and what you do?

Hi, I'm Greg Allen. I'm the director of strategy and policy at the U.S. Department of Defense's Joint Artificial Intelligence Center.

And Raluca?

Hi everyone, a pleasure to be here. My name is Raluca Csernatoni. I'm a visiting scholar at Carnegie Europe, where I focus on emerging security and defense technologies and European security and defense, and I'm also a guest professor at the Centre for Security, Diplomacy and Strategy at the Free University of Brussels (Vrije Universiteit Brussel) in Brussels, Belgium.

And Kara, what about you?

Hi everyone, I'm Kara Frederick. I'm a research fellow in technology policy at the Heritage Foundation, where I look at emerging technology policy as well as big tech policy.

And Herb?

Thanks. I'm Herb Lin. I'm at Stanford University, where I'm a senior research scholar and Hank Holland Fellow, and where I study emerging technologies and national security issues.

And I'm the moderator for today. I'm Matthew Rosenberg, a correspondent with the New York Times. I spent 15 years overseas, I've covered national security in Washington, and I've done investigative work on tech, so I'm really looking forward to this conversation. I wanted to get us started with something that was in the news this week. Nicolas Chaillan, who was, I guess, one of the chief technology officers at the Pentagon, quit last week, saying that he believed the U.S.,
and, I guess, the rest of the West by extension, was falling behind China; that China was pulling ahead in its technology race in ways that were going to be difficult, if not impossible, to overcome; and he said he did not want to sit by and watch this happen. Now, I suspect that that is not a uniform view at the Pentagon. Gregory, why don't you talk to us a little bit about that? What's the view on the inside of his critique and comments, and do you think there's any merit to it?

Sure. Well, the first thing I would note is that Nicolas Chaillan was the chief software officer for the United States Air Force, and in that role, which he held for several years, he actually accomplished quite a few really important things. Platform One is one of the more promising software development environments in the Department of Defense, so Nick was a real player in this community and did great work while he was at the Department of Defense. He has come out recently with a clarification of his remarks in the Financial Times, which he posted on his LinkedIn page, and his claim is that he was misquoted, and that the correct interpretation and framing of what he said is that we are at risk of losing our technological supremacy, that the path we are on is losing our technological supremacy. The specific aspect of the remarks that he does not subscribe to, as he was, you know, misquoted, is that there is no hope, right, that nothing can be done about the eroding technological advantage. But the actual fact of the matter, right, of the United States' position in military technology is kind of obvious. In the Department of Defense, and this is even reflected in the National Defense Strategy that came out in 2018, the U.S. military has grown accustomed over several decades to operating in environments that are largely uncontested. We can move our forces where we want them, when we want to, and we can operate
them in the ways that we want to. That is no longer the world in which we operate: now every domain is contested. Similarly, there are entire categories of technology where the United States is used to not only being the leader, but to being more or less alone in even having the capacity to operate those types of technologies, and that is simply the case in fewer and fewer technological domains of military relevance. So, for example, precision-guided munitions, which were foundational to the extraordinary success of U.S. forces in the 1991 Gulf War: the United States was where you found precision-guided munitions, and essentially nowhere else. These are munitions that can hit a target to an accuracy within, you know, one or two meters, from 300-plus miles away. We used to be alone in doing that; now China has these munitions in large quantities, and Russia has these munitions in large quantities. And that changing situation in technology writ large, nobody really argues about that. Now, there's the specific point of where we are headed in terms of artificial intelligence that I'd like to speak to, and here I believe the leadership at the most senior levels, both civilian and military, of Russia, China, the United States, and the European Union kind of all agree on one big point, which is that artificial intelligence is going to be foundational to the future of competitive military advantage in terms of technology. In fact, China's most recent defense white paper, which is sort of their equivalent of the National Defense Strategy, identified artificial intelligence technology as underpinning a military technology revolution that is really the third such revolution in the past century: the first being mechanization, right, moving from horses to tanks, etc.; the second being informatization, the adoption of computers; and this third technological revolution being called intelligentization, which I realize translates kind of awkwardly, but that term is how they view artificial intelligence. It is not merely, you know, one more technology that's interesting and important; it is foundational to the future of conventional, i.e.
non-nuclear, military advantage.

(Sorry, please. No, there's no point where I was going to jump in. Okay, great.)

And so the Department of Defense absolutely recognizes this. We have been moving in the direction of increased adoption of artificial intelligence, you know, for more than five years now. My organization, the Joint Artificial Intelligence Center, was established in 2018 to accelerate the department's adoption of AI, because we recognize this technology is so important. And so, with respect to the actual situation, I would say, you know, the department is not moving fast enough, that's true, but we are also accelerating dramatically. We now have more than 600 projects across the Department of Defense that are working on artificial intelligence. My organization, which is, you know, far from the only thing going on in the Department of Defense with respect to AI, will have a budget of more than 1.5 billion dollars over the next five years. So there is a ton going on, and there's a ton more going on than just in the past, you know, few years. So a lot has changed, a lot is continuing to change, and with respect to the, you know, competition with China, it's an incredible challenge, but not one that we shirk from. I mean, we absolutely recognize where we have to go.

And you can see that in China and powers around the world there is a full-on rush into AI and automating weapons, automating cyber defense systems. Mr. Lin, I know you've been more skeptical of this approach and think there are real limits here. Can I ask you to kind of explain your skepticism a little bit? Like, what gives you pause when you hear that these plans are afoot, or that we're going to solve our security problems through this push? What's your view of that?

Well, I think even the DoD doesn't believe that this is the only thing that's going to solve our problems. I mean, the procurement process, the weapons acquisition process in the United States, is totally screwed up; I don't think Greg would dispute that. And being able to insert a fast-moving technology into the DoD acquisition system is a highly non-trivial process, and, you know, he's been in the middle of that and I'm sure can tell us many war stories of things that should have happened but didn't happen because of a bureaucracy. So I start with that. As for the fundamental technologies involved, I worry about it in several ways. One, I very much worry about the proposition that we're on the record as having said that we are absolutely not neglecting ethical issues, safety issues, and so on when it comes to the deployment of AI. And I believe that; I believe that the American commitment to those values is very high, in fact higher than that of our adversaries, okay. And I have to wonder whether paying attention to those issues is inherently a slowdown on the pace with which we develop technologies and are
able to integrate them into military systems. For example, I can imagine an approach in which we embed our views of ethics and the laws of war and so on into our autonomous weapon systems, and they embed their understanding of the laws of war, et cetera, into their autonomous weapon systems. And I worry, and I suspect it's going to be true, that because they have a lower level of concern than we do, there will be at least some cases in which their weapons are going to be more militarily effective than ours, perhaps with higher collateral damage and so on, but they care about that sort of stuff less. So I worry that that really leaves us at a disadvantage and sets up a race to the bottom. That's one part of it. The second part of it is that I fear the investment in AI technology is mostly going toward the flashy stuff, the exciting weapons side of it, better command and control, and very little of it is going to the mundane sorts of things, the boring stuff: the logistics, the administration, and so on. The DoD is a huge administrative organization, a huge bureaucracy; it has lots of databases that don't interoperate, for example. Where's the effort in AI to make our systems more interoperable with each other, to do human resources better, to do payroll better, and so on?

And that is, in some ways, the easiest place to integrate it. You know, I spent years running our bureau in Kabul and covering the war in Iraq. I've seen the military inefficiency, the military bureaucracy, on the receiving end of it, and it is astounding to see just how much goes on there and how little sometimes is exchanged even between different commands within the same organization.

Exactly right.

I'm curious, before we shift over to the administrative stuff, on the weapons front, to throw to
Raluca. You know, I think in the West the U.S. is seen as the most militarist power, I guess, for lack of a better term, but there are a number of European countries and European companies that have been pretty far ahead in developing automated weapons, in Britain, and I believe Norway was pretty early on a fire-and-forget missile; I think there are some others. Is there a whole European view? Is it country by country? What is the view over there, and how would we best sum it up for what's largely an American audience here today?

Excellent question. Before I turn to a European view, or a European approach to emerging and disruptive technologies, especially in their applications in security and defense, I want to turn back to your original question and those declarations. It's very interesting, coming from Brussels, to look at these declarations, because they have this flavor of a zero-sum game: yet again losing the game, lagging behind, all these types of, well, metaphors I will call them, that are typically used as well when it comes to the European Union, especially in emerging and disruptive technologies. We are losing the game, we are lagging behind. But it always boils down to how we measure, for instance, leadership, how we compare and contrast what's really an edge or an advancement in this regard. So yes, looking at China and what China is doing, I always wonder how some of the reports, coming from, let's say, more private companies or even governments, measure, you know, supremacy, and whether they look more broadly at societal, political, and economic dimensions when it comes to, you know, contextualizing this zero-sum game, I would call it, and even competition. So now I'm turning to the EU, or Europe in general. Indeed, at the state level, of course, security and defense is still a competency of EU member states, and NATO is still perceived as, you know, the military institution here in Europe to provide the
deterrence and what we understand by the hard military umbrella. But when it comes to recent efforts at the European Union level, there have been, you know, advancements in thinking more strategically about this nexus between technology and security and defense. All these efforts are now summed up under this, yet again, metaphor, buzzword, however you want to call it, of strategic autonomy and technological sovereignty. And it's quite interesting, because artificial intelligence as a technology area is seen, of course, as key, as a critical technology area to provide an edge, yet again, when it comes to security and defense. So from this point of view there have been a multitude of efforts, both, let's say, at the strategic and political level of thinking. Now in Europe there is a process called the Strategic Compass, where member states are involved in thinking more geopolitically about a threat assessment landscape and arriving at the same worldview, let's say, when it comes to risks and threats, not only when it comes to Russia but also China, but also when it comes to capability development and joint capability development. And it's quite interesting to look more deeply at these capability development efforts, because there are some key areas identified, and I think these are important to highlight. Here I build a bit on what Herb mentioned: it's not only AI but a multitude of critical technology sectors and areas that especially the Commission is looking at when it comes to their dual-use potential across civil, defense, and space. So from electronics and digital, and here artificial intelligence, advanced analytics, and big data are essential, to manufacturing, such as advanced and additive manufacturing. The new buzzword here in Brussels is of course semiconductors
and microelectronics. And there are other technology areas, like space and aeronautics, and there are, yet again, a few flagship projects that have been highlighted as critical: unmanned aerial systems, where increasing automation and the potential of AI are quite significant, and other space-related technology areas, or actions at the EU level, when it comes to building this strategic autonomy and technological sovereignty. So yeah, maybe I stop here.

Yeah, I'd like to bring in Kara here, because we've talked a lot about what governments are doing, kind of the possibilities here. Kara sort of focuses on some of the more dystopian kind of things, what we think of when we think of AI and the future, and her focus, of course, is information and how governments can control that. Look, AI gives them enormous opportunities to control information. What are you seeing in this? What's going on here? What are the risks? What do we look like going forward?

Well, I think I can pick up a thread that Herb initially talked about, something I think doesn't get highlighted enough on the adversarial side. So, you know, I spent the first part of my career in the intelligence community, and there was nothing more frustrating than the lack of interoperability between systems, systems not being able to talk to each other. You know, I spent some time at Fort Meade, and we sort of had the gold standard of those systems, where we could collect most of the data and the information and basically glean value from it, because it was pretty high speed. But when I was working purely for the DoD and military intelligence, it was a hodgepodge of information, if you could get it. So, you know, not every system was created equal, and the ability to take all of these seemingly disparate silos of data and information, and to integrate those and have them talk to each other, that, in my mind, would have helped out a lot as a
targeter in the places, Matt, that you spent time in: Jalalabad, Bagram, and certain areas of Afghanistan. So interoperability is key. But what happens when the enemy, and I'll call China an enemy, what happens when they make those strides in interoperability, and what are the implications there, when you have an authoritarian government, when you have the CCP, using not just data but technology, like better algorithms (a lot of the private companies in China are leading the way here), to parse through that data? So I'm very concerned about our adversaries making those strides in interoperability that we've identified as something we need within our own systems, and letting them integrate those data sets to glean new insights on us, their own enemies. So look at the Microsoft Exchange hack, the OPM hack; add them to other hacks linked to potential Chinese state involvement, like Marriott, Anthem, Equifax; and then you layer on top of that the tech to parse through that data. Now they can integrate it, now they can draw value from it, detect patterns, identify anomalies, against the citizens of competitor nations like the United States. So previously a lot of that data would end up on the cutting room floor; now it's going to be useful to our adversaries. Then layer on top of that the legal atmosphere in China, where, as we all know, the cybersecurity law and the national intelligence laws give them a lot more leeway to do things with that data, whatever the CCP wants, basically. And then, on top of that, add U.S.
private companies' indifference, I would say, or even succor, to what China is trying to do. When I worked at Facebook, we were always told that we were a global company; we tried really hard to get into China, and it didn't work. You've all probably seen the Microsoft LinkedIn news: now they're pulling back, pulling LinkedIn out of China. But at the same time you have other brands, like Nike, whose CEO basically said we're a brand of China and for China. So, you know, we need to make sure that other companies in America that are coming around to the realization of the China challenge can sort of grow and understand that there's a different take to be had, and that China is working technologically, politically, and culturally to change us. I'll stop there with the interoperability piece, but we can talk more about the information side if anyone else is interested; we'll get there.

I mean, I am wondering, listening to everyone. We're talking about government procurement, we're talking about, you know, what American companies are doing. A lot of this sounds like a conversation we could have been having 30 years ago about a different program. And, you know, we're living in a world where it's not nuclear weapons, it's not guided missiles, which governments had to develop; it's technologies that the private sector is much better at developing. I mean, look, I've got a very weak handle on some Python and XML code, and I'm pretty sure that if you gave me an Amazon account, or the credit cards I already have, a few weeks, and a state with pretty loose gun laws, I could develop a weapon that will shoot a certain kind of thing, person, whatever. It's not that hard. And I can develop drones that will go look for things for me. They won't be very good, but this is possible; you don't need governments to do this anymore. So is there some way we should be rethinking how we view everything? You know, how we view companies, are they
American or not, and do we just say they're not and we've got to work around that? How we view the idea of procurement? You know, clearly a world where you can get DJI drones and cameras and high-end chips on Amazon is one where government procurement is not going to be the deciding factor, possibly. How do you get past these conceptual roadblocks, I guess, or speed bumps, that maybe keep us from thinking that, you know, the world has changed and that we need to adapt to it? Who wants to go first on that one?

Yeah, I think I'll take it really quickly. Everybody talks about the culture of an organization, and in 2017 we were thinking very seriously about this. I used to work at a think tank that Greg was affiliated with, the Center for a New American Security, and in 2017 we had an artificial intelligence and international security project, and we used to sort of try to evangelize to the bowels of the building, in the Pentagon, and say, you know, make software sexy again, right? Let's change the culture. The conversation has moved on a little bit; I wouldn't say that we sort of succeeded in doing that at all, and Greg, I'm sure, can attest to that. But I think there's a cultural element here too. People are stuck in their ways. Hardware used to be king, and when I say hardware I mean big, sexy machines, right? I worked with MQ-9s and all those machines in the past. But I think allowing for plug-and-play technologies, edge compute, all of those things: yes, there is a rethinking that's necessary, but I also think that you have to reconceptualize what the world will look like in a broader sense too. So by 2025, almost 5 billion people will have access to the internet, and that's tens of billions of devices already connected today. That is a huge opportunity for people to wreak havoc. It's also an opportunity for people to be connected and
increase the convenience and quality of their lives, but it's also an expansive attack surface for bad actors. And I don't think the general public is ready to really understand that privacy, convenience, security, all of those trade-offs, really have national security and geopolitical implications. Getting the public to think about TikTok in a way that is not just, oh, these are harmless little dance videos, but no, they can push propaganda, they are controlled by the parent company, ByteDance, which is headquartered in China, and that brings all of the CCP's machinations to bear on an American user base as well. Getting the public to realize that there are national security implications there, that is a battle that we are still fighting. So I do think it requires a fundamental rethinking of how we look at technology in general.

In fairness to TikTok and China, us Americans seem to do a fine job kind of shouting and propagandizing at one another, far better than most foreigners can do; we understand our weak points better than they do. Herb, Mr.
Lin, you had something you wanted to say when we talked about procurement. And, you know, we live in a new world, and clearly our government machinery, our national security machinery, has not caught up to it. What do you think we can do, what do you think needs to happen, to change that, to make that happen?

Well, the DoD acquisition system is actually very well designed and functions very well for a certain purpose, okay. It is extraordinarily good at preventing fraud and corruption, and, you know, there are all sorts of processes built in to ensure fairness and all that sort of stuff. But when you have a system that's arranged in such a way as to wring out risk, and when failure is punished, you get a slow system. The way to wring out risk in that context is to, you know, satisfy everybody and get consensus, and that takes time. The alternative is that you have to be willing to fail, and so far, entirely understandably, we're not willing to fail on big systems; we're only willing to fail on small systems, where the stakes are smallest. So it's only small stuff that can get into the system fast. The holy grail for defense contractors is getting into major systems, and those take a lot of time, because there are billions and billions and billions of dollars at stake, and Congress won't stand for failure.

Is there a specific failure, or a set of failures, you have in mind here? A specific failure or specific set of failures that you're thinking about when you talk about that? Thinking, well, here was a program that we should have had, or could have had, but it wasn't coming together fast enough, so they got rid of it?

No, I don't have one. What I'm talking about is a cultural issue. Yeah, I mean, there's an opportunity cost: there are things you don't do because you don't want to fail.

Gregory, I'm kind of
curious. You sit on the inside, so you see this every day. Where are you seeing the slowdowns? What are you seeing as the main kind of roadblocks and speed bumps on the way to getting these moving faster?

Yeah, if you'll indulge me for a moment. I realize that a lot of folks in the audience are probably international security experts and not technologists, so let me preface my remarks with just a bit of a frame about artificial intelligence in general. Artificial intelligence is an umbrella term, and almost all of the technological progress, almost all of the technological breakthroughs over the past 15 years, have been in one subfield of that umbrella term, and that subfield is machine learning. Machine learning is a specific approach to software that differs from traditional software. In traditional software, the program is a set of instructions, all of which are typed out by human hands; the instructions, or the rules of the program, are the intelligence of traditional software. Machine learning software is different in that you expose a learning algorithm to a training data set, and, I'm oversimplifying here, but it writes its own instructions; it learns the instructions based on what you've provided in the data set. And that difference in approach to software can lead to remarkable increases in performance for a subset of applications, not all applications, but a growing list of applications. Facial recognition is just an example of a type of technology where, if you try to write a facial recognition system using traditional software, the performance will be awful, and it will be incredibly difficult to do it right. But if you take a machine learning approach to creating a facial recognition system, you know, you, Matthew, as you said, could do it at home using stuff that you download off open source, just plug and play. And so that is the
radically improved performance enabled by machine learning, for applications where we have relevant training data to create those high-performance systems. Now, the international security dimensions that you were getting at, Matthew, come in when you talk about the cost and complexity of creating different types of weapon systems, right? Nuclear weapons are expensive and complicated, and this is a good thing, right? Imagine an alternative universe where nuclear weapons cost roughly what a microwave costs and were roughly as technologically easy to create; there would be a very different international security landscape. Right? If the cost and complexity of these weapon systems changes, so does that landscape. And the challenge that the United States faces is that many of the technologies that we have mastered are incredibly difficult to master. Right, if you've ever seen an aircraft carrier battle group in operations, I mean, the skill, the professionalism, in many cases, like, the raw athleticism. I mean, these people are, like, winning an Olympic gold medal every single week, right? The problem is, we have mastered these things but are no longer operating in the same international security environment. It's very different, right, to operate an aircraft carrier battle group for operations in Afghanistan versus in the East Asia region, where there are now sophisticated integrated air defenses. And the second challenge that we have is that there are now alternative means of projecting power that are enabled by general-purpose technologies such as AI, and our acquisition process was not written with those types of things in mind, right? So, I mentioned previously that in machine learning systems, the data, the quality, the quantity, the diversity of the data set, the degree to which it matches the operational context, all of that directly translates into the overall performance of the system. But the acquisition rules were not, you know, written in an era
where data was the key asset to optimize for. In many cases, as, you know, Chris Brose, one of the scholars of this area, has written, the department treated data as engine exhaust, something that just sort of happens while you're in the process of doing the things that you actually care about. So that's sort of one overarching comment. If you'll indulge me, I want to make a second one, which is that traditionally the Department of Defense has procured software in the same way that we procured hardware, right? Which is: we're going to have version one, we're going to freeze it, we're going to never change it for 10 years, right? Like, literally, with nuclear weapons, there was a famous 60 Minutes segment where they pointed out that significant parts of the United States' nuclear weapons computing architecture run off those old eight-inch floppies, right, that look like LPs, like vinyl LPs.

So it holds up surprisingly well.

Yes, which doesn't say a lot for our technological products, right. And so when you're building something like, you know, an airframe, design it once and then freeze it kind of makes a lot of sense. But when you're building software, software is never done, right? You should be constantly fielding iterative upgrades to the system, and much of the defense acquisition system was not optimized for running a program that way, for writing contracts that way. But we have made a ton of progress in this field. There are a lot of development platforms in the Department of Defense right now fielding cloud-enabled software. My organization, the Joint Artificial Intelligence Center, has created and operates what we call the Joint Common Foundation, which is a software development environment optimized for sort of the unique requirements of machine learning software, including on the data side. We also provide acquisition advisory support: pre-written contracting language that anybody can just copy-paste into their own programs. So there's plenty of good work
going on in the Department of Defense to accelerate these efforts. I'm not satisfied with how fast it's going, but that's why I work where I work; we're trying to make progress.

We're seeing that, you know, the other issue here, too, beyond the culture of logistics and procurement, is the culture of the military itself. Let's take the Navy, for instance. If you are a high-flying grad of Annapolis and you didn't go into special warfare, you're either hoping one day to command a carrier battle group, or you are maybe becoming a pilot hoping to command an air wing. You know, that's what your entire career is built around; it's what the big boys there do. Yet, you know, it's easy to conceive of a world in the very near future in which a middling, piddling cargo ship with a fleet of very small drones could easily take out a ship, or do real damage to it, where, you know, a trillion-dollar fighter jet program and an 11, what is it, 11 or 15 billion, whatever aircraft carriers cost these days, ships are not the main kind of projectors of force, or whatever we use. But I want to move on for a second to some of the ethics here, because it is a question a lot of the audience has. And look, when you mention AI and international security, everybody kind of goes to the Terminator kind of world, and people do wonder. My understanding, and correct me if I'm wrong, is that, you know, we're living in a world, at least in the U.S.,
with the pentagon and others saying there'll always be a human in the loop on decisions to employ lethal force by any weapon systems we create. Can I ask you what you think that means? Because I get a lot of different definitions. Why don't we start with you in Europe: what are the boundaries being set there by various national governments, and what exactly do they mean? Because human in the loop is an awfully vague thing. It could mean a human is involved in setting up the training set or the original algorithm that learned it, or that there is a human in the actual decision to employ lethal force. I don't know, and I've never gotten a great answer on it. Well, I'm also a technology neophyte when it comes to the technicalities of a human in the loop or out of the loop, but definitely there are more substantial ethical or normative discussions when it comes to developing human-centered and trustworthy AI technologies, again, whatever those labels mean for certain practitioners, policy makers, or private companies. So there is a difference there in terms of calibrating. What the EU has been doing for some time now is to propose a sort of regulatory framework of high-risk or risk-based uses of artificial intelligence, and to approach this in a trustworthy and responsible way, concerning first how the technology is developed but also how the technology is deployed or used. From this point of view there have been a lot of initiatives in this regard, both by the European commission at the EU level and by the European parliament, when it comes to regulating or legislating. But this is more broadly about AI in general and not necessarily about its security and defense applications, because most of the discussions, for instance of the AI regulation proposal, or the so-called AI act that the European union is currently working on, specifically exclude military
dimensions from the discussion. This is quite significant to think about, because at the end of the day, yet again, EU member states are the ones deciding their position. There are efforts to coordinate, for instance, work at the U.N. level and under the umbrella of the campaign to
stop killer robots, but this is focusing specifically on the killer robots, or lethal autonomous weapon systems, debate. What's quite interesting to consider on a broader level is developing this culture of trustworthy AI, not only when it comes to security and defense but more broadly for AI-enabled technologies, and this is quite at the forefront of policy and political thinking here in brussels especially. But when it comes to international security, or international insecurity, the topic of our discussion today, this is highly relevant, because here I think the U.S. and EU working together, jointly creating a common understanding about the development of trustworthy AI, that stream of work and effort and commitment is quite important, and the recent declarations coming out of the EU-U.S. trade and technology council, it was in Pittsburgh in September, are promising. But still there is a lot of rhetoric, and at the end of the day it boils down to national interest in pushing forward in developing dangerous technologies to have that edge in warfare, versus proposing a more, let's say, ethical or international-law-driven approach to setting red lines in the development of such technologies. I mean, the use of the word trust there, trustworthy, is interesting; it's a curious way of looking at it. Look, militaries make mistakes all the time; guided weapons are a great example. Bombs, missiles, they don't miss what they're aimed at anymore, yet they hit the wrong thing all the time, because our intelligence is bad, because we make mistakes. You know, journalists have been killed whose cameras looked like weapons. Militaries make mistakes constantly, and there's a good case to be made that a fair degree of automation will eliminate some of those mistakes. But Kara, I'm kind of interested in where you sit, because that
automation also brings new risk, especially when it comes to things like information. It gives governments, you know, you don't need a network of informers anymore; you can automate this. What are you seeing in that sphere, and is there any discussion internationally about how to handle that problem of an automated information environment? Yeah, I think technology absolutely increases the capabilities and the scale of reach for information. There are pros to this: these private companies were initially born under the auspices of democratizing information, airing marginalized perspectives, giving people a voice. Famously, the Arab spring was the beginning of people using social media across the world to topple dictatorships and authoritarians, and this would only spread and become the new normal. But we've also seen the ability to corrupt the technologies that can increase the speed and scale of information and how it travels. And again, this is separate from a military context, because there have been volumes written on that at this point, including from the center for a new American security, and even though I don't work there anymore, they do a lot on the speed of information in a warfare context. But when it comes to the impact on the body politic and geopolitics generally, I think that China is leading the way in exerting internal control over their population using these technologies. There are caricatures of their social credit system, but I think it speaks to something broader, in that they are an aggressive surveillance state at the leading edge of using these technologies for internal control and then expanding that influence outward. You look at the encroachment on freedom of expression outside authoritarian countries sort of emanating from China; it's a misaligned transfer of values, where instead of exporting our freedom of information and democratization of
information to China, they're now importing their censoriousness here. Can I jump in here for a second and just ask, I mean, there's a fair question somebody in our audience is asking, and they make a good point, which is: can the strategic rethinking of digital media and cyberspace, sorry, can you hear me clearly? Can the strategic rethinking of digital media and cyberspace coexist with a global and free internet? Can those two values coexist, or is one going to constantly override the other? I think they're always going to be in tension, and I think that we have to set out deliberately, as Raluca talked about, to establish a framework that's agile enough to contend with these issues, which are always outpacing governance. The ability for technology to be developed vastly outpaces our attempts to govern it, and our attempts to govern it with our values. So this is my sort of quick fix, and I do think there's a technical imperative here, and that is that technology can be imbued with values, and those values can consist of privacy. If we take privacy-preserving technologies and we build in recognition of privacy, there are many different ways to do it, some tougher than others, but you tailor those investments toward data encryption, you have federated models of machine learning, differential privacy, which is sort of withholding certain forms of personally identifiable information while sharing the other, less personal data, so you can detect patterns and whatnot. But I think, to avoid giving authoritarian governments, or entities that consolidate power, too much control over individual data, when you front-load those privacy protections, in my mind that's a very quick sort of technical fix. It's not, I won't say quick, I won't say that it's easy, but
we sort of have to build in recognition of how these technologies can be perverted, and imbue and enshrine those privacy protections and values within the design of these technologies. That is a very minimal starting point, but it's something that I think a lot of smart people, especially in the private sector, have to start thinking about now. It seems minimal to us, and minimal to people who follow these issues, but look, we live in a world where that's going to have to happen through regulation; when most of our big tech companies are making money off selling your data, you're going to need to regulate this. They want to be regulated, but you also need people to design and enact laws who understand the technology, which is where we've got a real weak point. But it does raise another issue, which I want to bring to Herbert, a related, adjacent issue, which is: okay, if we're living in a world where adversaries maybe aren't designing their weapons and their systems with the same ethics we are, does that put us at a strategic disadvantage? Look, if there is to be a human in the loop, and we have an adversary that says there doesn't have to be one, their decisions may be faster and ours are going to be slower; humans are not as fast. How do we balance that desire to keep our ethics and keep our norms in there when you're dealing with adversaries who maybe won't respect the same? I think the answer to that is only that you accept the hit. I don't think there's any way of resolving that tension. If you're going to build in safety checks and they don't, you're going to take more time, and assuming equal levels of technology, they're going to be faster. Now maybe the right answer to that is you don't take the hit, but you advance your technology faster than they do. Well, sure, that's a great thing to say, but keeping ahead of the other guy in
technology is a mighty difficult process, and you can't sustain it for very long, and your leads aren't very long, so you're always going to be running there. Yeah, I think about submarines. You know, we got into world war one in large part because of Germany's unrestricted submarine warfare, and we created treaties after the war to forever forbid it; it's so barbaric, we're never going to do it again, we're going to make sure nobody in the world can do it. And then pearl harbor; I think it was like six or seven hours after pearl harbor, we were ready to start unrestricted submarine warfare too. And then the Nuremberg trials did find it was a war crime. They didn't convict us of it; they convicted a German admiral, but he was given no time for that charge because we had done the same thing. So yeah, I can see your point about there being no way to resolve the tension, and I'd like to throw that to Mr. Allen: you have that situation where, if an enemy is willing to do something, we've shown before that if our backs are against the wall we'll probably do it too. So knowing that, why are you confident, or are you confident, that we wouldn't do it this time around? Well, first, you know, your point about how old some of these discussions are: it's even older. In 1899 there was an international conference that banned dropping bombs from balloons. The united states was a party to this treaty; it was a five-year moratorium, and it was not renewed. And that was because a lot of the science fiction of the late 1800s focused on the horrors of war in the air; this was before the wright brothers even flew. So that's how old these issues are. When it comes to automation in warfare, the Norden bombsight, which the united states used in world war ii, actually took control of the aircraft and steered the rudders and put it over the bombing area, and the machine also was responsible for
sending the signal to open the bomb bay doors. I mean, we've had automation in warfare, driven at the time by mechanical computers, for a really long time; heat-seeking missiles date back to the 1950s. So all of this is quite old. Now, any technology can be used ethically or unethically. How does the united states use precision-guided munitions? Well, we use them to do things like, we're going to hit the third floor on the northwest face of this building, to precisely hit exactly who we want to hit. How does Russia use precision-guided munitions? Well, Matthew, you're a reporter at the new York times; the new York times won a Pulitzer prize for reporting on Russia using precision-guided munitions to bomb four hospitals in a single day in Syria in May 2019. With precision-guided munitions. So any of these technologies can be used ethically or unethically, and I think what you've seen out of the department of defense is an absolute commitment to doing so ethically. Now, I do want to correct the record here: the united states policy is not human in the loop; this is often erroneously reported. The united states policy is department of defense directive 3000.09, which has been in effect since 2012, and the term of art used is appropriate levels of human judgment. What that reflects is that the type of automation we're going to be comfortable with is going to be application-specific and context-specific. So the close-in weapon system, which ships use to automate their defenses when there are a lot of missiles or planes attacking a ship, will go to full automatic, because one, it's in a defensive mode, and there are different interpretations of international law depending on whether you're offensive or defensive in context, and two, you're shooting down a bunch of missiles. And we've been comfortable with that level of full automation for a very long time.
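The differential-privacy idea Kara described earlier, releasing aggregate patterns while withholding individual records, can be sketched as a toy example. This is only an illustration of the general technique, the classic Laplace mechanism for a count query; the function names `laplace_noise` and `private_count` and the parameter choices are this editor's assumptions, not anything any panelist or agency actually uses.

```python
import math
import random

def laplace_noise(scale):
    # Draw one sample from a Laplace(0, scale) distribution
    # via inverse-CDF sampling of a uniform variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1 (adding or removing one
    # person changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: release "how many people are over 30" without
# revealing whether any particular individual is in that group.
ages = [23, 37, 41, 52, 29, 61]
released = private_count(ages, lambda a: a > 30, epsilon=0.5)
```

The trade-off Kara alludes to is visible in `epsilon`: a small value (strong privacy) makes the released count noisy, while a large value makes it nearly exact but leaks more about individuals.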
Now, with respect to what's going on in China and Russia: as I said, the department of defense policy is on the web; you could go read it right now. We put it out there 10 years ago, and I have heard nothing from China or Russia on this subject. And I want to point out that there are discussions going on in the united nations, but those are by and large diplomatic discussions; what I'm talking about is publications by the militaries of these countries, and those policies are not out in the open. I want to jump in here and push back on one thing, which is that, look, I've spent enough time around the U.S. military, and I've never met face to face an actor who wants to act in an unethical way. But even when we try to hit exactly the right person, there have been many instances, many instances, where we don't, whether it's an MSF hospital in northern Afghanistan or a target in Iraq filled with women and children, where our intelligence is simply bad or for some reason there's a screw-up somewhere along the way. And while intent matters in determining who might be guilty or who did wrong, at some point it doesn't matter when you're on the receiving end of it; you're still dead, or your family's still dead. And I think that's a concern a lot of people have when they hear that there is a push into automated systems, for information, for weapons, for both real-world and digital manifestations: that we're going to be faced with ethical challenges. And you know, it's interesting you brought up the treaty over bombs, because the fear there was that you would have devastation that would wipe out whole cities, and any picture of Europe or Germany in 1945 could show you that that came to pass. I mean, that's what happened. How do we ensure, how do you balance that, with technology that has the ability to do the
same, when an adversary may well be willing to use it? I mean, I just don't know the answer to that, but I've yet to hear a good one either. Not at all; I think there are kind of two different questions in that regard. One is, is the united states going to take appropriate technical safeguards and procedural safeguards to minimize the risk that there's any sort of accident? And I can just tell you the answer is absolutely yes. But the department of defense is 3.4 million human beings, including military personnel, reservist personnel, and civilian personnel, and we have a budget of 750 billion dollars. There is a lot going on, and in an organization of 3.4 million people, yes, one-in-a-million bad things will happen. That's kind of the nature of the game, and we play in an arena where there are life-and-death stakes; operating safety-critical technologies involved in the use of force is an astonishingly difficult task. And it's true that the department has made mistakes, but even in the examples you cited, the Médecins Sans Frontières bombing, there was an open investigation by the department of defense; we published that report openly, and we also published the procedural safeguards that were going to change to ensure that it didn't happen again, and for the individuals who did not take into account the existing procedural safeguards, there was direct disciplinary action. That is not the kind of thing that you see in the example I gave you of the Russian case, where they were doing the mission; there's a big difference there. Yes, and this brings us back to a broader question I'd like to throw to the whole group, which is: are these technologies that we risk unleashing in ways where there won't be a second chance? Much like if somebody dropped a very large nuclear weapon, or there was a nuclear exchange, we may not get another crack at that, even if it was
by mistake. Are any of the technologies we're talking about now big enough, broad enough, and impactful enough that there will only be one shot to use them, and that the chance to walk it back, or say oh, we need to refine our systems, won't be possible? Herbert, what do you think? You've thought about the dangers here a little bit; you think most of the danger is in falling behind, but is there danger here that we've set in motion some kind of computer virus, some kind of automated system, whether a physical weapon or a digital attack, that we won't be able to pull back or refine the second time around, that can do enough damage to make, I don't know. Well, I mean, my general view is that there's no situation that's so bad that you can't make it worse, and so the answer is that hopefully, if the first time something like this screws up, we come back in and we learn, and we at least minimize the damage going forward. So no, I don't think it's ever hopeless in the sense that you're talking about. Yeah. You know, we've got another question here which takes us in a slightly more optimistic direction, which might be a nice way to push into the final five minutes. Right now the space race has become largely a small group of billionaires trying to get up there, and now captain kirk as well, but somebody has asked: how can we leverage AI to further space exploration and research, whether in a competitive situation with adversaries or a collaborative situation? Raluca, what do you think about that? What comes to mind? Well, I'm a trekkie at heart, so to boldly go where no one has gone before; it's an interesting and aspirational desire. At the same time, in terms of advancing certain technological areas, this is a driver for sure for technological innovation, I mean space exploration and so on. However, it's quite interesting that now
space is becoming more and more, let's say, crowded. So as with emerging technologies and artificial intelligence, an international governance regime and space traffic management, as well as some rules of the game, need to be established, especially since we see a lot of private companies challenging entrenched international norms when it comes to utilizing space, for instance. And it also becomes a strategic area in terms of providing connectivity and other types of opportunities for advancement when it comes to the secure transfer of information and data. So space is yet again another strategic arena, emerging quite hotly. As with AI, and with space, I think that one of the questions, moving a little bit from the technology dimension and how to develop reliable and trustworthy technologies, is this issue of building more of an international alliance of like-minded partners that can work together to counter certain postures in the international arena and landscape, such as Russia's or China's, when it comes to indiscriminately using technologies. But in this regard what I'm seeing as well is competition when it comes to these emerging international governance regimes and norms, between the OECD, the U.N., the council of Europe,
a lot of forums where discussions can be had about AI and the development of AI in more human-centric terms, and, to piggyback on Kara's point, where indeed democratic values and human rights are at the forefront of developing these technologies as well as their use for security and defense purposes. So from this point of view I think that much more work needs to be done, not only on the technology side but also in building this international norms regime when it comes to applying emerging technologies to security and defense. I mean, Kara, do you see anything like that afoot in the more purely digital realms, when it comes to surveillance and digital security, where even if there are international norms, countries can use these tools on their own people and say look, these are our own domestic concerns, buzz off? Where do you see that going? I think that's the impetus behind a lot of it. When these countries integrate these new technologies, when Zimbabwe buys a facial recognition system from China, when Venezuela uses the ID card that is worked on by CCP-linked engineers who go over and sit with their communications platforms and tell them how to use it, I think they're very much concerned with their own internal stability and internal control, and they use it as an excuse for making these systems more pervasive and building them out in a bigger way. And I think a lot of the middle eastern countries do this; they are very heavy-handed with their surveillance systems, and they do it to control their populations. That's huge. But I also think, to pick up on the point Raluca just made, there is a sort of fracturing, different federations of people making their own rules as they go, because they're seeing a problem. We even see it in places like India and Australia, where
they have their own, you know, initially sort of free systems, and Australia is in a fight with google because there are so many internal equities they have to deal with, and India is making, or thinking about making, sort of a nationalized biometric system. So there are these constellations, really, of almost insular systems, or at least forays into more insular systems, where instead of having this broader interconnected world, nation states are closing in amongst themselves. So what is that going to look like in the next five to ten years? It's going to be, you know, this bifurcation where you have half of the world on one system and, I'm going to have to cut you off there because we're down to like 11 seconds left. I thank everybody for coming to listen, and thank all the panelists; I found the discussion fascinating, and I hope everybody at home or work, wherever you're listening, did as well. Have a good day.

2021-10-19 19:46
