Artificial Intelligence and the Rise of Digital Repression

John Dale: Welcome, all of you, to our webinar series on Science and Human Rights, co-organized by the Science and Human Rights Coalition of the American Association for the Advancement of Science and the Movement-Engaged Research Hub of the Center for Social Science Research at George Mason University. Today's topic is AI and the rise of digital repression. I'm John Dale, Director of the Movement-Engaged Research Hub and Associate Professor in the Department of Sociology and Anthropology at George Mason. I'll be your moderator.

John Dale: We are delighted to have with us today Steven Feldstein. He's a senior fellow at the Carnegie Endowment for International Peace in the Democracy, Conflict, and Governance Program. His research focuses on technology and politics, US foreign policy, international relations, and the global context for democracy and human rights.

John Dale: Steven is the author of The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance, published by Oxford University Press in 2021. It was the recipient of the 2023 Grawemeyer Award for Ideas Improving World Order. This webinar will examine the interplay between artificial intelligence technology, surveillance practices, and global governance norms. AI technology has extended the power of states to track citizens due to advances in biometric identification systems, social media monitoring, and predictive policing techniques. While entrenched autocracies are making eager use of these new capacities, more open political systems are also incorporating these tools, raising troubling questions about the impact on due process, free expression, and active citizenship. How will the growing availability of AI technologies impact democratic governance, fuel repressive practices, or undermine the rule of law?

John Dale: The answer depends on efforts by international organizations, national governments, civil society groups, and the wider global community to craft new norms around AI. What those norms look like, and how they will shape existing practice and future innovation, is hotly debated. Participants today will come away with a better understanding of the stakes involved in these conversations and the ensuing policy implications.

John Dale: We'll begin with Steven's presentation — he'll probably talk to us for about 30 to 40 minutes — and then we will begin to ask questions. I'll probably have a couple of questions, and Karthik Balaji Ramanujam will, too; he helped us set everything up for this webinar, and I'd like to thank him. Then we will open it up to the broader audience. You will see at the bottom of your screen a Q&A button.

John Dale: Use that, rather than the chat, to send us your questions. We will also be recording this webinar, and you'll find a link to it in the chat to access it later. So without further ado, I'd like to turn it over to Steven.

Steven Feldstein: Great. Well, thanks for the warm welcome, John — I really appreciate it. And thanks to George Mason and the Center for Social Science Research for hosting me for this talk.

Steven Feldstein: I'm actually excited to delve into some of these issues. I've been thinking about them for a long time, but I find the field is so dynamic, and there are so many new issues to consider, that there really is a lot on the table to think about, to parse through, to discuss. So I'm excited to lay out some ideas based on the research I did for the book, and subsequently, and on more recent developments, particularly linked to issues like generative AI — and then to engage in a dialogue in the Q&A, hear some of what's on your mind, and see what kind of ideas we can discuss. So let me start.

Steven Feldstein: I have a presentation here — let me share that, and we can walk through it. AI and the rise of digital repression. So, as I mentioned, there's been a whirlwind of new developments related to artificial intelligence this year, and it can be a little hard to make sense of it all. I think it's useful to break down more precisely what specific technologies and associated harms are of concern.

Steven Feldstein: One of the goals for this talk, at least, is to offer a framework to better understand and figure out how to wrestle with some of these issues, as well as potential responses. I want to focus my talk on three areas. I'll start by talking about AI in the digital repression context — this very much builds off my book, particularly the AI and surveillance questions it explored.

Steven Feldstein: Next, I want to talk about global governance and global norms — in other words, what are countries around the world and international organizations doing to put in place safeguards and more common understandings when it comes to the harms and objectives associated with artificial intelligence technology. This builds in part on a forthcoming paper I have in the journal Democratization that looks specifically at the European Union's AI Act and efforts to build a global normative agenda around its key components. And then, finally, given the massive amount of discussion surrounding generative AI, particularly large language models, there are new developments and societal considerations that I think are critical to all these questions about how we will govern ourselves and how technology links to oppression.

Steven Feldstein: So I want to take some time to walk through that. This part is really new in terms of material I've recently put together, so I'll be curious to get feedback on it as well.

Steven Feldstein: So let's start with the first question: AI in a digital repression context. In my book I outline a range of technologies harnessed by leaders for political purposes — everything from surveillance strategies to censorship techniques to disinformation and so forth. One of the cross-cutting technologies I profiled in the book is artificial intelligence, with a particular focus on how artificial intelligence enhances surveillance.

Steven Feldstein: So what do we know at this point? What was established? Well, we know a few things. We know that when advanced algorithms are combined with biometric systems like facial recognition, they can help identify individuals of interest in a crowd. We know that algorithms can allow operators to monitor mass amounts of information, looking for suspicious patterns or dissenting speech. And we know that, whether or not artificial intelligence systems are fully operational, there is enough of a chilling effect that they serve as a de facto suppressant of dissent — of the ability of people to freely communicate with one another — particularly in coercive environments. And so there are some associated advantages with AI systems when it comes to surveillance.
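To make the watchlist matching Feldstein describes concrete — an algorithm flagging individuals of interest in a crowd — here is a minimal, illustrative sketch in Python. It assumes precomputed face embeddings (for instance, from a face-detection and encoding pipeline run over camera frames) and a hypothetical watchlist; it is not any vendor's actual system, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.85):
    """Return (name, score) of the best watchlist match above threshold, else None.

    `watchlist` maps a person's name to a precomputed embedding vector.
    The threshold trades false positives against false negatives.
    """
    best_name, best_score = None, threshold
    for name, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name else None

# Hypothetical usage: in a real deployment the embeddings would come from a
# face-detection + encoding model applied to each camera frame.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.05, size=128)  # noisy sighting
print(match_against_watchlist(probe, watchlist))
```

The design point the sketch illustrates is that matching is probabilistic: everything hinges on the threshold, which is exactly where false accusations and missed detections are traded off.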

Steven Feldstein: They offer greater capabilities than comparable strategies, and with continued increases in computing power, these systems will only be enhanced further. AI systems are always on: unlike their physical repression counterparts of the past, they don't suffer from fatigue or inattention. And while they aren't fully autonomous yet, that is potentially on the horizon.

Steven Feldstein: AI systems also help solve principal-agent problems, in the sense that when you have large numbers of people conscripted to serve as your eyes and ears to enact surveillance objectives, that can bring problems when it comes to loyalty. With AI systems, that loyalty issue is somewhat solved: it's a machine undertaking the bidding of the operator in question. And then there's a cost-efficiency argument as well, especially when attempting to monitor mass amounts of data across different information streams in large societies.

Steven Feldstein: The Chinese authorities and their approaches are a good case in point. Having to do the equivalent with traditional physical surveillance — such as we saw in East Germany with the Stasi — would be inordinately more expensive. And so, while there is a significant investment, a capital expenditure, required to run these systems, when you look at using them at scale amongst large populations, there tends to be a cost efficiency associated with them as well.

Steven Feldstein: But they're not perfect, and they don't work well for all societies. They require cohesive command and control in the functioning of security forces; they require sufficient capacity, capabilities, and resources to leverage them appropriately. When you have all those elements in place, or close to it, they can be extremely effective, if not transformational, in giving states a very finely tuned ability to monitor and track information about citizens. But when you don't, they are a tool that can do very little — in fact, in some ways they can be a distraction from other strategies. So they're no panacea; it really does depend on context. Now, where are the hotspots where these tools are being used? Certainly China is a global AI repression leader, and techniques honed in Xinjiang, as well as in safe cities across the country, have served as a real model for other countries, other regimes, other municipalities interested in enacting similar techniques.

Steven Feldstein: Other authoritarian countries, and even weaker democracies, have experimented quite a bit with these tools, and they've become fairly ubiquitous — although, as I want to emphasize, actual effectiveness of use does vary. But we see all sorts of anecdotal evidence, at least: facial recognition used in Moscow's metro to pick up anti-war protesters; a burgeoning number of safe cities and facial recognition systems being implemented in weaker democracies like India, Pakistan, Thailand, and other places.

Steven Feldstein: And so one of the questions I really try to unpack is to look at and better understand the proliferation of AI surveillance systems around the world. What I found is that at least 97 countries around the world have been documented to deploy public AI surveillance techniques. If you look at this next slide — and this comes from a paper I published subsequent to my book, last year, for the National Endowment for Democracy, on the global struggle over AI surveillance — it breaks down the distribution of the 97 countries I mentioned in terms of who is deploying these systems. As you can see, going from sharp green on the left (liberal democracies) to dark blue (closed autocracies), there's a fairly even mix and split across regime types. In other words, while AI surveillance systems are certainly not used uniformly between democracies and autocracies, they are distributed fairly uniformly. And while there may not be the same types of abuses associated with using the systems in democracies,

Steven Feldstein: that's not to say democracies aren't still using the systems, period. If you look at that number, I think 52 of the 97 countries are classified as democratic — either electoral democracies, which are sort of weaker democracies like Brazil, Mexico, or South Africa, or liberal democracies, like your classic European liberal democracies, Australia, the United States, and so forth. And one thing I want to mention is that while we tend to focus on the most extreme use cases — and a lot of times they revolve around examples in China —

Steven Feldstein: in fact, even at home we have the presence of large amounts of AI surveillance taking place. A good example I like to use is the US border. On the US border, according to recent reports, we have at least nine Predator drones operated by Customs and Border Protection. We have 55 integrated fixed towers — 120-to-180-foot surveillance systems equipped with infrared cameras and built-in radar. They're built by a company called Anduril, which has been very active in this space, and Anduril has come up with an AI system called Lattice to autonomously identify, detect, and track objects of interest. The cameras can pan 360 degrees and can detect a human from 2.8 kilometers away. So what kind of controls do we have? What kind of oversight do we have over which individuals are tracked, or limits on the data that's collected? I'm not sure myself. And this is in a liberal democracy, where there is a fair degree of openness and accountability. So you think about that in a place where there's less of that — less of the rule of law in place — and it raises even greater concerns.

Steven Feldstein: So the point being: there are concerning uses in democracies as well as autocracies, but we do have to be careful not to overstate the case in terms of how they're used in democracies versus the very coercive uses in authoritarian states. And one of the areas that I think is particularly important is what I term global swing states, where the struggle over how these systems are used could go either way.

Steven Feldstein: As I mentioned, this slide comes from the report, and in that report I specifically looked at this issue. What I did was identify 67 digital swing states. I essentially used the Varieties of Democracy (V-Dem) index — I used their electoral democracy scores — and picked a swath of countries that scored below liberal democracies but above closed autocracies, essentially ranging from Jamaica at the high end to Malaysia at the low end. All these states combine democratic traits with autocratic attributes, and they all suffer from a mix of democratic weaknesses: concentrated power in an executive branch, lack of judicial independence, and media freedom concerns. And this right here — I like maps and charts — shows you the breakdown of the swing states. 44 of the 67 swing states I looked at have evidence of public AI surveillance,

Steven Feldstein: and those are the ones — sorry, the ones with the darker color are the ones that have public AI surveillance; the ones with the lighter color do not. That shows you the breakdown, and it really cuts across regions pretty well. Now, of all these countries, 55 are members of the Belt and Road Initiative, so there's certainly a strong push when it comes to equipment sourced from China. But it's not only China — I'll talk about that in a second.
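As a rough illustration of the selection step Feldstein describes — picking countries whose V-Dem electoral democracy scores fall between closed autocracies and liberal democracies — here is a minimal pandas sketch. The country scores, column names, and cutoff values are placeholders for illustration, not the thresholds used in the actual report.

```python
import pandas as pd

# Hypothetical extract of V-Dem data: one row per country with its
# electoral democracy ("polyarchy") score on a 0-1 scale.
vdem = pd.DataFrame({
    "country": ["Jamaica", "Malaysia", "Norway", "North Korea", "Brazil"],
    "polyarchy": [0.73, 0.36, 0.90, 0.08, 0.65],
})

# Placeholder cutoffs: above closed autocracies, below liberal democracies.
LOWER, UPPER = 0.30, 0.80

swing_states = vdem[vdem["polyarchy"].between(LOWER, UPPER)]
print(swing_states.sort_values("polyarchy", ascending=False))
# Jamaica, Brazil, and Malaysia fall in the band; Norway and North Korea do not.
```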

Steven Feldstein: And there's a varied amount of use as well. One of the places I mentioned, Pakistan, has evidence of public AI surveillance, but a lot of studies have looked more closely at how these systems are used in that country, and what they actually found is that the systems are often only marginally operational. There are a lot of problems in actually taking the data collected and doing anything actionable with it. So it's much more for show — what one contact I interviewed for my book described as security theater, or the element behind Michel Foucault's Panopticon: the perception that one is being watched can often be sufficient to do the work of surveillance, even if the reality of being watched is not completely there.

Steven Feldstein: So it's an interesting question. Now, when it comes to supplying the technology: as I mentioned, companies from China are popular suppliers, but companies based in OECD countries are also major players. In terms of understanding the balance and dynamic from China: you certainly get a lot of modeling of how these systems can be used; you get subsidized components that make the systems affordable for countries that would otherwise be reluctant or unable to purchase them; and you have robust marketing of these products — Huawei, ZTE, and others have been pretty aggressive about getting meetings with different government officials and talking about what their products can do. So there's no question that has had an influence. But I think we also have to be careful not to overstate China's influence when it comes to these systems. There are a lot of push and pull factors in play: if the push factors are the supply-side, China-related side, the pull factors also matter.

Steven Feldstein: So what are pull factors? They relate to the agency of countries themselves — the political incentives regimes have when making determinations about whether they want to acquire these systems and what they want to do with them. It's not just about China being in the game; it's also about the very specific incentives countries have about whether they want to attain these systems. Take Uganda as a good case in point. Uganda is a country where it was reported a few years ago that the regime purchased a safe city system for Kampala that was subsequently used against the opposition ahead of national elections. Now, was this simply a matter of China pushing the system? Or were there other political incentives in mind — particularly President Museveni's desire to rig elections and suppress the opposition as needed in order to guarantee another term in office?

Steven Feldstein: I would say the latter — in which case the idea that this is simply a China-oriented strategy of surveillance or authoritarianism doesn't really get you to the full picture. So what can we do about these trends? That takes us to the global governance and global norms question — the second part of this talk. There really is a challenge, and a mismatch, between innovation and regulation, and regulation has not kept pace, whether in the United States or more globally.

Steven Feldstein: I think the good news is that there are growing signs of consolidation, at least at a high level, in different fora. The OECD as well as UNESCO have done a lot in terms of releasing high-level AI principles linked to trustworthy AI, designed to shape ethics and practice. There are now at least 175 countries and firms that have produced documents listing ethical principles for AI. There's, of course, the actual enforceable, legislative, hard-law process taking place in the European Union at the moment with the AI Act,

Steven Feldstein: and we also have in the United States a growing number of documents that remain fairly abstract but are starting to push the conversation in more concrete directions, including the Blueprint for an AI Bill of Rights as well as the AI Risk Management Framework. So how do we understand the AI governance landscape? Is there a way to divide it out and think about how it is organized? I think some would argue that you are seeing the emergence of three contested areas: one centered around the United States, one around China and more authoritarian practices, and one around Europe.

Steven Feldstein: But I think this is too reductive — too simple a way of looking at things. For one, it leaves out states like India, Brazil, and Russia, which are making pretty important strides in coming up with their own mix of technologies and uses of AI systems. Now, one of the questions, given this fragmented landscape, will be whether we see signs of consolidation behind any of these blocs or groups of countries or markets where governance developments are starting to move.

Steven Feldstein: I think the EU in particular is one place that will continue to lead and shape regulation in the AI space, and there are some arguments in favor of that. The EU is a first mover when it comes to regulation, and that can bring about a sticky impact. It has a large consumer base that allows it to solidify de facto and de jure influence when it comes to AI norms. And in terms of near-peer competitors, the US remains pretty far behind — and there's also shared consensus with the US in a lot of core areas, which means there might be less incentive for the US to pivot away from at least some aspects of the AI Act toward its own set of ideas.

Steven Feldstein: However, I think there are also some questions about the degree to which the European push will really set the standard for safeguards and ethical principles when it comes to AI. For one, there are some big design and implementation concerns with the AI Act's tiered risk framework, which, as it is coming into place, sorts different types of AI technologies into first, second, and third categories of potential risk — and that can lead to future-proofing problems.

Steven Feldstein: We're already seeing debate right now about to what extent generative AI systems should be put into higher risk categories. It is very difficult, ex ante, to make determinations about how different systems will work and what kinds of harm will result, when we simply don't know exactly what will emanate from them. And while Europe's large consumer market does provide leverage — the market purchasing power logic — on the other hand, the biggest AI companies, the centers of innovation, reside elsewhere: in the United States, in China, not in Europe. If you look at the generative AI explosion now, where is Europe in that? And there are also questions about whether AI technology itself is conducive to extending what has been popularly termed the Brussels effect — linked to what Brussels was able to do on privacy standards with the GDPR — the argument being that the Brussels effect might also extend to AI governance.

Steven Feldstein: But AI is something where there's more ability to have market differentiation — essentially, to have different rules associated with products in different markets. So rather than a global Brussels effect, you may have a Brussels effect that simply stays within Brussels, with different standards applying elsewhere around the world. Ultimately, I think we'll see a little bit of both: most likely a convergence between the EU and the US on core issues, with the EU exhibiting some sway as a first mover, but significant equities remaining with the US due to its industry leadership in innovation.

Steven Feldstein: But I think the jury is out — there's a lot more left to play out on this stage, and a lot more left to debate. So that takes me to the third part: what to make of generative AI and large language models. Generative AI brings a host of unpredictable, if not troubling, implications alongside exciting possibilities, and progress has certainly outpaced expectations — and certainly outpaced regulation. So a good starting point is: what are we talking about? What do we mean by generative AI, and what makes it different? Generative AI, broadly speaking, is a set of algorithms capable of generating seemingly new, realistic content — text, images, audio — from training data.

Steven Feldstein: Most of the most powerful algorithms are built on top of foundation models, which are trained on vast quantities of unlabeled data to identify underlying patterns that can then be used for a wide range of tasks. The capabilities sort into a few categories. One is generating content and ideas: creating new, unique outputs, whether video, ads, music, or molecular compounds with different properties. A second is efficiency gains: these large language models can really accelerate the ability to fulfill manual and repetitive tasks.

Steven Feldstein: And a third is personalization: they allow for curated content, targeted ads at mass scale, and customized experiences — all of which I think can really change the nature of how information is consumed and how our economies function.
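For readers who want to see what "building on top of a foundation model" looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library with a small pretrained model. The model choice, prompt, and generation settings are arbitrary illustrations, not anything referenced in the talk.

```python
# A minimal sketch of the application layer sitting on a foundation model:
# load a pretrained generative model and prompt it for new text.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small demo model

prompt = "AI surveillance raises questions about"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

The same pattern — a thin application calling into a large pretrained model — underlies the "platform and application layer" framing Feldstein returns to in the Q&A.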

Steven Feldstein: Now, because this talk is generally focused on democratic challenges and repression concerns linked to AI, I want to talk about four areas of concern and their potential implications. As I was thinking over the last week about where we should be most worried about where generative AI models will take us, I focused in on four areas, and I'll walk through each one. Let's start with undermining democracy, broadly. There are concerns — certainly voiced by some people — that LLMs, large language models, will bring catastrophe and civilizational collapse.

Steven Feldstein: There was a recent op-ed in the Financial Times by the venture capitalist and entrepreneur Ian Hogarth, entitled "We must slow down the race to God-like AI," which gets at the idea that AI will soon take over all aspects of society and that we will lose the ability to control and make decisions about how these systems operate — that we are giving away the store, in that sense. Now, that tends to be the far-end, alarmist side, and I'm less convinced about that scenario, though I wouldn't completely dismiss it out of hand either. What I'm more concerned about is what the author Hannes Bajohr calls the democratic disaster scenario, in a recent piece that he wrote. The idea here is that the privatization of language technologies will have drastic effects on the political public sphere. You have the danger of creating new oligopolies that concentrate tech

Steven Feldstein: even more in a few private companies — we're already seeing that with the most widely used underlying models in place — and these companies will serve as gatekeepers who decide the future of opinion-forming and political deliberation. We can expect that large language models will begin to exert influence on our culture and society, and if they generate political manifestos or visions that broad groups begin to follow, without some degree of public control, this can lead to a pretty dangerous scenario. We've already seen how disastrous it's been when private companies oversee large-scale communications, and the perverse incentives that result, particularly in social media.

Steven Feldstein: So imagine the scale and power of this dynamic multiplied exponentially. That's something to watch. Second: propaganda and disinformation. It doesn't take much imagination to consider the propaganda potential of generative AI systems. One of the biggest advantages you get is the ability to cheaply generate scalable content that is indistinguishable from human-written text. There's an advantage of volume, in the sense that you can produce a nearly unlimited supply of that information — either to, one, poison the well, getting to a point where no one trusts any information because they can't tell whether it's true or false; or, two, to convince mass numbers of individuals on false grounds to follow or pursue particular political objectives, including conspiracy theories, all of which we've already seen.

Steven Feldstein: Now, you also have an advantage of scaling and micro-targeting, which seems contradictory in some ways but actually goes together quite well, in the sense that these systems can allow you to target vast numbers of people with tailored messaging linked to potentially subversive political agendas — it can be ideologically based, or it can relate to mercenary objectives. Either way, these extreme forms of personalization are right around the corner, if not happening already. And then, finally, there's the lower-barrier-to-entry problem: it is becoming increasingly cost-effective, and requires less and less capacity, to establish propaganda centers, troll farms, and other operations with nefarious intent, leveraging the power of large language models to put out this type of propaganda and disinformation.

Steven Feldstein: So that's something to watch. There's a good article in Foreign Affairs called "The Coming Age of AI-Powered Propaganda" which is worth a read. Third: cyberattacks and criminal use. The same attributes we've talked about can also be applied toward hacking, to bring about major disruptions. We already know LLMs can be manipulated through prompt engineering and other practices to generate malicious code and to get around the safeguards that are ostensibly in place.

Steven Feldstein: So what kinds of crime can we potentially foresee linked to large language models? Well, fraud and impersonation, certainly. LLMs can make these more authentic; they can happen faster; and, most importantly, they can happen at mass scale. The thing with fraud is that even if 98 percent of people see through it, applying this at volume means the 2 percent who are fooled can translate to quite a lot of people — 2 percent of a million messages is still 20,000 victims. And whether that's for profit-making motives or something far worse, either way it ought to be a concern. Large language models can also be used for code generation, for hacking and intrusion operations. Already we've seen that these models can generate code faster and in more complex ways than anything that came before.

Steven Feldstein: And the safeguards that are built in for, let's say, ChatGPT only work when ChatGPT knows what it's doing, so it can be tricked into doing the criminal side of the work, such as creating malware. And what do individuals think about that? Well, Jen Easterly, the director of CISA, when asked about how hackers could use AI tools in future attacks, said, quote: "I am really, really worried in a way that I've never been worried — and I used to deal with ISIS all the time, every day. We just don't know where this is going to end up, so I am more worried than I've been in a long, long time about the downstream potential of the use of this technology by bad actors." There's also a really interesting report that came out in March 2023 on ChatGPT by Europol, and I advise you to take a look at it if you're interested in this area.

Steven Feldstein: Finally: terrorism, war, and geopolitics. There's a question about whether large language models can help proliferate knowledge used for, let's say, weapons of mass destruction — putting together chemical weapons, for instance. It looks like, on that score, there are some factors that make this less likely, because one question we have to ask is whether information acquisition is really the biggest roadblock to WMD acquisition and use.

Steven Feldstein: Terrorist uses require lots of ingredients, and information acquisition is just one of many. On the other hand, what makes people concerned about how large language models could enable these types of activities is that there are lower barriers to entry, and terrorist acts are oftentimes acts of opportunity.

Steven Feldstein: Could large language models create live shopping lists for weapons creation — help to source difficult-to-find, disparate ingredients across hundreds of websites simultaneously? Can these models help find publicly accessible but hard-to-find information, shortening the time users need to spend on research, and compile information so it is understandable to a non-expert user intent on catastrophic or certainly risky behavior? And then there's the whole question of war planning. Already we've seen in the Ukraine war — and this is something I've written about a little bit — how AI has been essential to helping different actors organize multiple streams of data to assist with targeting,

Steven Feldstein: and you can easily see large language models further moving this process along when it comes to generating recommendations for military action, perhaps even autonomously, or interacting with other autonomous weapons. So you have an autonomous targeting system interacting with autonomous armed drones — you can only imagine the downside risks of that. So I did want to turn to an expert and ask directly: to what extent do large language models such as ChatGPT threaten democracy?

Steven Feldstein: So I asked ChatGPT, and this is what ChatGPT told me. I asked: how can we stop large language models from destroying democracy? At least in terms of the answer I got back, it disclosed that it has no intention or capability to destroy democracy or any other form of government. But — and I asked this question several different ways — one of the things it pinpointed repeatedly was the deepfakes, disinformation, and propaganda question.
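For anyone who wants to replicate that informal experiment, here is a minimal sketch using OpenAI's Python client. The model name and prompt are illustrative choices, and an API key is assumed to be set in the environment; this is a sketch of posing the question, not a claim about which interface Feldstein actually used.

```python
# Minimal sketch of posing Feldstein's question to a chat model.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "How can we stop large language models from destroying democracy?"},
    ],
)
print(response.choices[0].message.content)
```

As Feldstein notes, asking the same question several different ways (varying the prompt) is the easy part; interpreting what the answers reveal about the model's training data is the harder one.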

Steven Feldstein: So I guess, of all the different things I mentioned, perhaps the propaganda and disinformation side is the one that — at least for ChatGPT, or its creators, or the internet from which it obtained its training data — is of most concern. Something to think about on that front. The last point I want to wrap up with is that there are lots of questions about where we are in the productivity cycle when it comes to generative AI. Let's be very clear: we're just at the very beginning. And we should look at the history of other technologies as they were put into use — there's a smart op-ed written by Paul Krugman a few weeks ago on this front.

Steven Feldstein: If you look at electrification, there was certainly a lag before the productivity gains kicked in — before electricity could actually be incorporated into different manufacturing processes. Likewise with the first microprocessor, which came out in 1971.

Steven Feldstein: You actually saw declines in productivity for 20 or 30 years, until you start to see growth, with a peak around 2005. The point is that even when you have new innovations, and even in this more modern cycle of innovation, it takes time for these processes to embed themselves appropriately — for people to figure out how to use them, to properly harness them, to incorporate them into different economic processes. So while I certainly think generative AI will be disruptive and will lead to lots of change, I also think it will happen over a period of time; it won't all happen tomorrow. So we can sleep easy, at least tonight — the world won't change tomorrow, though it may change soon. Let me conclude here. I think in general we're in a period of immense change when it comes to AI, and there's lots of concern about how it will affect society, politics, and war. We're also at a fraught geopolitical juncture, where old structures and systems are breaking down and emerging orders have yet to be established. I do think, however, that the world has learned a lot about digital governance norms and safeguards, and we're less likely to be caught unaware when it comes to potential harms to the information ecosystem.

Steven Feldstein: But there's no guarantee that needed regulatory action will occur. So I think it's incumbent upon citizens to push for change, to hold political leaders accountable, and to guard against abuses that emerge. I just want to end by saying I think we're entering a very unpredictable and uncertain moment when it comes to these technologies. So let me stop here and turn it over to John. Thank you.

John Dale: Thank you, Steven. This is a fascinating topic, and your research is amazing — I'm so glad you've joined us today. I do have some questions for you, and I'd like to ask a follow-up on this last point you made about innovation — your sense that we should see increasing innovation moving forward — in contrast to the point you made at the beginning of your discussion on undermining democracy: the way the concentration of control over AI within big tech firms has also been, in some ways, stultifying innovation, with a very insular vanguard, one that's not so inclusive of more and more people innovating and using these technologies in a variety of ways. It seems like you've got a tension there that you're pointing to, and I'm interested in hearing about that. I'll stop with that — I've got a follow-up question to that, too.

Steven Feldstein: Yeah, that's a great question. I've been thinking about that, and I'm still trying to wrap my head around exactly how this all works. But if you look at the generative AI ecosystem, you can analogize it in some ways to platforms and the applications built on them. In the application layer — where you have all sorts of smaller entities that can come up with creative ideas and ways to leverage these underlying generative models — that's where you can really see innovation take off. Think about it this way: different applications built on the iOS system for iPhones allowed Uber, Airbnb, and all these other applications to bubble up. On the one hand, you could say it's still confined, because it's just iOS and Android — and I agree with that. I think it is problematic that you have only a very small handful controlling the entry point, or gatekeeping, to these foundational models. On the other hand, it's also fair to acknowledge that at a different step, at a different layer, there is plenty of room to do lots of creative and innovative things. We're already seeing that.

Steven Feldstein: So that's kind of how that tension works a bit.

John Dale: And you've been pointing to the ways in which we might be seeing efforts to regulate — you mentioned UNESCO and the OECD, and the 175 countries and firms that are starting to develop some kind of policies around regulating AI, as well as, of course, the EU AI Act and the US Blueprint for an AI Bill of Rights. A lot of these sound like governance and regulatory efforts. I know we can't depend too much on big tech corporations right now: if we've been reading the Financial Times over the last two weeks, we've seen how they're laying off their AI ethicists as they prepare to cut costs and become more competitive in getting their AI out there. You have Elon Musk, who said we should wait for six months,

John Dale: but many cynically interpret that as pointing to the fact that the AI he's been developing is six months behind the others, and maybe he's just buying himself time. But I'm wondering: do you also see a role for civil society actors in helping to regulate this emerging AI market, and its use by states like China? I guess you could call this a sort of state surveillance versus surveillance capitalism model, in terms of Shoshana Zuboff's concept — where she, too, actually looks mostly at legal regulations, and not so much at our participation in civil society, say the role universities might play, for instance.

Steven Feldstein: Yeah — look, I would say that

Steven Feldstein: legal regulations get almost nowhere without a civil society component to them, especially right now. Essentially, what we're looking at is a vacuum, right? These high-level governance frameworks from the OECD — they're not binding, they're voluntary; they don't actually mean that much if you don't apply any specificity to them. So how they actually work is really incumbent upon citizens from all corners: whether it's academics and researchers exposing different practices and thinking about ways to hold companies accountable — measuring how algorithms are working to look for bias and so forth — or citizens on the ground pushing for greater privacy for their data, or filing lawsuits over copyright infringement linked to large language models. This is a very unsettled terrain. There are very few rules at the moment, and that won't last forever. So in this definitional period, particularly when there isn't settled law in any way about what the norms of use ought to be and what the safeguards are, having citizens weigh in actively — exposing abuses, coming up with ideas for use, and so forth — is critical. So I'd say, more than anything, it is up to citizens right now, certainly until we get some catch-up on regulation, to be engaged.

John Dale: I think Karthik had a question that relates to that to some extent — folks in the Global South. So I'm going to invite him

John Dale: to ask a question real quick, and then I have one technical question for you, out of curiosity, about your AI global surveillance index, after Karthik asks his question. Karthik, did I put you on the spot?

Karthik Balaji Ramanujam: Yeah, thanks, John. Hi, Steven — great talk. So, coming from India: I come from a city called Chennai, and I think last year they started installing surveillance cameras, and I was getting a general sense that people are actually welcoming the installation of those cameras, in the general context of increased security — in a city that is probably not safe at night, I guess, with increased crime and all that. So people who are, you know, sort of on the edge, who are not directly attracting the police — they feel that it's secure. And people who are probably interacting with the police on a day-to-day basis are probably not aware

Karthik Balaji Ramanujam: of the intrusion of AI or surveillance technologies into their rights, into their lives — sort of not connecting the dots, probably. So, in that sense, how do you foresee the public working within the democracy space to challenge these intrusions on human rights and privacy rights? Is education probably the way forward? I just wanted to get your thoughts.

Steven Feldstein: Yeah, sure. Well, I think there are two ways to think about it and challenge it. The first thing that is really interesting is that if you look at the data so far — and it's incomplete —

Steven Feldstein: all the studies that have looked at crime levels and the installation of smart surveillance systems find little to no correlation, or relationship, between the two. In other words, even if people think or feel like these cameras make them safer, the evidence says they don't change crime rates. I think over time that evidence will make its way in and can provoke a sharper debate about what the point of these cameras is, if the same level of crime that existed prior to the installation of these systems still exists after they've been established.
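To illustrate the kind of check those studies run — testing whether camera deployment tracks changes in crime — here is a minimal sketch with made-up district-level numbers. The data are synthetic and purely illustrative, not drawn from any of the studies mentioned.

```python
# Illustrative check of whether camera density correlates with crime change,
# using synthetic district-level data (not real study data).
# Requires: pip install scipy
from scipy.stats import pearsonr

cameras_per_1000 = [0.2, 1.5, 3.1, 0.8, 2.4, 4.0]    # hypothetical districts
crime_change_pct = [-1.0, 0.5, -0.3, 1.2, 0.1, -0.6]  # year-over-year change

r, p_value = pearsonr(cameras_per_1000, crime_change_pct)
print(f"r = {r:.2f}, p = {p_value:.2f}")  # weak r / high p => no clear relationship
```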

Steven Feldstein: The second thing is that as citizens ask more questions, as these technologies become less novel, and as they start to realize the other uses of these systems — collecting information about and tracking where individuals are going, what they're communicating, what they're saying, all of which there are instances of taking place in India right now — I think they'll start to ask questions about the balance, about the trade-offs between the supposed public safety utility of these systems and the very real deprivations of political liberty. Public order rationales are time-honored, well-trodden arguments used to justify the installation of surveillance, whether it's these AI systems or something else, like spyware — and rarely is there as much of a connection as the authorities would have it appear.

Karthik Balaji Ramanujam: Thank you, Steven, for your response.

Steven Feldstein: Hey, John? I think you're muted, but I know you had a question.

John Dale: Alright — someone in the Q&A is interested in whether we might be able to get copies of your slides afterward, and I wanted to ask if you could reopen your third slide, on the global presence of AI-powered surveillance technologies.

Steven Feldstein: Yeah, let me do that.

John Dale: And while you're doing that, let me also make a pitch for an interactive map that you created at the Carnegie Endowment for International Peace — I think it was your AI global surveillance index. I'd come across that before I saw this slide, and I've been wondering, as I look at these maps:

John Dale: I know you focused on firms — where the firms came from — in that index, more than looking at it by country; that is, where the AI technology was exported from. But when you're considering firms in this global context, are you looking largely at where they're headquartered or where they were incorporated? Or do you consider who's on the boards of directors of these firms, which could give you many countries — maybe China and US board members in one firm? And then how do you sort this into this map?

Steven Feldstein: Yeah — well, actually, this map is slightly different. What this shows is just which countries around the world have acquired different systems. So if a country has a color on there, it essentially says there is some evidence that it has procured a system, whether from Huawei or another company, at some point. What I did more specifically within the index you're mentioning is that I then broke it down and linked the systems to companies. So when I found the incident, the event, I would name the company, and then I would also associate that with the company's country of origin. And I started to go down the road — and this is slightly older research for me — of looking at

Steven Feldstein: shareholders, the ownership of these corporations, and so forth. There have been a number of interesting papers on that question — looking at Huawei and its shareholder structure, for example. The bottom line is that it's very opaque, and it's hard to know exactly to what degree there is direct control, or members on the board of directors assigned by the CCP, though there certainly seems to be evidence that that is the case. It's also less than clear what direct subsidies particular Chinese companies receive — money directly given by the state to subsidize a range of different investments, and so forth. And that's by design: they don't want to disclose that. So I would say that remains something worth a continued push to better understand. I've looked a little at this in a related area — spyware companies — in terms of where they're incorporated

Steven Feldstein: and where some of the money is coming from. But it's tough — pretty difficult work — and it actually makes for some interesting research, if you know of people, or you yourself.

John Dale: I will talk to you about that more — we've been doing a little of this work at the Movement-Engaged Research Hub, and I know it's very time-consuming and difficult. Plus, trying to track where the data is going afterwards starts to,

John Dale: I think, reinforce your point that there's a blurry line between these autocracies and democracies, to some extent, in thinking about how these intrusive technologies in particular are to be mapped in terms of their effects. You don't just have the technology operating in your country, being used by your country in a more nefarious or invasive way — some of that data can also be extracted for uses in other countries, and then used again at election time, for instance, as we've seen within our own country. But anyway, I want to open this up a little more to some questions — I see we have quite a few now. Karthik, have you been filtering these?

Karthik Balaji Ramanujam: Yeah. So one question relates to how to combat AI technologies at an individual level, rather than looking at it from an institutional perspective, I guess.

Steven Feldstein: Yeah — I guess I'll give you an individual answer: it depends. One of the things you can do is start with privacy and ask, what are the steps? If you're worried about your data being used in ways that go beyond your consent,

Steven Feldstein: there are steps you can take to protect yourself — frankly, steps I take these days — to detoxify your digital presence online and ensure you have better control over your data. For example: the grand bargain with social media companies is that you get free access, and they get to take all your information and monetize it. So you can de-platform yourself from social media platforms, and that will go a long way. For me personally, at this point I'm really only on a handful of places, and I'm debating these days with Twitter,

Steven Feldstein: given its kind of descent, whether it's even worth staying on anymore. I still am, for now, but many others I just don't participate in at all.
