AI's Disinformation Problem | AI IRL



Can you always tell a fake image from an authentic one? Deep fakes are undoubtedly one of the most challenging risks posed by artificial intelligence. These are the images, audio recordings, and increasingly videos that look convincing enough to fool all but the sharpest among us. What's really interesting is that fake images, even the convincing ones, aren't new inventions. Think of that Loch Ness Monster picture, or one of those creepy double-exposure photos of a sad-looking Victorian ghost.

Right, but their sophistication is rapidly advancing, and many experts say that political disinformation campaigns using deep fakes are not only happening, but set to become a far bigger problem. So on this episode of AI IRL, we're gonna hear things like this: "How do you have a society, how do you have a democracy, if you can't agree on basic facts? We can disagree on what to do with the facts, but we should not disagree on actual facts." And this: "Should it be an independent body? It depends who gets to be in that independent group. Are they appropriately speaking for, what, everybody in the world?" All as we try to answer the questions: what's next for AI and disinformation, and is it about to get a lot worse? Hany Farid, thank you so much for joining us.

So you're a professor at the University of California at Berkeley, and you've been specializing in digital forensics, the analysis of deep fake detection, for example, for many years. Set the scene today: what is the state of disinformation, pre-ChatGPT and post-ChatGPT? So first, I think it's important to understand that disinformation is not fundamentally new. It is being amplified and powered by AI today, but we had disinformation well before there was AI, well before there was an internet, well before there were computers.

So this is part of a continuum, but I think it's an important part, because it is about democratizing access to technology that used to be in the hands of the few and is now in the hands of the many. And the one thing we know is that when you take a powerful tool and put it in the hands of the many, the threat vector changes. So where are we today? The average person can now go to a website and generate an image of a person who doesn't exist, and create fake profiles online to perpetuate everything from crypto scams to romance scams to all kinds of fraudulent material. And those are trivial to do.

You don't need technical skills. Go to a website, type something in, and you get an image of a fake person. We also have the ability to go to a website or a service and type in "give me an image of" and describe what you want. Joe Biden having a meeting with fill-in-the-blank, and it will generate that image for you, and the images are very, very realistic.

We have services where you can upload two minutes of audio of your voice or my voice, clone that voice, and then type out anything you want me to say and have me say it. Cloning voices: solved problem. We also have services where we can now fake videos. So, for example, you could take this video that you're recording right now with that camera and replace my face with another face. We're not going to.

Well, first of all, the viewers have gotta start wondering how they know it's me and not one of my grad students because I was too lazy to come across the Bay in the morning, but we can do that. So on the one hand, more people have access to these tools for fun purposes, like pulling a practical joke on Nate, but they're also used for more nefarious ends: to stoke unrest, create non-consensual sexual imagery, commit fraud, or push disinformation campaigns. And I do think you make a good point, though: one upside of social media is that when this stuff gets out there, there are a lot of eyeballs on it. But here's the thing you have to understand about social media. The half-life of a social media post is measured in minutes, which means half of its views happen within the first one or two minutes. By the time the fact checkers come around and correct the record, the vast majority of people have moved on. That's number one.

But number two, and this is the important part: social media and human beings have made it so that even when we come in and fact check and say, "Nope, this is fake," people say, "I don't care what you say. If this conforms to my worldview, I will believe it; if it does not, I will not believe it."

And why is that? Why are we living in a world where reality seems so hard to grip? It's because our politicians, our media outlets, and the internet have stoked distrust of governments, media, and experts. You're the media and I'm the expert. And what is really disconcerting to me is that we've created these filter bubbles, these alternate realities. And here's the question you've gotta ask yourself: how do you have a society, how do you have a democracy, if you can't agree on basic facts? We can disagree on what to do with the facts, but we should not disagree on the actual facts of what happened and where it happened. Are there some recent examples of viral deep fakes that have given you real cause for alarm? There are two I can think of in recent memory, and they've alarmed me for different reasons.

So one was the Pope in the puffy coat. And if you saw this image, it was just brilliant. I was fooled. Were you really? That's exactly why I was worried: because journalists fell for it.

And journalists who are smart and who are savvy and who are fundamentally skeptical about things, they fell for it. That's right, so the photo is completely innocuous. It was fine.

I mean, I don't care about the photo, but a lot of journalists fell for it. And that's when I thought, "Ah, we have crossed a threshold." There's a tipping point here where the images have become so good that really smart people, who are by design critical, are falling for them. That was number one. But here's the other one that really scared me.

It was a fake image of the Pentagon being bombed. And it wasn't a particularly good image, by the way, but it got posted to Twitter on a verified account. Thank you, Elon Musk. It went viral because it looked like it came from a news organization; it got retweeted and reshared. The stock market dropped half a trillion dollars in two minutes over a crappy fake image. How do you square the fact that we can move so quickly on the technology that creates deep fakes, but we're so slow to create the technology to combat it?

Who's leading which camp, and should it be that way? You can make a lot more money from the creation side. Follow the money: defense doesn't pay, offense does. Creating fake stuff, you can make a lot of money; it's creative, you can make things more efficient. There's not a lot of money in creating defenses.

Think about the people creating malware and spam. Here's a scary factoid for you: VC funds invested $180 million in deep fake detection technology last year, up from only $1 million in 2017. This year, it's at about $50 million.

How much is it gonna take? And by the way, compare that to how many billions are going into the generative side of things, from OpenAI's ChatGPT to all these other companies. So yeah, I think that's right. I would say it's about one to two orders of magnitude of difference. And by the way, you raise a really good point: follow the money. Where the VCs put money is where people are going to invest their efforts.

And so the VCs are looking at this and saying, "Well, we can make money here; we don't think we can make money there." I think they're wrong, by the way. I do think we are quickly entering an age where people are getting frustrated at not being able to believe what they see. And I think it's going to play out the way privacy became an issue.

Pre-Cambridge Analytica, we thought, "Ah, whatever, privacy." Then Cambridge Analytica hit and we thought, "Oh man, this is bad," and we started to see a real shift in the way we as consumers, but also the corporations, thought about privacy. I think we're gonna see the same thing about trust and integrity and provenance and authentication, but it's going to take a little while. So, Nate, it's not just about detecting what is fake, but also believing what's actually real. This is called the liar's dividend.

When you enter a world where anything can be fake, anything you read, hear, or see, well, then you can deny reality. Let me give you an example of this. In 2015, 2016 rather, then-candidate Trump gets caught on the Access Hollywood tape saying some pretty bad things about women. And what happened is he apologized; he said, "I'm sorry, it was not appropriate," and tried to excuse it away.

Now, fast forward two years. Deep fakes are on the rise, generative AI is on the rise, and he's asked about that tape and he says, "It's fake." Then why did you apologize in the first place? Doesn't matter. It's easy to deny reality when anything can be fake, and arguably, I think that is the bigger threat here.

I'm worried about non-consensual sexual imagery. I'm worried about fraud. I'm worried about disinformation campaigns.

Blackmail? I'm worried about blackmail. I'm worried about the phone calls people are starting to get saying... Robocalls. Robocalls in your loved ones' voices. I'm worried about all of those things, but all of that is nothing compared to not being able to hold people responsible for their actions. Police violence? Fake.

Human rights violations? Fake. Politicians doing something illegal? Fake. Where are we if anybody can deny anything? Now we all become tribal: we believe what we believe, and there's no external information that helps us reason about complexity.

So how do we fix that? Because a huge part of the problem has nothing to do with AI; it has to do with people, and arguably mass media and the internet and everything else. So can you solve it? There are four solution bins, and I think we need all of them. One is technology, so that when I take out my phone or a camera and record police violence or human rights violations or a politician speaking, the device should authenticate where I am, when I was there, and what I recorded. So, GPS? GPS, date and time, and potentially my identity, if I want to give up my personal information.
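The device-side authentication he describes can be sketched in a few lines. This is a hypothetical illustration, not any real camera's implementation, written in the spirit of content-provenance standards like C2PA: the device binds a hash of the image bytes to a timestamp and location in a manifest and signs it (an HMAC key stands in here for hardware-backed signing), so any later alteration of the pixels or the metadata is detectable. The names `attest` and `verify` are made up for this sketch.

```python
import hashlib
import hmac
import json

# Assumption for illustration: a secret held in device hardware.
DEVICE_KEY = b"secret-key-held-in-device-hardware"

def attest(image_bytes, timestamp, gps):
    """Device side: build a manifest binding the image to time and
    place, and sign it so tampering can be detected later."""
    manifest = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "gps": list(gps),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, sig

def verify(image_bytes, manifest, sig):
    """Verifier side: check the pixels still match the manifest,
    then check the manifest's signature."""
    if hashlib.sha256(image_bytes).hexdigest() != manifest["sha256"]:
        return False  # pixels were altered after capture
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

photo = b"raw image bytes"
manifest, sig = attest(photo, "2023-10-10T11:28:00Z", (37.87, -122.26))
print(verify(photo, manifest, sig))         # untouched: verifies
print(verify(photo + b"!", manifest, sig))  # tampered: fails
```

In a real deployment the symmetric HMAC would be an asymmetric signature, so anyone can verify without holding the device's secret; the shape of the check is the same.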

They're already adding that into the EXIF data of a JPEG, right? Correct. Here's the second bin, which is related. If you're OpenAI or Midjourney or Stable Diffusion or any of these companies generating images, audio, or video from text, you should be doing the same thing: watermarking and fingerprinting every single piece of content, so that when somebody uploads an image of the Pentagon being bombed to Twitter, the platform knows instantaneously that it is synthetically generated. That's not to say it should be banned. There's nothing wrong with it, but it should be labeled. Two more things.
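One way to picture the fingerprinting half of this is a shared registry: the generator computes a perceptual fingerprint of each output and registers it, and a platform checks uploads against the registry before labeling. The toy sketch below (an average-hash over a flat grayscale grid; `register` and `is_synthetic` are hypothetical names, not any vendor's actual scheme) shows why the fingerprint must be perceptual rather than cryptographic: it has to survive recompression and other small changes.

```python
# Toy fingerprint registry shared between generator and platform.
registry = set()

def fingerprint(pixels):
    """Average-hash: one bit per pixel, set if the pixel is above the
    image mean. Small global shifts leave most bits unchanged."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def register(pixels):
    """Called by the generator for every image it produces."""
    registry.add(fingerprint(pixels))

def is_synthetic(pixels, max_distance=2):
    """Called by the platform on upload: match within a small
    Hamming distance to tolerate recompression-style noise."""
    fp = fingerprint(pixels)
    return any(sum(a != b for a, b in zip(fp, other)) <= max_distance
               for other in registry)

# A "generated image" as a flat 4x4 grayscale grid.
generated = [10, 200, 30, 180, 90, 15, 220, 40,
             60, 170, 25, 210, 140, 35, 190, 50]
register(generated)

print(is_synthetic([p + 1 for p in generated]))  # True: survives mild noise
print(is_synthetic([5] * 16))                    # False: unrelated image
```

A cryptographic hash would break on a single changed pixel; a perceptual hash only flips a few bits under small perturbations, which is what makes the distance-threshold lookup workable. Production systems use far more robust perceptual hashes plus in-pixel watermarks.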

One is sort of my bread and butter, which is that, as you said, these technologies are not perfect. They'll take a while to be deployed, and they will get hacked. So you need the passive techniques: give me an image, and how do I analyze it to determine whether it's real or not? And then the fourth and final one, and I think we need all of these, is regulatory pressure. We need the government to say there's a carrot if you do this well and a stick if you do not. That is the government's role.

Think about product safety: cars, pharmaceuticals, transportation, the food we ingest. The government says these things have to be safe. Why can't we treat the internet like this? Why don't we treat information the same way? Why don't I put a nutrition label on a piece of information the way there's a nutrition label on every single thing I buy from a store?

Rumman Chowdhury, thank you so much for joining us. Thank you so much for having me.

So you run a nonprofit called Humane Intelligence, and you also run a consulting firm called Parity, where you help companies like DeepMind and Meta, and even governments, with responsible AI practices and how to mitigate algorithmic bias. But I kind of wanna take it a few years back to when you saw this play out on the inside, when you were director of machine learning ethics, transparency and accountability at Twitter. I would love to get into what you learned about the state of misinformation and disinformation while you were there.

I think the biggest thing to understand is that it is a near-insurmountable problem. The volume, the number of people inadvertently and purposely sharing mis- and disinformation online, makes it a near-impossible task to tackle. Could you maybe define the difference between misinformation and disinformation for those who don't know? So on one end you have misinformation, which is almost like misdirection. It often has a kernel of truth, but maybe it's shifting people to look at one thing and saying, don't look at this other thing. Disinformation is purely falsified information, something that's deeply untrue.

In general, most of the information online that is false or fake has traditionally had some sort of seed in reality. My concern is that with generative AI, we can make much more convincing pure deep fakes, and anybody can make them, so we will see an increase in the kind of true deepfake technologies that spread completely falsified information. The other thing to think about is the actors online who are sharing it.

So of course you have bots, malicious actors, and people who are purposely sharing it, but actually a very big percentage of the fake information online is shared by people who didn't know any better. And we've talked about people. I mean, I suppose it's a bit controversial, but is part of the issue that people are the problem? I know it's kind of a contrary opinion, but we talk about blaming tech companies for making and spreading this stuff; should we maybe be doing more to educate people into not believing it in the first place? I think that is certainly going to be a step we'll have to take in a generative AI world.

I think we are sufficiently trained not to believe everything we read. You know the phrase: don't believe everything you read on the internet. What we are not sufficiently trained to do is look at images and video, and listen to audio, with that same level of critical thinking. In general, if we see a video that looks real, for example a bomb hitting the Pentagon, most of us will believe it and won't question it. If we saw a post where someone said, "Hey, a bomb just hit the Pentagon," we would actually be more likely to be skeptical, because we have been trained more on text than on video and images. So I think you're correct; there is a level of education that people need.

So talking about some of these companies, and obviously you've worked at one of them, but there are many, so we'll talk in general terms here: do you think they can actually control this? There are ways they could take action. For example, DeepMind has introduced something called SynthID, a technical watermark embedded in a photo, and there's a way to analyze the photo, no matter how it's cropped, changed, altered, or filtered, and tell whether it's a deepfake or not. Even if you take a photo of the photo? Oh, I'm not sure about a photo of the photo.

Because this is one of the issues, right? If you take a photo of a photo, it's a brand new photo. So you need to be able to analyze something that survives the route through compression and transmission and everything else, plus the degradation you get from compression. That's a great point, because often we think of the very high-tech ways people will break things, but to your point, most of the best hacks are incredibly low-tech. So I think you're spot on; I have no idea whether SynthID works if you take a photo of the photo. So when it comes to flagging things, and really content moderation at its core, that's at the core of controlling what goes online and what doesn't.

Should there be an external government agency or an independent body that oversees this, rather than it being controlled internally at the company? And this is the ultimate discussion. The phrase I've been using for all of this is: the question to answer is, who gets to be the arbiter of truth? And it's such a difficult one to answer. Should it be an independent body? It depends who gets to be in that independent group. Are they appropriately speaking for, what, everybody in the world? We've not yet achieved that: a fully representative body that represents everybody in the world and can confidently speak on every issue. I mean, technically that's what we have elected officials for, to some degree.

In the U.S., sure, but I think right now we are having a bit of a crisis of democracy, where people feel like they're not being represented in their own governments. The other part I'll add is: do we want this content moderation to inherently be political? Because that's what will happen, right? People will be elected; two to four years later, new people will be elected.

Do we really want the decisions on what content is and isn't allowed online to be purely political in nature? I know there are accusations that these companies make political decisions; frankly, companies make capitalistic decisions. They don't actually make purely political decisions. If we move it into the hands of the government, we are literally making them purely political decisions.

Let's lighten the mood a little bit and talk about putting executives in jail, which is a topic I love talking about, but it's really important where I'm from in Britain, because there is a bill in the works that, amongst many other things, talks about essentially making executives at companies personally liable if their company helps disseminate what it calls online harms. It's still all up in the air at the time we speak, but I wonder: do you think threatening executives with potentially being locked up, even if it's an extreme case, is helpful? Could it be helpful? I don't know. I think it is very...

What we want as human beings is to have a single person to blame. We want there to be a good guy and a bad guy, frankly. Just in general, as human beings, we love hero culture, and hero culture means we have villain culture. Frankly, a lot of these people are also larger-than-life and comically villainous people.

That being said, to what extent is throwing Mark Zuckerberg in jail going to fix the problems of social media at large? Probably not very much because Facebook or Meta or Google, they're a giant machine. They're massively global companies. There are decisions being made by many, many people. It's part of a company culture.

Putting one person in prison may be theatrically pleasing for some folks, but I don't think it would actually solve the big problem. It's the nuclear deterrent. It's the equivalent of a nuclear deterrent, right? It's there as a potential threat that forces a different general way of behaving, even if the launch button is never pressed or the prison sentence never actually happens. It's interesting.

I mean, it's interesting to think about it game-theoretically. I can't help but be a quantitative social scientist sometimes, and I like to think in terms of people's incentives. So the question is: is a social media company incentivized to spread mis- and disinformation? Actually, no; they don't want that, because it makes them look bad. Let me push back on that a little, because I do think the widely held belief, shared by our previous guest, is that social media companies' business models are at the core of why disinformation proliferates, because they put more value on content that's engaging, even if it's not true. So what I'd say is that the cost of identifying misinformation, as I mentioned, is massively expensive.

Balancing that with making sure people have an experience they want is a very difficult thing to do. So is an engagement-based model leading to problems of radicalization and misinformation? Yes. Are social media companies actively trying to sow misinformation? No, they're not. And that's what I'm getting at, versus, let's say, somebody who's actively trying to do a bad thing. So for your prison example: if the CEOs of these companies were actively trying to spread misinformation, then I'd say, yeah, you know what, throw them in jail, they're bad people. On the other end, it's recognizing that this is an incredibly difficult problem, and asking what level of transparency and access to these models, or understanding of their impact, we may need that is different from what we have today. So I look at the Digital Services Act in the EU, which is really fascinating.

Their audit methodology is based on having these companies prove to an independent technical auditor that their models aren't, for example, influencing free and fair elections or violating fundamental human rights online. So I'm curious about your former team at Twitter, now known as X: how do they feel about going back into content moderation work at other tech companies that are now jumping into AI, where this role is also going to be critical? Yeah, so to start, Twitter and X are two totally different companies in my head. Companies are driven by the culture of the people who are there; they're two totally different places. But I take your point.

Well, I think the people who are drawn to this work, whether it's trust and safety or ethical and responsible AI, which is the work my team did, are drawn to the mission and the meaning of the work. We understand it's going to be hard. We understand that sometimes we can't solve problems, and frankly we can't save the world, but I think there's a fundamental good feeling we have about at least trying our best and being part of the solution. So I've found that every single one of them wants to continue this work. What's something that no one's talking about that you think they should be, as far as AI and ethics and your expertise go? We touched on this a little earlier: the thing I worry about is that there's such a rush to governance.

There's such a rush to say, "Let's pass laws. Let's regulate companies. Let's slap 'em on the wrist and fine them, throw 'em in prison." I think we need to take a step back and think about what we are doing with these new laws, which may create new paradigms of government overreach into private companies, in an almost abstract sense, right? There's a reason why government and private companies in countries like the United States tend to be very separate. We may not always like the outcomes of that, but let's be clear: the extreme opposite is a purely state-run economy. So we want to say things like, for example, the government should compel tech companies in the U.S. not to sell to enemy states.

Do we really want the government to be able to tell for-profit private companies where they should or shouldn't be selling their goods and products? I mean, we did sort of see that with the previous U.S. administration and Huawei, basically saying you can't buy this company's products and you can't sell your components to them. That resulted in the company getting a lot better at developing its own stuff, and there's a risk that it may have backfired, in some people's eyes. It's sort of tangential to the conversation about disinformation, but it all supports your argument.

I mean, it all ties into: let's be careful about where we're shifting this power. Right now, I think a lot of people feel that tech companies have an outsized amount of power, and that's a fair thing to feel. But I don't wanna rush into saying, well, then the government should clearly be taking some of that power away. What about developing a world of independent auditors and independent actors? That's what we do with my nonprofit, Humane Intelligence: we're trying to build an independent community of people who critique technology and actually help drive and shift it towards benefiting human beings.

So the term fiduciary duty came to mean something to a lot of us at Twitter last year, and I carry that lesson forward when I think about companies that want to do things for good and what that really means.

2023-10-10 11:28

