TecHype: Debunking Deepfakes: Unmasking Digital Deceptions with Hany Farid

[Music] Welcome to TecHype, a series that debunks misunderstandings around emerging technologies, provides nuanced insight into the real benefits and risks, and cuts through the hype to identify effective technical and policy strategies. I'm your host, Brandie Nonnecke. Each episode in this series focuses on a hyped technology. In this episode, we're debunking deepfakes.

You've likely seen a few deepfakes when scrolling through social media or in a major film. One of my personal favorites is Dwayne "The Rock" Johnson superimposed onto Dora the Explorer. Seriously, Google it. You will laugh for days. Now, while deepfakes can be a creative and often hilarious outlet, they can also be extremely dangerous.

I'm going to share a clip with you right now of Senator Ted Cruz, Elon Musk, and President Biden discussing the risks of deepfakes. A deepfake impersonating a politician can undermine a democratic election.

Impersonating a CEO can tank a stock price. Impersonating a head of state, and war could break out. You just heard Senator Ted Cruz, Elon Musk, and President Biden. Or did you? You might have been able to tell from their voices or the way their lips moved that something just wasn't quite right, that there was a tell that let you know it was faked. But deepfake technology is becoming increasingly accessible and much more advanced. I created these audio deepfakes in about a minute, using an app I found online. So what can be done to better ensure we realize the benefits of this transformative technology while mitigating its risks? Today, I'm joined by Hany Farid, a professor at the University of California, Berkeley, with a joint appointment in Electrical Engineering and Computer Sciences and the School of Information.

Hany specializes in the analysis of digital images and the detection of digitally manipulated images such as deepfakes. Hany, thank you so much for joining me today for this episode of TecHype. It's good to be here, Brandie. Thank you so much. I think it's really important that we start with a definition. What are deepfakes? So "deepfake" is a very broad term.

It's an umbrella term that refers to content, text, audio, image, or video, that has been synthesized by a machine learning algorithm. That's the umbrella. Now, within that, there are lots of different things that can happen. Yours was a particular type of audio and video deepfake. But the core idea is that we have taken what used to be in the hands of manual operators, somebody sitting in Photoshop or After Effects, or a Hollywood studio manipulating images and videos and audio, and we've automated it.

We have handed that over to a machine learning algorithm so that it can do it automatically. And with that comes something really interesting, which is the democratization of access: a technology that used to be in the hands of the few is now in the hands of the many. So when we talk about this, what's new is not so much that we can manipulate images and video. We've always been able to do that.

But now it's not just one or two people, it's millions of people like you. You went and downloaded an app, and in a minute created an audio deepfake. That democratization really is what's new here. And quickly, to follow up on that democratization:

I've often heard what I created be referred to as a cheap fake. Yeah. So there are two terms going around: cheap fake and deepfake. Cheap fake has historically meant things done with simple tools. Here's my favorite example of a cheap fake: somebody created a video of Nancy Pelosi that made it sound like she was drunk, and all they did was slow down the audio.

You could do that to this recording: slow it down when you play it back, and we will sound like we'd had a couple of drinks before our interview. That was a cheap fake. A deepfake, by contrast, typically refers to the use of machine learning or artificial intelligence to generate the content. There's nothing fundamentally profound about that distinction. I mean, you don't care how the fake is made.
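(As an aside, it is worth seeing how trivial that kind of cheap fake is to produce. Below is a minimal sketch in Python of the general slowdown trick; the file names are hypothetical, and this illustrates the technique in general, not the exact edit used in the Pelosi video.)

```python
import soundfile as sf  # pip install soundfile

# Read the original recording (hypothetical file name).
audio, rate = sf.read("interview.wav")

# Declare a lower sample rate when writing, so players render the
# same samples ~25% slower; pitch drops along with the tempo,
# producing the slurred, "drunk"-sounding effect described above.
sf.write("interview_slowed.wav", audio, int(rate * 0.75))
```

No machine learning involved, which is exactly why it's called a cheap fake.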

You care that lots of people can do it, and what they can do with it. Yeah. So it sounds like there are a lot of misunderstandings around how deepfakes are made and how widely available they are. And one of the main goals of TecHype is to debunk misunderstandings around emerging tech so that we can fully understand the real benefits and risks.

What do you think are the three most common misunderstandings about deepfakes? Yeah, I think one of the biggest misunderstandings is that anything can be deepfaked: a person running through the streets screaming, a person being brutalized by a police officer. The idea is that because we have technology that can create the types of deepfakes you've generated, it can create anything, absolutely anything. And that is simply untrue. There are limits to what is possible today, for the most part. Where deepfakes excel right now is the talking head, like this, like this right here.

That is actually pretty good. The voice is getting better; getting my mouth and my body to move is getting better. But me running down the street, full-body animation, we're not there yet. We will get there, but we're not there yet. So I think there are still some limits to what is possible with current state-of-the-art deepfake technology.

How close are we, though, to that future scenario of full-body deepfakes? That's the right question. I don't know. But here's what I can tell you: look at the trend. I've been in this business for a long time, and we used to measure advances in the technology in years.

Now we are measuring them in weeks and months. Deepfakes splashed onto the scene about five years ago, and we have seen a phenomenal evolution in the technology. Really, every few months you see advances. We're just getting our heads around one technology, and GPT shows up and blows up the world.

So at the rate it is going, I think you can measure full-blown animation of bodies, animals, objects, and complex scenes in years, not decades. So, are there any other misunderstandings? We've discussed this first one, about whether anything can be convincingly deepfaked. What are two other misunderstandings? Yeah, here's another one. And I know where this one comes from.

In the early days of deepfakes, the consensus was that this was a risk only to famous people: politicians, actors, actresses, people hosting television shows. That used to be true. And the reason it was true was that in order to create deepfakes four or five years ago, you needed a lot of content. You needed hours and hours of audio. Joe Rogan, for example, was relatively easy to deepfake.

President Obama was relatively easy to deepfake because there were hours and hours of video. And the sense was: me, you, the average person, we're not really at risk. That has changed. So, for example, look at the space of nonconsensual sexual imagery, which is probably one of the most disturbing trends in the use of deepfake technology, where people take the likeness of, primarily, women, insert it into sexually explicit material, and then carpet bomb the Internet. That is affecting not just politicians and actors but journalists and human rights activists and lawyers and people who attract unwanted attention.

Because even now, with a relatively minimal digital footprint, I can go search your name online and I'll find a few dozen photos. And that's enough now to start to create a reasonably convincing deepfake. That's because the technology has gone from needing hours of video and thousands of images to less and less and less. And now we are all at some risk, because we all have an online digital presence. Now, this is extremely troubling, because we all have images online.

What's another misunderstanding? Yeah. Here's the one that I think is, in some ways, most interesting. We tend to focus on the threats of deepfakes in terms of what we can create. We can create a video of the president of the United States saying something, or a CEO saying our profits are down 20 percent, and then watch the market move to the tune of billions of dollars, or nonconsensual sexual imagery. But there's another threat here which often goes unnoticed: when we enter a world where anything can be manipulated, everybody has plausible deniability to say that anything is fake.

So now a video of police violence, a human rights violation, a politician saying something inappropriate or illegal: it's fake. I have plausible deniability. I don't have to own up to anything. And in fact, you saw that play out. Then-candidate Trump in 2016 got caught on the Access Hollywood tape saying some awful things about women. And he apologized. He said, I am sorry. That was not appropriate.

Fast forward two years. He's now president of the United States. Deepfakes have entered the lexicon. He's asked about the tape. It's fake.

Done. I wash my hands of it. Prove it's not. And here's the thing: I don't think that's a plausible argument, because that tape was released well before the deepfake technologies emerged.

But today, if there is an audio recording of President Biden saying something inappropriate, he has a reasonable argument: how do you know that's real? This is the so-called Liar's Dividend: when anything can be manipulated, images, audio, and video, then nothing has to be real. And I've got to ask you, how do you reason about the world? How do we have a democracy and a society when we can't believe anything we see or hear or read online? These are potentially existential threats to democracies and society. Yes. This is very, very problematic, because essentially, if there is no definitive truth, then everything can be a lie, and we can all revert back to our own closely held beliefs. We can just believe what we want to believe.

And the facts are not my problem anymore. And we're already sort of in that world, with the mess that is the online information ecosystem and the hyper-polarization of social media. We're already there, and deepfakes have the potential to push us beyond that boundary. Yeah. And I created that very simple deepfake in a matter of hours and was able to post it on YouTube, which could be extremely dangerous if somebody were to leverage it for a political purpose.

So we've talked about three misunderstandings. Let's talk a little bit about the benefits. We've talked a lot about risks, but I think we should always be discussing benefits at the same time as risks, so that we can better understand what we need to do to maximize the benefits and mitigate the risks. So what do you think are the three real benefits and risks? Well, the question you want to ask, given the conversation we've had up until now, is why are people developing this technology? It seems completely bonkers.

And if you look at where this technology is being developed, it's primarily coming from the academic community, and it's primarily coming from people in the computer graphics and computer vision world. These academic communities have for years, for decades, developed technologies for special effects in the movie industry. Right. And you said this: The Rock in Dora the Explorer, by the way.

I agree with you. Fantastic. And you should go Google it. It's great. So for these technologies, the primary application and the primary driving motivation is to develop special effects and computer graphics for the movie industry, and arguably they are going to revolutionize the way movies are made. So that's number one.

Number two is that artists were very quick to use this technology for very creative purposes, and there are lots of fun things you can do with these technologies.

It's also great for political satire. We should be able to make fun of our politicians. That's really, really important. There's another really interesting application, one that is not without controversy, but I think it is interesting: people have used deepfake audio and video in campaigns to further political causes.

So, for example, there was a young man who was killed in the Parkland shooting. His parents created a deepfake of him, bringing him virtually back from the dead, asking, begging, pleading for movement on gun violence in this country. And it was very powerful, because you saw this young man who was gunned down coming back to plead his case. Some people thought it went too far. I think that's a reasonable argument to have. But from the perspective of a mostly positive application, you can see why these could be very powerful. You had a fourth one? Sure.

Yes, because this is also, I think, going to raise some really interesting ethical questions. Some people are starting to use deepfakes to bring back their loved ones from the dead. I was going to ask you about the posthumous question. Yeah.

And what are the rights of those individuals? Right, their likeness. I think what's going to be really interesting is this: imagine a world where somebody can take the body of writings you've done, interviews you've given, conversations you've had, and use an AI to create an interactive chatbot, a ChatGPT, and then animate it with deepfakes, so that you can have day-to-day conversations with that person, in the morning, in the afternoon, in the evening, whenever you want. It's essentially a digital version of you that is interactive.

Is that good? Is it bad? What are your rights as the person who has passed away? This is coming, by the way, but I think we have to think very carefully about whether it is a good or a bad thing. Yeah, I mean, it's already happening in the motion picture industry, where an actor may pass away and they bring them back, as in Star Wars. That's exactly right. Exactly. So these are the benefits: special effects, creativity, speech and parody of our elected officials, and advocacy campaigns. Yeah. Let's go back to those risks, though, because those are really important to focus on. We've enumerated a few.

Let me go through them again, and we'll add a few more. One: nonconsensual sexual imagery is a real problem, primarily for women, and we need to figure out how to tackle it. Two: fraud. We are already starting to see deepfakes being used to commit small- and large-scale fraud. There have been very high-profile cases in the UAE, the U.K., and here in the U.S., where people have defrauded financial institutions of tens of millions of dollars by impersonating another person's voice.

That eventually is going to come down to individuals, where we're going to start to see very sophisticated phishing scams. It's not going to be an email or a text; it's going to be a phone call, and it's going to sound like your loved one or your boss or your friend saying: man, I'm in trouble. Can you wire me some money? Can you Venmo me some money? The fraud space is ripe to use deepfakes for ever more sophisticated phishing scams. That's horrifying. And I want to ask a question on this, because maybe a year or two ago I was on the phone with my bank and they said, we're going to take an audio recording of you.

And then we're going to use your voice to verify it's you. Was that good or bad? Is it going to help mitigate this threat of a deepfake impersonating me? No. And here's why: it's a false sense of security, because the idea that your face or your voice is your fingerprint isn't true anymore.

Here's a really good example. Here you are making recordings of your voice on this show, and the next show, and the next show. People are going to be able to stream that, clone your voice, and call your bank with your voice.

Your voice is no longer your fingerprint. Right? So good luck with that. I'll have to keep tabs on my account. Now I'm really scared. Okay, so we have nonconsensual sexual imagery and fraud. Third: we absolutely are starting to see deepfakes being used in disinformation campaigns.

It's taken a little bit of time, but it's already started, where people are creating what look to be newscasters making official pronouncements. We saw one in Venezuela just this week, and in West Africa last week. And we are going to start to see very sophisticated deepfakes being used to spread disinformation, fuel violence, and fuel human rights violations. And propaganda, right? Didn't China a few weeks ago create deepfakes of news anchors who looked like they were from a Western media outlet, Wolf News? Yes, exactly. It was fantastic.

And it's just bashing America and Western democracy. That's right. So, yes, it's going to be used for propaganda, and that's, I think, something we have to be extremely concerned about. And then I want to emphasize, again, this Liar's Dividend, because I do think this may be the larger threat here: we are all going to grow incredibly skeptical of everything we read, see, and hear, and then we're going to retreat into our little tortoise shells and say, well, it's a scary world out there, so I believe what I believe, and facts can't come in and penetrate that.

And that is very worrisome to me. Yeah, I totally agree with you. It affects fact-checking, the fact checkers, all of our media institutions, and our shared sense of what is true and what is false. So next, I'd like to talk about some concrete strategies. We've outlined some of the benefits and discussed the risks.

So what do you think are some concrete technical or policy strategies that need to be implemented now? Good. So first of all, I think there is no silver bullet here. There's no "okay, do this and we have solved all of our problems."

I think we need a number of different things. Let me start enumerating them. One: I think the burden of solving this problem should fall primarily on those who are creating the problem, and those are the people who are creating generative AI and synthetic media and carpet bombing the Internet with their technologies and the outputs of those technologies. I think the burden should be on them.

So here's something that every single generative AI company can do today: they can watermark every single piece of content that their software produces, whether it's an image, an audio clip, a video, or a piece of text. By watermark, think of a piece of currency, a bill. I know we don't use paper bills much anymore, but currency has watermarks in it that make it difficult, though not impossible, to counterfeit.

There is a digital equivalent of a watermark, where you slightly perturb the content in a way that is robust to attack, somebody trying to remove it, but allows for easy downstream detection. The technology has been used for many, many years to protect digital assets against copyright infringement, and you can bake those watermarks into the synthesis pipeline. So when you create an audio clip of Ted Cruz, if that manages to go online and go viral, anybody, including YouTube, by the way, can simply say: ah, that has a watermark, it's a deepfake, and we can tag it.

You don't have to ban it, right? But you can at least annotate it, and then if an intervention is needed downstream, you can take action. So, that way, more transparency.
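(As a rough illustration of that embed-then-detect workflow, and not how any particular generative AI company actually does it, here is a toy least-significant-bit watermark in Python. Real synthesis-pipeline watermarks are spread through the content and engineered to survive compression, cropping, and deliberate removal; this sketch only shows the basic idea.)

```python
import numpy as np
from PIL import Image

# A fixed, recognizable bit pattern (hypothetical payload).
WATERMARK_BITS = np.unpackbits(np.frombuffer(b"TECHYPE", dtype=np.uint8))

def embed_watermark(path_in: str, path_out: str) -> None:
    """Hide the payload in the least significant bits of the first
    pixels. A toy: production watermarks are far more robust."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    flat = pixels.reshape(-1)  # view onto the pixel buffer
    n = len(WATERMARK_BITS)
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS  # overwrite LSBs
    Image.fromarray(pixels).save(path_out, "PNG")  # lossless format

def has_watermark(path: str) -> bool:
    """Downstream detection: re-read the LSBs and compare."""
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    return np.array_equal(flat[: len(WATERMARK_BITS)] & 1, WATERMARK_BITS)
```

A platform could run the detection side at upload time and tag matching content as synthetic, which is exactly the annotate-rather-than-ban intervention described above.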

Okay. I'm going to push back on this a little bit. I am on board with you that if the companies creating deepfake technology implement this, then anything an end user creates would be marked. Okay.

Now, what about an entity that is nefarious, creating deepfakes on their own? Good, as usual you ask the right question. They won't do it. They won't. And then let's also talk about the California anti-deepfake law.

Good. Let's do that too, because both of us hate that law. Yes, we do. Let's talk about the bad actors. So here's the thing with bad actors: I can't stop them from saying, we don't care about your watermarks, go to hell, we're going to do whatever we want. But here's the thing: for them to survive, they need to be on the Internet.

So they are going to need cloud services. So Microsoft, Google, and Amazon can say: look, if you're not going to comply with these basic standards, you don't get to use our cloud services. You want your app on the App Store? Sorry, you're not complying with the standard. You want to have a domain name? Sorry, you're a bad actor. So we can restrict their access to the core technologies that make their products accessible, by telling a handful of companies, really just a handful of companies: look, these are bad actors doing bad things.

Phishing scams, malware, spam: we ban those services all the time. Yeah. And so I think that's the way you deal with those bad actors. You marginalize their access to the Internet. Sure, they'll still be able to do this in their basement, but that's a relatively minor issue, and maybe they won't be able to get it out to the larger platforms. But I do worry that while the large platforms might do this,

the real risk is smaller fringe platforms. Sure. Good. Okay, now you go upstream, to the Cloudflares of the world that give you protection from DDoS attacks. There's always infrastructure, right? There's always somebody above you. There's a bigger shark in the pool.

So, for example, if you can't get a domain name or get your app into the app store, where are you on the Internet? You're gone. Yeah. And look, none of this will be perfect. There will always be people who find ways to abuse it.

Our job here is not to eliminate the risk but to mitigate it. And right now, any knucklehead, and I'm not including you, can go get an app, create a deepfake, and put it online. We need to create some barriers to that simple entry. Yeah, I just called you a knucklehead there. That's okay.

I appreciate it in some way, but I do have a follow-up question. We need to discuss the California anti-deepfake law. So in the state of California, we did have a law, sunsetted in January, that tried to mitigate the spread of malicious deepfakes intended to influence an election. I think it was ninety days before an election, or sixty. Sixty days. So within sixty days it applied; at sixty-one days, you could just go to town.

Yeah, send it out there. Now, that law actually mandated that those who are creating and disseminating these malicious deepfakes put their identification on them. Obviously a nefarious actor is not going to do that, and by and large the law was very ineffective. Yeah. But I'd like to know more about what you think can be done through legislation.

So we have an existence proof of a law that was, well, I don't think they meant to create an ineffective law, but I think they overreacted in the early days, and thankfully it's sunsetted; I don't think it was going to get anywhere. I think there are laws we can pass on nonconsensual sexual imagery. There are reasonable laws we can write that say: look, you cannot put a woman's likeness into sexually explicit material without her permission. That's a relatively uncontroversial law, and in fact many states in the U.S., Australia, and other parts of the world have started to ban this content.

The federal government is starting to take it up here in the U.S., and that's probably the first piece of legislation that will get real traction. On the mis- and disinformation front, it gets a lot dicier, as you know, because now you start pushing up against the First Amendment and freedom of speech, the things that you and I tend to argue about quite a bit. Here's what I would like to see. I think the first step has to be, and I think this will resonate with you, Brandie, transparency. The first thing is: look, just tell us what is what.

And then downstream I can deal with the policy and how to intervene, if I have to intervene. I think it's going to be very, very difficult to write a law that says you cannot create a deepfake that is deceptive, because, of course, I want that Nicolas Cage in The Sound of Music deepfake, and that clearly is deceptive. So I think we're going to have to tread very lightly here.

I don't think anybody has the answer. So my answer right now is: in the early days of this technology, let's bake in a standard that says you must watermark this content, or sign this content, or track this content. You, as the creator, are responsible. And then downstream we'll start figuring out how to mitigate some of this gray area, which is very complex.

Good. Can I add one more mitigation? Yes, please. Up until now, we have been talking about the synthetic side: how do you mitigate harms from synthetic media? But there's another way to think about this problem, which is: how do you authenticate real media? So there's an effort I'm involved in called the C2PA, the Coalition for Content Provenance and Authenticity. It's a multi-stakeholder, open-source effort under the Linux Foundation: Adobe, Microsoft, Sony, Intel, the BBC, hundreds of organizations that are building a specification that will allow your device, your camera or your phone, at the point of recording, to log the date, time, and location,

and all the pixels that were recorded, cryptographically sign all of that into a compact signature, and then put that signature onto an immutable ledger, a blockchain, if you must. Okay.

Now, what that means is that when that piece of content leaves the device, goes to social media, and shows up on your computer, you can go back to the ledger and ask: is this what I think it is? Was this video actually taken in Ukraine in February of 2023, showing human rights violations? Has anything changed since the point of recording? And then it says: no, this is exactly what was recorded; it is cryptographically signed by the device. And you can now say: okay, I know this was an actual recording from an actual person at a specific date and time, and nothing has been modified. So tackling the problem from the real side, not just the fake side, I think is also very important, because then we can say: look, we know it's possible to manipulate lots of things, but if something has been signed by a piece of hardware, at least we know it is a real record of a real event. Yeah.
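(To make that pipeline concrete, here is a minimal sketch of point-of-capture signing and downstream verification in Python, using the `cryptography` package. This is not the actual C2PA specification, which defines signed manifests, certificate chains, and embedding formats; the field names and key handling are simplified assumptions, and in a real device the private key would live in secure hardware.)

```python
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical per-device key; real devices keep this in secure hardware.
DEVICE_KEY = Ed25519PrivateKey.generate()

def sign_capture(pixels: bytes, lat: float, lon: float) -> dict:
    """Bind a hash of the recorded pixels to date, time, and location,
    then sign the bundle, yielding a compact provenance record that
    could be published to an immutable ledger."""
    record = {
        "sha256": hashlib.sha256(pixels).hexdigest(),
        "timestamp": time.time(),
        "location": [lat, lon],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = DEVICE_KEY.sign(payload).hex()
    return record

def verify_capture(pixels: bytes, record: dict) -> bool:
    """Downstream check: has anything changed since recording?"""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    try:
        DEVICE_KEY.public_key().verify(bytes.fromhex(record["signature"]), payload)
    except InvalidSignature:
        return False  # metadata tampered with or not signed by this device
    return hashlib.sha256(pixels).hexdigest() == record["sha256"]
```

Any change to the pixels or the metadata breaks verification, which is what lets a downstream viewer ask, "is this exactly what the device recorded?"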

Now, my question for you is, I know that when you upload videos to YouTube, some of your metadata is taken off. So is YouTube currently agreeing to do this, to make sure that the videos we post can be tied back? Yes, okay. So first of all, you're absolutely right: when content is uploaded to social media, the metadata almost always gets stripped. It doesn't actually get deleted; it gets stripped from the public copy, and the platforms hold on to it.

It's very rich data. So we do not, and this is a bit of a sore point, have the YouTubes, the Instagrams, the TikToks, the Facebooks, and the Twitters of the world agreeing to this, and I think there's going to have to be some regulatory pressure to get them there. And we're not putting value judgments on content; we're not saying good or bad, favor or don't favor. We're simply saying: respect the C2PA signature. Strip out all the metadata you don't want if you must, but leave that one little piece of metadata in there that will allow downstream identification. I think we'll eventually get these companies to come on board, because the one thing you can say is that most people agree social media and the Internet are a bit of a mess right now.

And I think we are growing weary and tired of the tech companies saying everything's fine, everything's fine, everything's fine, while profiting quietly from the mess. So I do think they are going to come on board, but they are not on board right now. Yeah, and I do think they're feeling increased pressure from Congress and from state legislatures, so they might actually get on board and implement these things voluntarily.
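(Returning to the strip-everything-except-the-signature idea above: here is a toy sketch of what that could look like in an upload pipeline, for PNG images. The "c2pa" text-chunk key is a stand-in; the real specification embeds signed manifests differently, so treat this purely as an illustration of the policy being described.)

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def strip_metadata_keep_provenance(path_in: str, path_out: str) -> None:
    """Toy upload pipeline: re-encode the pixels, dropping all metadata
    except one (assumed) provenance field, so a downstream C2PA-style
    check can still find the signature."""
    img = Image.open(path_in)
    provenance = img.info.get("c2pa")  # hypothetical chunk name

    # Rebuild the image from raw pixels only: everything that lived in
    # ancillary metadata chunks is discarded.
    pixels = img.convert("RGB")
    clean = Image.new("RGB", pixels.size)
    clean.putdata(list(pixels.getdata()))

    meta = PngInfo()
    if provenance is not None:
        meta.add_text("c2pa", provenance)  # the one field we preserve
    clean.save(path_out, "PNG", pnginfo=meta)
```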

Yeah. And hopefully that will put more pressure on them to do so. And yeah, I agree. And by the way, I think there's global pressure, too.

As you know, there's pressure coming from the EU, from the UK, from Australia, from all over the world. I think the game is up. For the last 20 years you had some fun, but it's a mess, and we need to start reining in some of these problems. All right. Thank you so much, Hany.

Professor Hany Farid, thank you for joining me today. Deepfakes are only going to get better. While they may be used as a new creative outlet, setting up appropriate guardrails now, like requiring those digital watermarks, will better ensure society benefits from their use.

TecHype was brought to you by the CITRIS Policy Lab and the Goldman School of Public Policy at UC Berkeley. Want to better differentiate fact from fiction about other emerging technologies? Check out our other TecHype episodes at TecHype.org. [Music]
