2023-07-12 Conversation with Tristan Harris, part 1

What happens when a persuader, you know, a computer, knows more about your psychological vulnerabilities than you know about yourself? How can that party, with asymmetric knowledge about your psychological vulnerabilities, be in an ethical relationship with you? And that drove a philosophical inquiry that I kind of had to be self-taught in, how to think through that. And it brought me into an interesting space of questions. For example, a magician is in an asymmetric relationship with their audience.

But a magician is different. One example I remember reading from some of this exploration I did was, you know, an actor does not come out onto a theater stage and say, "I'm going to give you a disclaimer that for the next two hours I'm going to move you emotionally, but I want you to know that this isn't real and that I'm not actually the person that I'm about to play." That's because the audience is in a contract where they know that the actor is playing a role, even though they might be deeply affected by it. But magicians can actually be so persuasive.

And I don't know if you've seen someone like Derren Brown, but they can actually make people develop new superstitious or metaphysical beliefs about what is possible, about changing reality, things that genuinely feel impossible. And there are magicians who believe that you should actually make sure people understand that they are not actually psychics. Hi, welcome to Innovative Minds. It's my pleasure to be your host.

Just call me T+. This show is a forum for leaders in tech and politics to discuss how to solve today's problems with today's tools. Today, our special guest is Tristan Harris, co-founder of the Center for Humane Technology. I'm also here with Audrey Tang, Taiwan's digital minister. Today, we talk about tech ethics in the modern age, why we are addicted to our smartphones, and solutions to make our lives freer. Okay, so before we get started, I want to introduce our two guests today.

Previously described by The Atlantic magazine as “the closest thing Silicon Valley has to a conscience,” Tristan is a man on a mission to stop the infernal cycle of social media addiction. Tristan used to work as a design ethicist at Google. He is the co-founder of the Center for Humane Technology, a nonprofit organization working to align technology with humanity's best interests.

Hi Tristan, welcome to the show. Great to be here with you. Audrey Tang is Taiwan's digital minister. She became Taiwan's youngest minister without portfolio in 2016 when she headed the Public Digital Innovation Space.

She has been a hacktivist for over two decades. She is also a promoter of open source innovation. Hi Audrey. Hi, and hi Tristan. Good local time! Good local time. Good to see you again.

Tristan, you appeared in the documentary “The Social Dilemma,” which sheds light on the dark side of technology and its effects on our attention, well-being, and society. Have things improved since then? That's a very interesting question. The documentary, “The Social Dilemma,” was seen by something like 150 million people in 190 countries. It was a big success on Netflix, which should speak to how many people's minds it expanded to understand what's going on behind the screen: that there is a supercomputer pointed at our brains, not interested in how to strengthen democracy, per Audrey's work, but interested in how to keep you scrolling on that screen.

Have things gotten better or worse since then? You know, I can make a list on either side, right? I can make a list on one side and say, basically, Twitter is the same outrage machine that it was before the film came out. Basically, Facebook is still addicted to keeping people's attention and has a trillion-dollar market cap based on perpetuating the same process. Not that many people have canceled their accounts or turned off notifications. So, I could be very pessimistic if I make that list.

On the other hand, I can make a different list of all of the institutions that are now aware of it, all the parents that are now mobilized and are trying to get their kids off of social media. I meet many parents whose kids, I'm sorry, this is more of the negative side, but whose kids have committed suicide because of some of this stuff. And they are organizing to pass laws now to try to make change. There's an effort to ban TikTok in the United States. That is a sign, at least, of recognizing some of the problems of what this sort of arms race for attention does.

I did want to correct one thing in the diagnosis, though, which is that it's not just that we have this dark side of technology. What we have to look at is whether this technology is steered by perverse incentives, and specifically by arms races on a perverse incentive. In the case of social media, it was the arms race for engagement or attention: if I don't go lower in the race to the bottom of the brainstem to addict you or to outrage you with political content, and my competitors do, I'll just lose. If I don't put a beautification filter on those 13-year-olds, the filter that inflates their sense of self-image and only gets them more likes when it's on, I'm just going to lose to the other companies that do offer the beautification filter.

And so I wanted to emphasize that because I think, as we go on in this conversation about how we create a more humane world, what we have to look at is where there are perverse incentives and where there are arms races on those perverse incentives, because that's relevant to where AI is going to take the conversation next. Yeah, I think it's a very important contribution to help people realize that not all the negative externalities caused by technology are easily reversible. And I think this is one of the most important points for policymakers. There are many policymakers that think, oh, let's just let the technologies have fun for a while; if there's something bad produced by their technology, like polluting the rivers and things like that, maybe we can just invent technologies and deploy them to clean up the water afterward. And it may work for some sorts of technologies, but for societal-scale technologies, social media being one and generative AI being the other, there may be a point of no return, where it becomes very difficult to clean up the mess afterward. And so I would say that this time, the leaders of the world are taking the generative AI dilemma much more seriously, partly thanks to “The Social Dilemma” film.

That's a great point, Audrey. And actually, just to agree with you on something very important: for anybody who doesn't know the language of externalities, there's a difference between the profits that are gained and the externalities they generate on society's balance sheet, costs that are borne by society: air pollution, lead pollution, stuff dumped in rivers, shortening attention spans in the case of social media, or breakdown of shared reality in the case of social media. And there's a difference between externalities that are reversible and externalities that are irreversible or very hard to reverse. And we should be especially concerned about technologies that cause irreversible externalities. And I think to your point, Audrey, I really appreciate you saying that, because I do think that the awareness around social media has driven up more concern as we're approaching the generative AI regulation conversation, and that's something to celebrate. Exactly.

And the climate crisis, the canonical example of something that is very difficult to reverse, is now also used as one of the metaphors, or not really a metaphor but an analogy, for the damage to the fabric of trust that could be caused by generative AI. Following up on this, Tristan, could you tell us more about the concept of persuasive technologies? Yeah, so in the film “The Social Dilemma,” we talk about this lab at Stanford called the Stanford Persuasive Technology Lab, where the co-founders of Instagram studied and where I took a class on behavior design. The professor, BJ Fogg, is very famous. Notably, people need to understand that it wasn't just a lab where evil geniuses twirled their mustaches saying, "How do we manipulate and persuade the world?" It was actually about how do we use persuasive technology for good? So, what do we mean by persuasive technology? It means that, just like a magician knows something about your mind that you don't know about your own mind, and it's the asymmetry of knowledge that makes the magic trick work, persuaders have an asymmetry of information.

They know something about how your mind is going to respond to a stimulus. Like, if I put a red notification badge on that social media product, it's going to make people click on it more than if I put a blue one. If I make it so that you pull to refresh like a slot machine, it'll be more addictive; that's a stronger persuasive technique to keep you coming back for more than if I don't have the “pull to refresh” kind of feature on the product. So, at this lab, what they were studying was how we could apply persuasive technology in a way that would help people have healthier life habits. So, actually, for example, I studied with the co-founders of Instagram, and we actually worked together on something called Send the Sunshine. This was in 2006, before the first iPhone came out.

And the idea was, imagine a social network that knew when two people in different zip codes had -- in one of the zip codes, they had bad weather. So, the person was getting depressed. They had seven days of bad weather. And this message would-- I mean, the system would text your other friend who was in a good zip code where there was sunshine and say, "Hey, do you want to text Audrey, who's had seven days of bad weather, a photo of the sunshine?” So, this was an example of persuasive technology being used to slightly nudge people who had some love and some sunshine to cheer up people who had a little less sunshine. And that's a good example.
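To make the nudge mechanism described above concrete, here is a minimal sketch of that kind of logic in Python. Everything in it is an illustrative assumption based only on the description in this conversation: the seven-day threshold, the data model, and the placeholder weather and SMS functions are hypothetical, not the actual 2006 prototype.

```python
# Minimal sketch of the "Send the Sunshine" idea described above.
# All names, thresholds, and placeholder services are illustrative
# assumptions, not the actual 2006 prototype.

from dataclasses import dataclass, field

BAD_WEATHER_STREAK = 7  # days of bad weather before friends get nudged


@dataclass
class User:
    name: str
    zip_code: str
    friends: list["User"] = field(default_factory=list)


def days_of_bad_weather(zip_code: str) -> int:
    """Placeholder for a weather lookup (hypothetical service)."""
    raise NotImplementedError


def is_sunny(zip_code: str) -> bool:
    """Placeholder for a weather lookup (hypothetical service)."""
    raise NotImplementedError


def send_text(recipient: "User", message: str) -> None:
    """Placeholder for an SMS gateway call (hypothetical service)."""
    raise NotImplementedError


def sunshine_nudges(users: list["User"]) -> None:
    """Nudge friends in sunny places to cheer up friends stuck in bad weather."""
    for user in users:
        if days_of_bad_weather(user.zip_code) < BAD_WEATHER_STREAK:
            continue  # only people in a long bad-weather streak qualify
        for friend in user.friends:
            if is_sunny(friend.zip_code):
                send_text(
                    friend,
                    f"Hey, {user.name} has had a week of bad weather. "
                    "Want to text them a photo of your sunshine?",
                )
                break  # one nudge per streak; the point is care, not spam
```

The design point is the one Tristan makes next: the same persuasive lever, a well-timed prompt, can serve care or engagement depending on what it is optimized for.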

But of course, the way that this all went down was that these persuasive techniques became applied to how do we keep people's attention because the venture capitalists that funded social media with the business models of advertising and engagement all depended on using each of those persuasive nudges for more attention, growth, engagement, and so on. So, that's what led to a lot of the externalities and the climate change of culture that Audrey spoke to earlier. Audrey, what are your thoughts on persuasive technologies? Yeah, too much of a good thing, so much that we've become addicted to it, is almost by definition a bad thing. And Taiwan responded to that in our core curriculum of basic education. Starting 2019-ish, we switched this idea of media and digital literacy to media and digital competence.

Because “literacy” is basically trying to get the children into a mood of critical thinking and so on, but really good precision persuasive technology just bypasses all that. There is no critical-thinking frame that can un-addict people from these specially designed persuasive technologies. On the other hand, the children can actually look at how the “sausage” is made, at how such persuasive technology is designed, and even, you know, with just Scratch, Raspberry Pi, or Arduino, try to make some of it themselves, and contribute to fact-checking, contribute to data stewardship, and so on.

Then they become immune the same way that journalists themselves become immune to one-sided conversations and outrage and things like that, because they learn about the importance of checking your sources and also having your own narratives, your own voice. So we discovered that having your own voice, one that has a democratic impact, is one of the most important antidotes to counter this kind of precision persuasive technology. And to facilitate that, of course, the government needs to be radically transparent and provide the real-time data and so on that is required for this kind of narrative to have an impact. Tristan, what do you think of the initiative Audrey just mentioned? Well, I've already said, in fact, actually, I'm very proud to have introduced a lot of Audrey's work to people on our podcast. Actually, I can tell you, Audrey, that the directors of “Everything Everywhere All at Once” became big fans of your work through listening to our interview on our podcast.

And I say that because I have always been so inspired by what you're doing in Taiwan to show that there can be a 21st-century democracy. And I think you are leading the charge on how to update the spirit and principles of what an open, participatory democratic society looks like in the age of AI and exponential tech. So, I love your principle of digital competence rather than digital literacy. One of the things that I think we're particularly in need of is being able to synthesize multiple perspectives and take the side of other people's arguments. There's the phrase "steelman" versus "strawman" in an argument.

You know, steelmanning should be a principle that everybody knows. How do I give the best possible version of the argument that you're making, to prove to your nervous system that I understand the point that you're making? And can I hold the argument that you're making alongside the other side of the argument? You could even imagine AI tools that help us do that more efficiently, so that whenever you see any argument, it finds all the counterarguments and tries to show them in interesting ways. And obviously that's related to your work with Polis. Exactly, yes. And part of that, of steelmanning, is that if you're seeing with only one eye, like the left eye without the right eye, everything looks very flat, very two-dimensional, and the depth of discussion is lost.

And part of steelmanning is to consider the argument from the right, and then basically make it so that the vision of the left and the right becomes stereo vision. So, while the left and right eye still don't see “eye to eye,” it becomes possible for the society to reflect on what's before us with more depth of discussion. I like that pun there of seeing eye to eye. That's good. Coming back to “The Social Dilemma,” Tristan, are social networks solely responsible for the polarization of society, or are they merely amplifying existing societal trends? So, per our last little point about complexifying the narrative and not doing single-cause-and-effect reasoning, which would be just one side of an argument, obviously social media didn't create from scratch all the polarization that exists in our society, or create from scratch the addictive tendencies that cause people to be alone or isolated. But did it massively amplify them, with the power of billion-dollar or trillion-dollar market cap companies and a supercomputer pointed at the 13-year-old in your society who's sitting there by themselves, who doesn't know that there's a supercomputer pointed at their brain calculating which TikTok video to show them next?

Did it exacerbate those trends? Yes. Did polarization pre-exist? Yes. Does Twitter have an incentive to actually provide a decentralized incentive for everybody to add further inflammation to a cultural fault line? And the better you are at adding inflammation to that cultural fault line, the more of a division entrepreneur you are, the more likes and followers we will pay you. If you paid people a thousand dollars every time they add inflammation to a cultural fault line, and you pay them nothing when they synthesize multiple perspectives, what is your society going to look like after you run it through that washing machine for 10 years? And I think that's kind of the set of incentives that we have laid down with social media. And again, not because Twitter is evil or Jack Dorsey meant for that to happen, but because they themselves were caught in that race to the bottom of the brainstem on that perverse incentive. I see.

The accumulation of these unfavorable incentives results in amplification. Could fragmenting the internet be a potential solution for combating polarization? Yeah, I think that's a complicated question. When there are only three places for people to go, and those three places are all perversely incentivized by attention and engagement, we see the problems of that.

So, we all know what that led to. If you have, though, like a thousand places that are all optimizing for attention and engagement, in other words they're still optimizing for that perverse incentive, but it's happening now in many more untraceable places, like what's going on on Twitch or Discord or a hundred other places, it's harder to know. And so the fragmentation of society goes up. Yeah, I'm curious what Audrey thinks about that. Yeah, I think one of the main trends that we're seeing is that people are going to the places where they have more control.

They go to these more niche places, not exactly because these are more fun or more fluid or whatever, but rather because they're smaller, so the creators have to pay more attention to what the users actually need. And the nudges, right, the persuasive technologies that are deployed, become at least partially co-designed. And that is a very different dynamic. So, I'm cautiously optimistic, just as I'm cautiously optimistic about open source AI models. Yes, it creates its own dangers.

But at least it leads many more people to learn how to steer such issues, and also to see such issues with much more clarity, compared to if we only have three very difficult-to-oversee social media companies. Yeah. I mean, another trade-off is that when people talk in small groups or communities, they tend to have more effective conversations than when you are talking to the entire world at once, and your conversation is visible to the entire world at once.

So Twitter is not really about free speech as much as it's about the overall design, which is more like a gladiator stadium. It rewards each of us for pounding our chest and raising our spear in the air when we're making a point, because we have a really big audience. And there are problems that are inherent to having a very, very big audience. Now, Audrey, I think you're working on design theory that sort of shows that you can do communication in large audiences, but in a more democratic, respectful, tolerant way. But that's not how Twitter is designed. I'd love for you to-- if you have any thoughts about that.

Yeah, exactly. So the idea of Plurality, or collaborative diversity, indeed means that when people talk in their smaller communities, but these communities have a defined way to interoperate with other smaller communities of equal status, then the common knowledge, the generally agreed consensus across those ideologically very different communities, tends to be the bridge-making statements. And based on those bridge-making statements, such as Community Notes on Twitter, which is like the one plural part of Twitter, that tends to unify people toward a center of gravity that can create this kind of dynamic we just talked about: steelmanning each other, looking in stereo vision, and things like that.
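As a rough illustration of the bridge-making idea Audrey describes, here is a minimal sketch in which a statement is scored by its weakest approval across groups, so it only surfaces if ideologically different communities all endorse it. The group labels, the min-approval scoring rule, and the example data are assumptions for illustration; real systems such as Polis or Community Notes use richer models (for example, matrix factorization over the full rating matrix) rather than this simple minimum.

```python
# Minimal sketch of "bridge-making statements": rank statements by how well
# they are received across different communities, not by raw popularity.
# Group labels, the min-approval scoring rule, and the example data are
# illustrative assumptions; production systems use richer statistical models.

from collections import defaultdict


def bridging_scores(ratings: list[tuple[str, str, int]],
                    group_of: dict[str, str]) -> dict[str, float]:
    """ratings: (rater_id, statement_id, vote) tuples, vote in {0, 1}.
    group_of: maps each rater to a community label.
    Returns each statement's bridging score: its minimum approval rate
    across groups, so only statements every group likes score well."""
    agree = defaultdict(lambda: defaultdict(int))  # statement -> group -> yes votes
    total = defaultdict(lambda: defaultdict(int))  # statement -> group -> all votes
    for rater, statement, vote in ratings:
        group = group_of[rater]
        total[statement][group] += 1
        agree[statement][group] += vote

    return {
        statement: min(agree[statement][g] / n for g, n in per_group.items())
        for statement, per_group in total.items()
    }


# A divisive statement wins one group and loses the other;
# a bridging statement is approved by both, so it ranks higher.
ratings = [
    ("a1", "divisive", 1), ("a2", "divisive", 1), ("b1", "divisive", 0),
    ("a1", "bridging", 1), ("a2", "bridging", 1), ("b1", "bridging", 1),
]
groups = {"a1": "blue", "a2": "blue", "b1": "green"}
print(bridging_scores(ratings, groups))  # {'divisive': 0.0, 'bridging': 1.0}
```

This is the incentive flip Tristan returns to later in the conversation: ranking by the weakest cross-group approval rewards "unlikely consensus" instead of rewarding whoever inflames a cultural fault line.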

And so I recently visited DG CONNECT in Brussels, in Europe, and I said to them that, you know, your Digital Markets Act is basically making sure that social media companies that reach a certain gatekeeper status in the future always have to interoperate with each other through a common protocol. I think that is one of the visions we're working toward. That's an interesting development to keep an eye on. Tristan, before we delve into AI ethics, do you have anything else to say about the documentary, "The Social Dilemma?" I think, per what Audrey said, I didn't really steelman all the positive points and developments that have happened. It's sort of hard to remember because it's affected so many different places. We've been reached out to by leading institutions in so many countries, heads of state, former heads of state.

One of the things that surprised me is just how unifying the issues are. People like to think that there's a big debate about big tech, and that the left and the right in the United States disagree: censorship versus free speech, or how do we deal with content moderation and misinformation.

When you frame the issues as the race to the bottom of the brainstem that is really bad for children, for example, everyone agrees with that. And division is a unifying issue, I like to say. So, I think that no one wants to have systems that are built to divide us. And I think that there are ways of talking about it that we have found that the left, the right, you know, across the political spectrum, can kind of get on board with the core diagnosis. Unfortunately, if we're sort of teeing up the next part of the conversation, the reason that social media has been so hard to regulate, at least in the United States, is that it became so entangled with everything in our society.

So, there's vested interests. Journalism became entangled with it. You can't be a journalist except by being on Twitter. And major media get a lot of their traffic from these things.

Campaigns and elections all run through social media. There are now parts of the national security apparatus, including open source intelligence signals where you're doing large-scale data mining on Twitter. There's a lot of different sort of things that are now built into and entrenched and entangled with social media existing.

And so it makes it hard to make really big changes to it easily and get people on board. I would say that the thing that should be easy to do is simply saying you can't operate the information commons of a democracy without being a fiduciary, protecting and caring for that information commons. And I think that means specifically going after the engagement incentive and replacing it with a bridging incentive. If we were to pass one law, it would be to take these global information systems and actually have them sort for bridging, for “unlikely consensus,” the way that Audrey's system works. I think that would make an enormous difference, especially going into something like the election coming up next year in the United States. Yeah, definitely. And in Taiwan we're also experimenting with this idea of holding those social media companies liable if their externalities cause damage, for example, by hosting investment advice from people who are very shady, not professionally accredited investors, and so on.

So, instead of going into this takedown debate, which is a divisive issue, we simply say, no, we're not going to take it down. But if you happen to cause scams, cause fraud, cause damage and so on, you're also liable for it. Yeah, I think liability is one of the ways of dealing with the harms. Because in general, in any economic system, if there is no liability for the externalities that are happening, they're going to keep happening. And the way you correct a system that's basically a race to hide externalities is that you have to make those externalities visible and measurable, and then have costs associated with them. And then, as soon as they have costs, the race changes, because now it's not a race to that incentive by pushing all the externalities somewhere else.

It's a race to compete while including those externalities. So who can create the most engagement but not create the most addiction, fraud, scams, polarization, etc. in society? And that's the real adjustment to the race that we want to create, from a perverse incentive to a positive incentive. Yeah, definitely. Next, I'd like to talk about the ethics of technology, especially about the challenges posed by AI. To begin with, I'd like to ask Tristan, as an ethicist, what are the primary philosophical schools of thought that inform your decision making? Well, I'll answer this question in an unconventional way.

So, I actually didn't study philosophy in college. I studied computer science, and there was a major at Stanford called Symbolic Systems where you kind of merge cognitive science, psychology, linguistics, philosophy, theory of mind, all together into this weird discipline called symbolic systems. And that was basically the closest thing to a major that I kind of studied. And it wasn't until I was actually at Google that I started to think about the problems of the ethics of persuasion. What happens when a persuader, a computer, knows more about your psychological vulnerabilities than you know about yourself? How can that party, with asymmetric knowledge about your psychological vulnerabilities, be in an ethical relationship with you? And that drove a philosophical inquiry that I kind of had to be self-taught in how to think through.

And of course, it brought me into an interesting space of questions. For example, a magician is in an asymmetric relationship with their audience. But one example I remember reading from some of this exploration I did was that an actor does not come out onto a theater stage and say, "I'm going to give you a disclaimer that for the next two hours I'm going to move you emotionally, but I want you to know that this isn't real and that I'm not actually the person I'm about to play." That's because the audience is in a contract where they know that the actor is playing a role, even though they might be deeply affected by it. But magicians can actually be so persuasive -- and I don't know if you've seen someone like Derren Brown -- they can actually make people develop new superstitious or metaphysical beliefs about what is possible, about changing reality, things that genuinely feel impossible. And there are magicians who believe that you should actually make sure people understand that they are not actually psychics, that they don't have supernatural powers; they have to disclose that because they're just that effective.

And I think that, you know, I learned a lot from spending basically something like six years in a deep philosophical inquiry, studying cults and cult dynamics, how you ethically persuade people in groups, social media. There's actually a whole area of ethics called transformative ethics.

There's a Yale professor named L.A. Paul who asks how you ethically transform someone into a vampire, when the new version of them that's a vampire has different tastes and preferences than the version of them before they were a vampire. And so that was kind of the core philosophical inquiry I was in, and through that I studied, you know, all sorts of things. But existentialism and Buddhism have kind of been my core schools of thought for thinking about technology now. Audrey, same question for you. What philosophy inspires you the most in your work?

Well, I think it goes without saying that I often quote the Dao De Jing in my interviews. So definitely the thoughts of Lao Tzu are on my mind. And “De” in particular, I think, translates to virtue ethics. Even though “virtue” in everyday English doesn't mean what it used to mean, I think there are current ways to express the same idea of virtue ethics, for example, the ethics of care and so on. And I think this is particularly relevant to the cases that Tristan has been describing, because to an infant, a parent is like a magician. The parent almost by definition knows more about the infant's internal state than the infant's own growing psychology knows about itself, right? And in these kinds of relationships, a parent has a duty of care, which is a kind of virtue, “De,” right? In Lao Tzu's teaching, a lot of the Taoist thinking is about how to not dominate the infant, letting the infant have the room to grow, while being a good-enough parent: not a perfectionist parent who pushes the child into a predefined mold, but rather one who finds a way to grow together with the child.

This is so critical, and actually, if we had more time, I'd love to explore this more deeply, because it is exactly a relationship of care: there's essentially one party with not just more knowledge but more power in steering the outcome of the less powerful party. And you can't have a system where one person has a hundred times more power and a hundred times more knowledge about the vulnerabilities of the other person, and their incentive is to utilize those vulnerabilities, that asymmetry, to their advantage. And that's what the advertising-based business model is. The TikTok supercomputer is way more powerful than the computer that beat Garry Kasparov at chess, except now the chessboard is your mind.

Imagine a world where a person who was a hundred times more powerful and had a hundred times more knowledge about you than you have about yourself was incentivized to use that daily in a way that was just barely misaligned with your interests. Like, it was just minimally activating your nervous system, keeping you addicted, engaged, amused, while basically robbing you of all the other things that matter. And I think that's exactly the ethics that we need to have, the ethics of care. Yeah, exactly, because it's called exploitation, right? The way you describe it, everybody knows it's exploitation of the child. But we need to see the computerized systems or the AI systems as being in the same relationship, in order to identify that it's not just individualized abuse, but rather a system of exploitation. Yeah, and imagine then that we replace parents with TikTok or the technology complex.

Right now it's like having parents who want to extract as much value for themselves from their children. I mean, that would be a perverse environment, but we have basically outsourced parenting to technology. And we have to see that it is inevitable that technology plays a developmental role in humans, which is why we need that deeper change we talked about earlier: changing the fiduciary obligation of any system that is in that developmental role inside a society, from maximizing engagement or its own self-interest to instead being in a caring relationship of increasing care. Exactly. Thank you both for your insights. Moving on, Tristan, you argued on your podcast, “Your Undivided Attention,” that humanity's first contact moment with AI was social media, and that humanity has lost. According to you, our encounter with large language models is humanity's second contact moment with AI. Is history condemned to repeat itself? Well, first, let me explain.

So obviously, AI preexisted social media. So, I want to make sure I'm not simplifying: there have been AI systems everywhere in our society since the 1970s and 1980s. So, obviously social media wasn't first contact with AI, but the AI that drove social media was a curation AI; it was just trying to pick which content, which TikTok video, which Instagram photo, to show your nervous system next. That AI was a narrow, runaway AI optimizing for a single incentive misaligned with society. And even though it was so simple, it was enough to unravel the shared reality of a lot of democracies around the world, cause trust to backslide, increase polarization and tribalism, and cause some pretty massive effects on the mental health of youth, et cetera.

So, the point is to notice that there is a very simple AI that already ran away from its creators. Once the misalignment was detected, have we fixed the misalignment with social media? Is it now aligned? Well, we just talked about how we want to realign it with this regulation, to change from engagement to care, but we haven't fixed that misalignment. And I think that says something, because we always say, well, if we made an AI, surely we would just unplug it if it was ever misaligned with society, or we would fix the misalignment.

Well, here we have a test case where we actually rolled it out. It got entangled with core parts of our society and even became part of our daily lives. It's adjacent to and embedded with the things that we wake up to every day when we open our phones. So, now we think about large language models and second contact with AI, which we call creation AI rather than curation AI. Curation AI is picking which tweet to show your nervous system. Creation AI is the ability to generate any kind of content, to decode language and create content at scale.

So we think of-- I know you had Yuval Harari on the podcast. We wrote an op-ed together in The New York Times where we talked about how this new generative AI has the ability to decode any language. In democracy, conversation is language. The narrative of a society is language. The media is language.

Journalism is language. Also, code is language. Law is language. And so if generative AI can hack language, it can hack the foundations upon which society depends, whether that's code-- find me cybersecurity vulnerabilities in your infrastructure-- whether it's religions, find me a way to tie some events that are happening in the world to an extremist religion and actually sow propaganda directly at that audience, or whether it's law, hey, GPT-4, find me a loophole in this legal contract. Literally help me get away with murder and not have it be against the law of that country.

Do that at scale. When you can hack language, you can hack the fundamentals of a society. So, this is a really key point, because we have to be able to predict and anticipate what kinds of things people can do with generative AI. And so I think the important point we wanted to raise with this talk we gave, called the AI Dilemma, is that since we weren't so good at noticing first contact, we have to see what we got wrong, to get it right this time before second contact fully takes place. And the key thing is to regulate AI before we entangle it and become dependent on it in a way that we're not able to change and realign, before it's too late.

From what I've heard, you believe that governments should step in and regulate AI before we lose control. Is that correct? Yes, we will need to find a way to coordinate specifically the race dynamic around AI, the arms race to release AI. In social media, the race to the bottom of the brainstem, that arms race for engagement and attention, is what drove all the climate-change-of-culture harms and externalities with first contact. With the second contact with AI, it's really going to be driven by this arms race to deploy generative AI everywhere as fast as possible, because that's how you're going to outcompete. That's how OpenAI outcompetes DeepMind or Anthropic or whatever. And when you're racing to deploy that fast, you're not moving at a pace where you can get alignment, safety, bias, or discrimination right.

You're going to get all those things wrong. And so the question is, how do we deploy this? How do we set up that race in a way where, again, we move from a perverse incentive to a positive incentive? So, that's, I think, the question. I'd love to hear what Audrey thinks about this. Yeah, so just for the audience's benefit, the word “alignment” that Tristan is using is a very technical term, and it means not an AI that's more powerful, but rather an AI that can respond to the requirements, to the needs, of people. That is to say, an AI that cares more, right? So, we've been talking about how power does not necessarily equal care. And I think one of the fundamental errors that people made in the first contact with AI, in terms of social media, is that initially, many bought into this narrative that as those narrow AIs become more powerful, they will also be more capable of care.

For example, if the human moderators cannot deal with the vast amount of hate and toxicity, well, the narrow AI will at some point be able to very accurately play a careful moderator's role, and so on. And we have seen that this was not the case, but that's with hindsight, right? At that point, many people thought that power and care are the same thing. In AI jargon, that means many people thought that capability and alignment are the same thing. And I think one of the most important points we're going to make is that these are not the same thing, and given the perverse incentives, it's very likely that they are actually at odds with each other. Yes, exactly. To ensure that the second moment of contact with AI doesn't result in the same negative externalities, having the dialogue we are having today is crucial.

I want to thank Tristan and Audrey for joining us today. We'll continue our conversation in part two of this episode. If you like today's show, be sure to subscribe, share and let us know what you think. See you next time on Innovative Minds.

toxicity or pollution. And I feel like what we need to do, what your work is showing, is how to increase the dimensionality of care to match the dimensionality of power, using technology. So, it's not an anti-technology conversation; you're really showing how to do it with technology.

Right, exactly. It's like we're crossing a freshly frozen ice sheet on a river. We want to pay attention, of course, to the brittle parts so that we don't all fall into the frozen river, but we want to do it in a way that's decentralized, meaning that each person on the river, at different spots, needs to have some sort of communication, so that anyone who discovers vulnerabilities in the ground we're treading notifies everybody. And this is like dimensional scouting, so that people can understand what's the most existentially risky, and then everybody pays attention and takes care of that. And so the ability to sound the alarm and the ability to pay attention and to take care of each other, I think, is paramount in this day and age. Hi, I'm Tristan Harris, co-founder of the Center for Humane Technology, and I'll see you on Taiwan Plus. Hello, I'm Audrey Tang, Taiwan's digital minister.

See you on Taiwan Plus.
