Can We Be in an Ethical Relationship With AI? ⎸ #innominds S2EP7 (Part 1)



Hi, welcome to Innovative Minds. My pleasure to be here as your host; just call me T-Plus. This show is a forum for leaders in tech and politics to discuss how to solve today's problems with today's tools. Today our special guest is Tristan Harris, co-founder of the Center for Humane Technology. I'm also here with Audrey Tang, Taiwan's digital minister. Today we talk about tech ethics in the modern age, why we are addicted to our smartphones, and solutions to make our lives freer.

Before we get started, I want to introduce our two guests. Previously described by The Atlantic as the closest thing Silicon Valley has to a conscience, Tristan is a man on a mission to stop the infernal cycle of social media addiction. Tristan used to work as a design ethicist at Google. He is the co-founder of the Center for Humane Technology, a non-profit organization working to align technology with humanity's best interests. Hi Tristan, welcome to the show.

Great to be here with you.

Audrey Tang is Taiwan's digital minister. She became Taiwan's youngest minister without portfolio in 2016, when she headed the Public Digital Innovation Space. She has been a hacktivist for over two decades and is also a promoter of open source innovation. Hi Audrey.

Hi, and hi Tristan. Good local time; good to see you again.

Tristan, you appeared in the documentary The Social Dilemma, which sheds light on the dark side of technology's effects on our attention, well-being, and society. Have things improved since then?

That's a very interesting question. The Social Dilemma was seen by something like 150 million people in 190 countries. It was a big success on Netflix, which should speak to how many people's minds it expanded to understand what's going on behind the screen: that there was a supercomputer pointed at our brains, not interested in how to strengthen democracy, per Audrey's work, but interested in how to keep you scrolling on that screen. Have things gotten better or worse since then? I can make a list on either side.
On one side, I could say that Twitter is basically the same outrage machine it was before the film came out; that Facebook is still addicted to keeping people's attention, with a trillion-dollar market cap based on perpetuating the same process; and that not many people have canceled their accounts or turned off notifications. So I could be very pessimistic if I make that list. On the other hand, I can make a different list: all the institutions that are now aware of the problem; all the parents who are now mobilized and trying to get their kids off social media; the many parents whose kids have committed suicide because of some of this, who are organizing to pass laws now to try to make change. There's an effort to ban TikTok in the United States, which is a sign of at least recognizing some of the problems this arms race for attention creates.

I did want to correct one thing in the diagnosis, though. It's not just that technology has a dark side. What we have to look at is when technology is steered by perverse incentives, and specifically by arms races on a perverse incentive. In the case of social media, it was the arms race for engagement and attention: if I don't go lower in the race to the bottom of the brain stem to addict you, or to outrage you with political content, and my competitors do, I'll just lose. If I don't put a beautification filter on those 13-year-olds, inflating their sense of self-image so they only get more likes when the filter is on, I'm just going to lose to the other companies that do offer the beautification filter. I want to emphasize that because, as we go on in this conversation about how we create a more humane world, what we have to look for is where there are perverse incentives and where there are arms races on those incentives. That's relevant to where AI will take the conversation next.

Yeah. I think it's a very important contribution to help people realize that not all the negative externalities caused by technology are easily reversible, and this is one of the most important points for policymakers. Many policymakers think: let's just let the technologies have fun for a while; if their technology produces something bad, like polluting the rivers, maybe we can just invent and deploy technologies to clean up the water afterward. That may work for some sorts of technologies, but for societal-scale technologies, social media being one and generative AI the other, there may be a point of no return after which it becomes very difficult to clean up the mess. So I would say that this time the leaders of the world are taking the generative AI dilemma much more seriously, partly thanks to The Social Dilemma film.

That's a great point, Audrey, and I want to agree with you on something very important, for anybody who doesn't know the language of externalities. There are profits that generate externalities on society's balance sheet, costs borne by society: air pollution, lead pollution, stuff dumped in rivers, or, in the case of social media, shortened attention spans and the breakdown of shared reality. And there's a difference between externalities that are reversible and externalities that are irreversible, or very hard to reverse.
We should be especially concerned about technologies that cause irreversible externalities. To your point, Audrey, I really appreciate you saying that, because I do think the awareness around social media has driven up concern as we approach the generative AI regulation conversation, and that's something to celebrate.

Exactly. And the climate crisis, the canonical example of something very difficult to reverse, is now also used as a metaphor, or not really a metaphor but an analogy, for the damage generative AI could do to the fabric of trust.

Following up on this, Tristan, could you tell us more about the concept of persuasive technologies?

Yeah. In the film The Social Dilemma we talk about a lab at Stanford called the Stanford Persuasive Technology Lab, where the co-founders of Instagram studied and where I took a class on behavior design; the professor, BJ Fogg, is very famous. Notably, people need to understand that it wasn't a lab where evil geniuses twirled their mustaches asking how to manipulate and persuade the world; it was actually about how to use persuasive technology for good. So what do we mean by persuasive technology? Just as a magician knows something about your mind that you don't know about your own mind, and it's that asymmetry of knowledge that makes the magic trick work, persuaders have an asymmetry of information. They know something about how your mind will respond to a stimulus: if I put a red notification badge on that social media product, people will click it more than if I put on a blue one; if I make it so that you pull to refresh like a slot machine, that's a stronger persuasive technique for keeping you coming back than if the product doesn't have pull-to-refresh.

At this lab, what they were studying was how to apply persuasive technology to help people have healthier life habits. For example, I studied with the co-founders of Instagram, and we worked together on something called Send the Sunshine. This was in 2006, before the first iPhone came out. The idea was: imagine a social network that knew about two people in different zip codes, and in one of those zip codes the weather was bad, seven days of bad weather, so the person was getting depressed. The system would text your other friend, in a zip code with sunshine, and say: hey, do you want to text Audrey, who's had seven days of bad weather, a photo of the sunshine? This was an example of persuasive technology being used to slightly nudge people who had some love and some sunshine to cheer up people who had a little less. That's a good example. But of course the way it all went down was that these persuasive techniques got applied to keeping people's attention, because the venture capitalists who funded social media, with business models of advertising and engagement, all depended on using each of those persuasive nudges for more attention, growth, engagement, and so on. That's what led to a lot of the externalities, the climate change of culture, that Audrey spoke to earlier.
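For a sense of how small such a nudge is in code, here is a toy sketch of the Send the Sunshine idea as described above. Everything in it, the weather data, the helper names, the messaging stub, is hypothetical; the actual 2006 prototype's implementation isn't described in this conversation.

```python
# Toy sketch of the "Send the Sunshine" nudge described above.
# All data and helpers here are invented stand-ins, not the original system.

BAD_STREAK = 7  # days of bad weather before a friend triggers a nudge

# Hypothetical weather history per zip code, most recent day last.
WEATHER = {
    "94103": ["rain"] * 7,
    "90210": ["rain", "sunny", "sunny", "sunny", "sunny", "sunny", "sunny"],
}

FRIENDS = {"audrey": "94103", "tristan": "90210"}  # user -> zip code

def send_text(user, message):
    print(f"[SMS to {user}] {message}")  # stand-in for a real messaging API

def nudge_sunshine(friends):
    gloomy = [u for u, z in friends.items()
              if all(day != "sunny" for day in WEATHER[z][-BAD_STREAK:])]
    sunny = [u for u, z in friends.items() if WEATHER[z][-1] == "sunny"]
    # The nudge targets the person with sunshine to spare, steering them
    # toward a prosocial act rather than toward more time on a feed.
    for receiver in gloomy:
        if sunny:
            send_text(sunny[0],
                      f"{receiver} has had {BAD_STREAK} days of bad weather. "
                      "Want to text them a photo of your sunshine?")

nudge_sunshine(FRIENDS)
```

The point of the sketch is the direction of the nudge: the system spends its asymmetric knowledge (who has sunshine, who doesn't) on the recipient's well-being rather than on capturing anyone's attention.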
Audrey, what are your thoughts on persuasive technologies?

Too much of a good thing, so much of it that we become addicted, is almost by definition a bad thing. Taiwan responded to that in our core curriculum of basic education: starting around 2019, we switched from the idea of media and digital literacy to media and digital competence. Literacy basically tries to get children into a mode of critical thinking and so on, but really good precision persuasive technology just bypasses all that; there is no critical-thinking frame that can un-addict people from these specially designed persuasive technologies. On the other hand, if children actually look at how the sausage is made, how such persuasive technology is designed, and, with tools like Scratch or a Raspberry Pi, try to build some of it themselves, and contribute to fact-checking, contribute to data stewardship, and so on, then they become immune the same way journalists become immune to one-sided conversations and outrage: because they learn the importance of checking your sources, and of having your own narrative, your own voice. We discovered that having your own voice, one with a democratic impact, is one of the most important antidotes to this kind of precision persuasive technology. To facilitate that, of course, the government needs to be radically transparent, providing the real-time data and so on that this kind of narrative needs in order to have an impact.

Tristan, what do you think of the initiative Audrey just mentioned?

I'm very proud to have introduced a lot of Audrey's work to people on our podcast. In fact, Audrey, I can tell you that the directors of Everything Everywhere All at Once became big fans of your work through listening to our interview. I say that because I have always been so inspired by what you're doing in Taiwan to show that there can be a 21st-century democracy; I think you are leading the charge in updating the spirit and principles of an open society and participatory democracy for the age of AI and exponential tech. So I love your principle of digital competence rather than digital literacy. One of the things I think we particularly need is the ability to synthesize multiple perspectives and take the side of other people's arguments; the phrase is steel-manning versus straw-manning an argument. Steel-manning should be a principle everybody knows: how do I give the best version of the argument you're making, to prove to your nervous system that I understand the point you're making, and can I hold your argument alongside the other side of it? You could even imagine AI tools that help us do that more efficiently, so that whenever you see an argument, the tool finds the counterarguments and shows them in interesting ways. And obviously that's related to your work on Polis.
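That steel-manning tool can be sketched as a one-function pipeline. This is purely illustrative: `complete` below is a hypothetical stand-in for whatever language-model API one might wire in, and the prompt wording is invented, not any real product's.

```python
# Sketch of a steel-manning aid: for any claim, ask a language model for the
# most charitable restatement plus the strongest counterarguments.

def complete(prompt: str) -> str:
    # Hypothetical stand-in: plug any LLM client in here. A canned string
    # keeps the sketch self-contained and runnable.
    return "(model output: charitable restatement, then three counterarguments)"

def steel_man(claim: str, n_counter: int = 3) -> str:
    prompt = (
        f"Here is a claim someone made:\n\n{claim}\n\n"
        "First restate the claim in its strongest, most charitable form "
        f"(the steel man). Then give the {n_counter} strongest arguments "
        "against it, each stated as its best advocate would state it."
    )
    return complete(prompt)

print(steel_man("Fragmenting the internet would reduce polarization."))
```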
Exactly, yes. Part of steel-manning is that if you're seeing with only one eye, say the left eye without the right, everything looks very flat, very two-dimensional, and the depth of the discussion is lost. Part of steel-manning is to consider the argument from the right as well, so that the vision of the left and the right becomes stereo vision. Then, while the left and right still don't see eye to eye, it becomes possible for society to reflect on what's before us with more depth of discussion.

I like that pun about seeing eye to eye. Coming back to The Social Dilemma, Tristan: are social networks solely responsible for the polarization of society, or are they merely amplifying existing societal trends?

Per our last point about complexifying the narrative and not doing single-cause-and-effect reasoning, which would be just one side of an argument: obviously social media didn't create from scratch all the polarization that exists in our society, or create from scratch the addictive tendencies that leave people alone and isolated. But did it massively amplify them, with the power of trillion-dollar market cap companies and a supercomputer pointed at the 13-year-old in your society who's sitting there alone, not knowing there's a supercomputer pointed at their brain calculating which TikTok video to show them next? Did it exacerbate those trends? Yes. Did polarization pre-exist? Yes. Does Twitter provide a decentralized incentive for everybody to add further inflammation to every cultural fault line, where the better you are at inflaming a fault line, the more of a division entrepreneur you are, the more likes and followers you're paid in? If you paid people a thousand dollars every time they added inflammation to a cultural fault line, and paid them nothing when they synthesized multiple perspectives, what would your society look like after running through that washing machine for ten years? That's the set of incentives we have laid down with social media. And again, not because Twitter is evil or Jack Dorsey meant for it to happen, but because they themselves were caught in that race to the bottom of the brain stem, on that perverse incentive.

I see: the accumulation of these unfavorable incentives results in amplification. Could fragmenting the internet be a potential solution for combating polarization?

That's a complicated question. When there are only three places for people to go, and those three places are all perversely incentivized by attention and engagement, we see the problems of that; we all know what it led to. If instead you have a thousand places all optimizing for attention and engagement, in other words still optimizing for that perverse incentive, it now happens in many more untraceable places, on Twitch or Discord or a hundred others, where it's harder to know what's going on, and so the fragmentation of society goes up. I'm curious what Audrey thinks about that.

One of the main trends we're seeing is that people are going to the places where they have more control. They go to these niche places not exactly because they're more fun or more fluid, but because, being smaller, the creator has to pay more attention to what the users actually need, and the nudges, the persuasive technology being deployed, become at least partially co-designed. That is a very different dynamic. So I'm cautiously optimistic, just as I'm cautiously optimistic about open source AI models: it creates its own dangers, but at least it lets many more people learn how to steer these issues, and see them with much more clarity, compared to having only three social media companies that are very difficult to oversee.
Yeah. Another trade-off is that when people talk in small groups or communities, they tend to have more effective conversations than when you're talking to the entire world at once, with your conversation visible to the entire world at once. So Twitter is not really about free speech so much as about its overall design, which is more like a gladiator stadium: it rewards each of us for pounding our chest and raising our spear in the air when we make a point, because we have a really big audience, and there are problems inherent to having a very big audience. Now, Audrey, I think you're working on design theory showing that you can have communication with large audiences in a more democratic, respectful, tolerant way, which is not how Twitter is designed. I'd love to hear your thoughts on that.

Yeah, exactly. The idea of plurality, or collaborative diversity, means that people talk in their smaller communities, but these communities have a defined way to interoperate with other small communities of equal status. Then the common knowledge, the generally agreed consensus across ideologically very different communities, tends to be the bridge-making statements. Bridge-making statements, such as those surfaced by Community Notes on Twitter, the one plural part of Twitter, tend to unify people toward a center of gravity, and that can create the dynamic we just talked about: steel-manning each other, looking in stereo vision, and so on. I recently visited DG CONNECT in Brussels, and I told them that the Digital Markets Act, which basically ensures that social media companies that reach a certain gatekeeper status will have to interoperate with each other through a common protocol, is one of the visions we're working toward.
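Audrey's bridge-making statements have a simple mathematical core. Here is a minimal sketch in the spirit of Polis-style group-informed consensus; it is an illustration of the idea, not the actual Polis or Community Notes algorithm. A statement scores highly only if every opinion cluster tends to agree with it, so multiplying per-group agreement rates rewards bridges and punishes statements that only one side likes.

```python
# Toy bridging score: a statement ranks highly only when all opinion groups
# tend to agree with it. Illustrative only.

def bridging_score(votes_by_group):
    """votes_by_group: list of (agrees, total) pairs, one per opinion cluster.
    Returns the product of Laplace-smoothed per-group agreement rates."""
    score = 1.0
    for agrees, total in votes_by_group:
        score *= (agrees + 1) / (total + 2)  # smoothed agreement rate
    return score

# A statement both clusters like beats one that only a single cluster loves.
bridge   = [(80, 100), (75, 100)]   # broad agreement across groups
partisan = [(98, 100), (10, 100)]   # beloved by one side only
print(bridging_score(bridge) > bridging_score(partisan))  # True
```

Sorting a feed by a score like this, instead of by raw engagement, is one concrete version of the bridging incentive discussed next.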
That's an interesting development to keep an eye on. Tristan, before we delve into AI ethics, do you have anything else to say about the documentary The Social Dilemma?

As Audrey noted, I didn't really steel-man all the positive points and developments that have happened. It's hard to remember them all, because the film reached so many places; we've been contacted by leading institutions in so many countries, by heads of state and former heads of state. One of the things that surprised me is just how unifying the issues are. People like to think there's a big debate about big tech, that the left and the right in the United States disagree about censorship versus free speech, or about how to do content moderation on misinformation. But when you frame the issues as a race to the bottom of the brain stem that is really bad for children, for example, everyone agrees. Division is a unifying issue, I like to say: no one wants systems that are built to divide us. We have found ways of talking about the core diagnosis that the left, the right, people across the political spectrum can get on board with.

Unfortunately, to tee up the next part of the conversation: the reason social media has been so hard to regulate, at least in the United States, is that it became so entangled with everything in our society, so there are vested interests. Journalism became entangled with it: you can't be a journalist except by being on Twitter, and major media get a lot of their traffic from these platforms. Campaigns and elections all run through social media. There are now parts of the national security apparatus, including open-source and signals intelligence, doing large-scale data mining on Twitter. A lot of different things are built into, entrenched in, and entangled with social media's existence, and that makes it hard to make really big changes easily and get people on board. The thing that should be easy, though, is simply saying you cannot operate the information commons of a democracy without being a fiduciary, protecting and caring for that information commons. Specifically, that means going after the engagement incentive and replacing it with a bridging incentive. If we were to pass one law, it would be to take these global information systems and have them sort for bridging, for unlikely consensus, the way Audrey's systems work. I think that would make an enormous difference, especially going into something like the election coming up next year in the United States.

Yeah, definitely. In Taiwan we're also experimenting with the idea of holding social media companies liable if they cause damage as an externality, for example by hosting investment advice from very shady people who are not professionally accredited investors. Instead of going into the takedown debate, which is a divisive issue, we simply say: no, we're not going to take it down, but if you enable scams, fraud, damage, and so on, you're also liable for it.

I think liability is one of the ways of dealing with the harms, because in any economic system, if there's no liability for externalities, they're going to keep happening. The way you correct a system that is basically a race to hide externalities is to make those externalities visible and measurable, and then attach costs to them. As soon as they carry costs, the race changes: it's no longer a race toward the incentive that pushes all the externalities somewhere else; it's a race to compete while accounting for those externalities. Who can create the most engagement without creating addiction, fraud, scams, polarization, and so on in society? That's the real adjustment we want to make to the race: from a perverse incentive to a positive incentive.
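The liability mechanism Tristan describes amounts to changing what the race optimizes. A toy model, with invented numbers, shows how internalizing a measured externality cost flips which design wins:

```python
# Toy model of internalizing externalities: once measured harms carry a cost,
# a different design wins the race. All numbers here are invented.

def profit(revenue, externality_cost, liability_rate):
    # liability_rate = 0.0 is today's default: harms stay on society's books.
    return revenue - liability_rate * externality_cost

designs = {
    "outrage_feed":  {"revenue": 100.0, "externality": 80.0},
    "bridging_feed": {"revenue": 70.0,  "externality": 10.0},
}

for rate in (0.0, 1.0):
    best = max(designs, key=lambda d: profit(designs[d]["revenue"],
                                             designs[d]["externality"], rate))
    print(f"liability rate {rate}: winning design is {best}")
# rate 0.0 -> outrage_feed wins; rate 1.0 -> bridging_feed wins.
```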
Yeah, definitely. Next I'd like to talk about the ethics of technology, especially the challenges posed by AI. To begin with, I'd like to ask Tristan: as an ethicist, what are the primary philosophical schools of thought that inform your decision-making?

I'll answer this question in an unconventional way. I actually didn't study philosophy in college; I studied computer science, and there was a major at Stanford called symbolic systems where you merge cognitive science, psychology, linguistics, philosophy, theory of mind, all together into this weird discipline. That was, I guess, the closest thing to a major that I studied. It wasn't until I was at Google, starting to think about the problems of the ethics of persuasion, that the question became real: what happens when a persuader, a computer, knows more about your psychological vulnerabilities than you know about yourself? How can a party with that asymmetric knowledge of your psychological vulnerabilities be in an ethical relationship with you? That drove a philosophical inquiry in which I largely had to be self-taught.

It brought me into an interesting space of questions. A magician, for example, is in an asymmetric relationship with their audience. One example I remember reading in this exploration: an actor does not come out onto a theater stage and give a disclaimer, "for the next two hours I'm going to move you emotionally, but I want you to know that this isn't real and I'm not actually the person I'm about to play." The audience isn't in a contract, but they know the actor is playing a role, even though they might be deeply affected by it. Magicians, though, can be so persuasive, if you've seen someone like Derren Brown, that they can actually make people develop new superstitious or metaphysical beliefs about what is possible, about changing reality in ways that genuinely feel impossible. And there are magicians who believe you should make sure people understand that you are not actually a psychic, that you don't have supernatural powers; you have to disclose that you're just that effective. I learned a lot from spending something like six years in a deep philosophical inquiry, studying cults and their dynamics, how to ethically persuade people in groups, social media. There's a whole area of ethics around transformative experience, from a Yale professor named L.A. Paul: how do you ethically transform someone into a vampire, when the vampire version of them has different tastes and preferences from the person they were before? That was the core philosophical inquiry I was in, and through it I studied all sorts of things, but existentialism and Buddhism have been my core schools of thought for thinking about technology.

Now Audrey, same question for you: what philosophy inspires you the most in your work?

It goes without saying, since I quote him so often in my interviews, that the thought of Laozi is on my mind, in particular what translates to virtue ethics, even though "virtue" in everyday English no longer means what it used to mean. There are current ways to express the same idea, for example the ethics of care, and I think this is particularly relevant to the cases Tristan has been describing. To an infant, a parent is like a magician: the parent, almost by definition, knows more about the infant's internal state than the infant's own growing psychology knows about itself. In that kind of relationship, a parent has a duty of care, which is a kind of virtue in Laozi's terms. A lot of Taoist thinking is about how not to dominate the infant: letting the infant have room to grow, being a good-enough parent rather than a perfectionist parent who pushes the child into a predefined mold, and instead finding a way to grow together with the child.
This is so critical, and if we had more time I'd love to explore it more deeply, because it is exactly a relationship of care: one party has not just more knowledge but more power in steering the outcome for the less powerful party. You can't have a system where one party has a hundred times more power and a hundred times more knowledge of the other's vulnerabilities, and its incentive is to exploit that asymmetry to its own advantage. Yet that's what the advertising-based business model is. The TikTok supercomputer is way more powerful than the computer that beat Garry Kasparov at chess, except now the chessboard is your mind. Imagine a world where someone a hundred times more powerful, with a hundred times more knowledge about you than you have about yourself, was incentivized to use that daily in a way just barely misaligned with your interests: minimally activating your nervous system, keeping you addicted, engaged, amused, while basically robbing you of all the other things that matter. That's exactly where we need the ethics of care.

Exactly, because it's called exploitation, right? The way you describe it, everybody recognizes it as exploitation of a child. We need to see the computerized systems, the AI systems, as being in the same kind of relationship, in order to identify this not just as individualized abuse but as a system of exploitation.

And imagine we replace the parents with TikTok, with the technology complex. Right now it's like having parents who want to extract as much value as possible from their children; that would be a perverse environment, but we have basically outsourced parenting to technology. Technology is at least playing a role in the development of humans; we have to see that it inevitably plays a developmental role. That is why the deeper change we talked about earlier matters: changing the fiduciary obligation of any system that plays a developmental role inside a society, from maximizing engagement or its own self-interest to being in a relationship of increasing care.

Exactly.

Thank you both for your insights. Moving on: Tristan, you argued on your podcast, Your Undivided Attention, that humanity's first contact moment with AI was social media, and that humanity lost. According to you, our encounter with large language models is humanity's second contact moment with AI. Is history condemned to repeat itself?

First let me explain. Obviously AI predates social media, so I want to make sure I'm not oversimplifying: AI systems have existed throughout our society since the 1970s and 1980s, so social media wasn't literally first contact with AI. But the AI that drove social media was a curation AI. It was just trying to pick which content, which TikTok video, which Instagram photo, to show your nervous system next. That AI was a narrow, runaway AI optimizing for a single incentive misaligned with society, and even though it was so simple, it was enough to unravel the shared reality of a lot of democracies around the world, backslide trust, increase polarization and tribalism, and cause some pretty massive effects on the mental health of youth, and so on.
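What Tristan calls curation AI is, at its core, a very small loop: score candidates with one predicted-engagement number and show the winner. A deliberately minimal sketch, with an invented engagement proxy and no relation to any real platform's code, of why a single metric is enough to steer a feed:

```python
# The "narrow AI" of first contact, reduced to its essence: show whatever a
# model predicts will hold attention longest. The scoring model is invented.

def predicted_watch_seconds(user, item):
    # Stand-in for a trained engagement model; any attention proxy would do.
    # Toy rule: louder, longer titles "hook" more in this fake world.
    return len(item) + 10 * item.count("!")

def next_item(user, candidates):
    # One incentive, one line: maximize the engagement proxy, nothing else.
    return max(candidates, key=lambda item: predicted_watch_seconds(user, item))

feed = ["calm explainer", "OUTRAGEOUS take you won't believe!!!"]
print(next_item("a 13-year-old", feed))  # the outrage wins, by construction
```

Nothing in the loop knows or cares what the content does to the viewer; that is the single-incentive misalignment in miniature.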
So the point is: here we had a very simple AI that had already run away from its creators, and once the misalignment was detected, did we fix it? Is social media now aligned? We just talked about how we want to realign it, with regulation that changes the incentive from engagement to care, but we haven't fixed that misalignment. We always say that if we made an AI, surely we would just unplug it if it were ever misaligned with society, or we would fix the misalignment. Well, here we have a test case: we rolled it out, it got entangled with core parts of our society, and it became part of our daily lives, adjacent to and embedded in the things we wake up to every day when we open our phones.

Now, thinking about large language models and second contact with AI: we call this creation AI rather than curation AI. Curation AI picks which tweet to show your nervous system; creation AI has the ability to generate any kind of content, to decode and generate language at scale. I know you had Yuval Harari on the podcast; we wrote an op-ed together in The New York Times about how this new generative AI has the ability to decode any language. In a democracy, conversation is language; the narrative of a society is language; media and journalism are language; code is language; law is language. So if generative AI can hack language, it can hack the foundations on which society depends. Whether that's code: find me cybersecurity vulnerabilities in your infrastructure. Or religion: find me a way to tie events happening in the world to an extremist religion, and target propaganda directly at that audience. Or law: hey GPT-4, find me a loophole in this legal contract; literally, help me get away with murder without it being against the law of that country. Do that at scale: when you can hack language, you can hack the fundamentals of a society. This is a really key point, because we have to be able to predict and anticipate the kinds of things people can do with generative AI. The important point we wanted to raise with the talk we gave, called The A.I. Dilemma, is that since we weren't so good at noticing first contact, we have to see what we got wrong, to get it right this time, before second contact fully takes place. And the key is to regulate AI before we entangle ourselves with it and become dependent on it in a way we're not able to change or realign, before it's too late.

From what I've heard, you believe that governments should step in and regulate AI before we lose control. Is that correct?

Yes. We will need to find a way to coordinate, specifically around the race dynamic in AI, the arms race to release AI. In social media, the race to the bottom of the brain stem was the arms race for engagement and attention; that's what drove all the climate-change-of-culture harms and externalities of first contact. With second contact, it's really driven by the arms race to deploy generative AI everywhere as fast as possible, because that's how you out-compete; that's how OpenAI out-competes DeepMind or Anthropic or whoever. And when you're racing to deploy that fast, you're not moving at a pace at which you can get alignment, safety, bias, or discrimination right; you're going to get all those things wrong. So the question is: how do we set up this race so that, again, we move from a perverse incentive to a positive incentive? That's the question, and I'd love to hear what Audrey thinks about this.
Just for the audience's benefit: the word "alignment" that Tristan is using is a technical term. It means not an AI that is more powerful, but an AI that can respond to the requirements and needs of people; that is to say, an AI that cares. We've been talking about how power does not necessarily equal care, and I think one of the fundamental errors people made in the first contact with AI, in the form of social media, is that initially many bought into the narrative that as those narrow AIs became more powerful, they would also become more capable of care. For example, if the human moderators cannot deal with the vast amount of hate and toxicity, then the narrow AI would at some point be able to fill the moderators' caring role very accurately. We have seen that this was not the case, but that's with hindsight; at the time, many people thought that power and care were the same thing. In AI jargon, many people thought that capability and alignment were the same thing. One of the most important points we can make is that these are not the same thing, and given the perverse incentives, it's very likely that they are actually at odds with each other.

Yes, exactly. To ensure that the second moment of contact with AI doesn't result in the same negative externalities, having the dialogue we are having today is crucial. I want to thank Tristan and Audrey for joining us today; we'll continue our conversation in part two of this episode. If you liked today's show, be sure to subscribe, share, and let us know what you think. See you next time on Innovative Minds.

[Preview of part two]

...or toxicity or pollution. I feel like what we need to do, and what your work is showing, is to increase the dimensionality of care to match the dimensionality of power, using technology. So it's not an anti-technology conversation; you're really showing how to do it with technology, right?

Exactly. It's like a freshly frozen ice sheet on a river, and we are crossing it. We want to pay attention, of course, to the brittle parts, so that we don't fall through into the frozen river. But we want to do it in a way that's decentralized: each person is at a different spot on the river, and we need some sort of communication so that anyone who discovers a vulnerability in the ground we're treading can notify everybody. It's a kind of decentralized scouting, so that people can understand what is most existentially risky, and then everybody pays attention and takes care of it. The ability to sound the alarm, and the ability to pay attention and take care of each other, is paramount in this day and age.

Hi, I'm Tristan Harris, co-founder of the Center for Humane Technology, and I'll see you on Taiwan Plus.

Hello, I'm Audrey Tang, Taiwan's digital minister. See you on Taiwan Plus.

[Music]

2023-09-04 12:43

