Inaugural Summit of the Institute of Global Politics: The Future of Artificial Intelligence


HILLARY CLINTON: Thank you very much to that first panel. Anyone who has taken a break, I hope they will be sure to come back for the next one, because there were some mentions of artificial intelligence in that first panel, and clearly this is a topic that keeps me up at night. I'm glad we are going to be focusing on it at the Institute, because the world is racing headfirst into a new era that will affect how we live, how we work, literally how we think and how we relate to each other. I think it's fair to say that AI offers a lot of promise, but it also poses new threats that have to be navigated. We're already seeing signs that authoritarian regimes around the world have gotten better and better at creating and weaponizing new technologies like AI to subvert, manipulate, and obfuscate. And the biggest tech companies themselves are, sadly, not just enabling authoritarian regimes; the way they act in the world, oftentimes with impunity and without regard to the consequences of the technology they control, is also seriously concerning. For years, academics and economists, some here at Columbia, have been warning that advances in machine learning and robotics will have dramatic effects on job markets. Disinformation researchers, as Joe was referencing, have raised alarms about AI chatbots propagating conspiracy theories and false narratives. So the risks are quite significant, even as the promise holds out hope that AI will make a positive difference. We have to figure out how we are going to manage the world it will create more effectively.

We have an amazing panel. Join me, please, in welcoming to the stage: Maria Ressa, Nobel Prize-winning journalist and IGP distinguished fellow; Marietje Schaake, international policy director at Stanford University's Cyber Policy Center — Keren and I are very grateful she was in our class talking about these issues last week, we're thrilled to have Stanford in the house, and her background in Dutch and European politics dealing with many of these issues is very relevant; Tim Wu, Julius Silver Professor of Law, Science and Technology here at Columbia, who has been outspoken and very substantive in raising questions and concerns about the era we are moving into; and our moderator, Nick Thompson, CEO of The Atlantic. Thank you very much.

NICK THOMPSON: Wow. Normally you're introduced by the voice of God; today we got Secretary Clinton. Slight downgrade, but very close. All right, let's get going. We have an amazing panel and an amazing audience, and I'm so excited for this. I want to break this conversation into three parts. We'll start with where we are in AI, where we think it's going, and what's unanswered about it. We'll then talk about threats to democracy and authoritarianism. And then we'll get into how you regulate it, because we have some strong and, I hope, differing opinions on how to do that. So let's get cracking. Professor Wu, to you: what is the most interesting unanswered question about AI? By that I mean, what do we not know about where it's heading — still undetermined — that matters for the rest of this conversation?

TIM WU: It's a good question, and a good place to start. The number one thing we don't know at this point is the limits — the plateaus.
There's a sense in the public consciousness that the potential of AI is unlimited, that it will exponentially change everything on planet Earth. That's possible, but the history of technology suggests that most technologies at some point reach a limit — sometimes related to a business model, sometimes just to physics, the laws of nature, other things. If you go back to the '50s or '60s, commercial airlines were beginning, and there was a sense that airplanes would just get better and better — maybe we would take airlines to the moon. But at some point it leveled out, and an airplane today is very similar to one from the 1960s, except the seats are a lot smaller. So things tend to level out. Now, some things go much further than people might have thought: general computing, which really developed in the '40s and '50s, has legitimately transformed everything. But other things peter out or reach limits, and we really don't know which this is. It's possible we'll look back in five or ten years and say, well, the chat and the images — that was extraordinary, but we didn't get much further. Maybe it's a little more like virtual reality, where people said this was going to be everything, and then we seem to have gotten stuck and it never got better. I'm not insisting those limits exist or that we've reached them. We just don't know.

NICK THOMPSON: All right, same question, Marietje.

MARIETJE SCHAAKE: What I wonder about is mostly who has agency over this technology. When I talk to engineers at tech companies, at AI companies, and I express my concern that academics don't have access to information to do independent research, or that policymakers don't have access to information to regulate properly or to understand what next steps are needed in laws and regulations, some engineers say, smilingly, "We don't know what AI is going to lead to either" — because part of the excitement for them is the unknown, the experimental quality of this technology, that it may produce completely unexpected outcomes. So ultimately the question is who has a grip, both in terms of understanding the technology and in terms of being able to handle it, pause it, steer it — not just use it.

NICK THOMPSON: Your concern is who's going to be steering it and who's going to understand it.

MARIETJE SCHAAKE: Yes — who has agency over it.

NICK THOMPSON: Okay. Maria?

MARIA RESSA: A little bit of what both said, but I'm much more focused on safety, harm, accountability, and responsibility. Think about it like this: lose the word "AI" and think "Covid vaccine." We needed a Covid vaccine right away — we were all running for it — but we didn't let the vaccine loose on the public until it was tested, even though the process was fast. The first time AI was really rolled out was social media, technology that connects us. I became a journalist because information is power, and that wasn't really factored into the equation of social media at the time. What we have seen since then is tremendous harm to society: the breaking up of the public sphere, lies turned into facts and facts turned into lies. That has affected every part of society. So the questions are: who was responsible, who is accountable, and what about the harms our younger generation is facing? And here's the thing with the next generation of AI —
— with large language models, there are so many unanswered questions. One: can it just suck up everything we are, everything we know, everything The Atlantic has pulled together and paid for — suck all of that up, create something, and give nothing back? Two: what is the damage of whatever that is? Because it's fantastic to hear "we want regulation," but then they scare you into it by saying we must get ahead of China — and China, by the way, has restrictions on it. The last thing I'll say is that we've had technology like this before: CRISPR, gene editing. We have the capacity to customize babies if we wanted to, but humanity was wise enough — America was wise enough — to put guardrails on that technology. Why are we not putting guardrails on a technology that is insidiously manipulating our emotions to change the way we see the world and, ultimately, the way we act? Sorry — anger.

NICK THOMPSON: No, no — anger is good. Anger is useful. On the question of The Atlantic: we are now blocking them, so we have put a big padlock on the barn door after the horses have well and truly left.

MARIA RESSA: And the question then is: will you be searchable?

NICK THOMPSON: We can get into that — it gets into a very complicated legal battle that I should probably not speak about on stage with a law professor. Let me ask about a theme that seemed to come up in all three answers. I get the sense that all three of you would like this to slow down. Tim raised the question of whether it might slow down on its own, because technology slows down — airlines have gotten worse; maybe AI will too. Do you all want this to relax a little bit? Marietje?

MARIETJE SCHAAKE: I think the crucial question is when an AI product should be pushed onto the market. I'm all in favor of research for the sake of understanding innovation, of driving innovation. But right now there is no public-interest process that assesses whether a new product is ready — with regard to safety, to our democracy, to national security, a big topic people are concerned about — before it simply becomes available. Should large language models be released? Should products that have hardly been tested in the public interest simply be put onto the market? The drivers behind releasing them are, of course, that interaction with these AI products improves them, so the incentive for the companies is to push them out as quickly as possible. There is cutthroat competition between the companies, so they are keen to be first with a new version, to stay relevant, to get new investments, and so on. So it's not so much about slowing down as about using these technologies in a responsible, controlled way.

NICK THOMPSON: But one could counter: they have to comply with the law, right? If you release an AI model and it violates privacy law, or it leads to a terrorist attack, you have the legal protections that exist for the internet. It's not as if they have an exemption from the law as it exists.

MARIETJE SCHAAKE: Well, I think the question is where the state of the law is, and that differs a great deal. In this country, where the biggest, most powerful AI companies operate, the law lags behind significantly. So on the one hand there are questions — and I'm happy to hear answers from people who have looked at this in greater detail — about how US law applies. The statement from the Biden administration, appropriately, is that the law applies to AI as it does to other technologies; of course the law applies to everyone equally in a rule-of-law-based society.
But I don't think there is clarity about how, and the main question — it brings me back to my first point — is who has the agency to understand. AI is fluid, in the sense that it changes every moment, and it is very individual: your experience interacting with an AI product is completely different from mine, because it learns from your previous patterns and from mine, and it will be different next week from what it was two weeks ago. This highly fluid, highly individualized, ever-changing nature makes it much harder to regulate, because how am I able to assess whether my rights have been violated — now, tomorrow, the day after? How is an agency supposed to know whether laws — take anti-discrimination laws — have been respected by these companies? Maybe there was a moment when they weren't; maybe they were for me but not for you. We have seen plenty of examples — the systemic discrimination by a number of AI applications — where laws are not being respected and upheld, but where we have a problem of assessing how, of applying oversight, and of ensuring the accountability Maria talked about. All of these core elements are under huge pressure in this rapidly evolving market of systemically impactful products.

NICK THOMPSON: I can see that both Maria and Tim have things to say, but since Maria was name-checked, she gets to answer.

MARIA RESSA: Let's connect it with the last panel, on the economy, and how they brought up technology as a disruptor. I feel sorry for governments that have to communicate with their people, because of the design of the social media platforms. The world's largest distributor of news is Facebook, now called Meta — funny, from 2016 to today we've changed names but not much in terms of design. Lies — this is a 2018 MIT study — spread six times faster than facts. I'll tweet it so you can see it. That, to me, is the upside-down world; that's what turns everything upside down. So think about the incentive structure, since we're thinking economy: the trickle-down effect didn't trickle down fast enough, and the incentive structure is to inflame fear, anger, and hate. We have certainly seen the impact of that. I felt it: I was getting ninety hate messages per hour because I was targeted by information operations — geopolitical power comes into that. What did they conclude in the last panel? That the systems of today do not have the mechanisms to deal with the world of today. I would say the same thing: our laws today are inadequate. The EU is starting — the DSA is out now — but it is inadequate to deal with this, because what is driving all of it is profit. It is surveillance for profit; it is data and technology, and governance needs to come in. The last thing I'll say on this: I was in Paris early this year when we realized that 60 percent of the world was under authoritarian rule last year; this January it was 72 percent. And I was on stage with Chris Wylie, the Cambridge Analytica whistleblower, and he said that in America a toaster has to go through more safety regulation than the software that tracks you, that is with you everywhere you go. That's the safety point. So I'll go back — sorry.

NICK THOMPSON: No, that was great. I had an idea for an experiment: you tweet "lies spread six times faster than truth," I'll tweet "lies spread twenty times faster than truth," and we'll see who gets more retweets. Tim?

TIM WU: That's a good experiment.
You know, I think I might dissent slightly. I very firmly feel the government needs to have the resources and power to do what's necessary, but I do feel we are missing a lot of information needed to act well in this space. And I share the sense that slowing down might be good, not least because the creators of these technologies seem very concerned about them. Not since nuclear fission do I think I've seen so many people who created something say, "I'm not really sure how I feel about this; in fact, it might be terrible." That makes me nervous. But — and this might represent a slight European-American split — I think the law does better in response to known harms that it understands. The legal system is a blunt instrument, and so is a regulatory instrument, and it could be that you get completely obsessed with what we think AI is right now, do something, and it ends up being quite random and in fact counterproductive. There is a lot we don't know. We don't yet really have a commercial application, for example; we don't understand the business models. So while I think we moved too late in areas like privacy — where the United States has still never quite gotten there, California excepted, and some other states — there is also the chance of firing too fast. And I'm not saying this because "oh, that'll put America behind," but because you do a bad job when you don't know what you're doing.

NICK THOMPSON: How do you weigh it when all the AI company leaders are out there saying this could lead to the end of humanity? How much of that is sincere concern, and how much is trying to provoke the government into overregulation to lock in their power? That's my conspiracy theory. Too strong?

MARIETJE SCHAAKE: It's obvious economic interest, right? That is my deepest concern. In my heart I'm an antitrust kind of person. I believe that excessive monopoly power — market power — is a threat to many values, including democracy, and it is very obviously in the self-interest of the most powerful tech platforms, which are starting to see the first challenges to their business models.

TIM WU: Speaking as a former Biden administration official: we also have them in court now — Facebook, Google, and Amazon; Amazon is in court — trying to break them up. So they're starting to feel the heat a little bit.

MARIETJE SCHAAKE: And there are obvious threats from AI to the business model. The best thing to prevent competition is to have the government on your side and a licensing regime where just a few guys get to play. I don't think the scientists individually have this motive; I just think that kind of overregulation is very dangerous. But I wanted to pick up on another point Tim made. Yes, regulation is often a response to known harms, but regulation is also a solidification and enforcement of known, agreed principles — non-discrimination, to keep with the example, but also the antitrust you just mentioned. A lot of principles that have been uncontroversial and enshrined in democratic societies for decades, if not centuries, are under pressure in new ways, and that requires new action — not so much to do new things as to prevent established principles from being eroded.
Now, of course, I am equally skeptical — almost cynical — when I hear the very CEOs warning about technologies they are driving at the speed of light to bring to market. If you're really so concerned, take your hands off and wait a little before pushing out these products. And indeed, licensing with the companies would be a drawbridges-up approach, where the first past the post, those who reach a certain scale, can say: this is now the norm of quality, give us a license — and all the others can pack up. I don't think that's where Europe is going, if that was in your question. The AI Act is designed in a different way. It's the law being finalized right now, a comprehensive AI law in Europe, which has promise and also has problems. The choice the regulators made was to build this law — it has a risk-based model, similar to what NIST does in the United States — on top of existing policy frameworks. The idea is: we know how to scope products for risk, so we'll make a scale from low risk to high risk and prescribe mitigating measures accordingly. More mitigating measures and obligations on companies when they create products that pose high risk to people's liberty, to people's access to education, to people's access to labor, and so on; fewer obligations when it is, say, a customer-service chatbot. The flaws of that model were immediately exposed when generative AI came to market, because this law had been in the pipeline for over two years, and European regulators were scrambling: what do we do now? Here is the challenge: by choosing to regulate on the effect side — the downstream effects, where the risk gets created — rather than the technology as such, there is a friction, and they're trying to solve that now; we'll have to see how it comes out. The thing that makes me hopeful, and that also addresses some of Tim's points, is that an AI board is foreseen: a group of experts that looks out for new developments. We all know we're talking about generative AI today; in two years it will be something else, and the law cannot follow the technology to the letter — it just cannot work that way. So this AI board is supposed to watch for new developments and assess where they should fall on the scale of risk. A model with a designated body — it could be a regulator — that looks out for emerging technologies and their effects on the core principles we have anchored in law, and then assesses whether something falls within the high-risk category, or within the AI Act at all, or is rather a data-protection issue: that is the kind of model I think we will see more of. Less strictness in the letter of the law; more empowerment of regulators — more mandate, more skills, more agency to assess what new technologies mean in light of those principles.
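[A schematic aside: the tiered structure Schaake describes can be pictured as a simple triage. The sketch below is a hypothetical illustration only — the tier names follow the Act's public drafts (unacceptable, high, limited, minimal risk), but the keyword matching and the summarized obligations are simplifications invented here, not the Act's actual legal tests.]

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names follow public drafts of the EU AI Act; the obligation
    # summaries are paraphrases for illustration, not legal text.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency duties, e.g. disclose that users face an AI"
    MINIMAL = "no new obligations"

# Hypothetical shorthand for domains the Act's annexes treat as high-risk.
HIGH_RISK_DOMAINS = {"education", "employment", "credit", "law enforcement"}

def classify(use_case: str) -> RiskTier:
    """Toy triage of a described use case into an AI Act-style tier."""
    text = use_case.lower()
    if "social scoring" in text:   # an example of a banned practice
        return RiskTier.UNACCEPTABLE
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if "chatbot" in text:          # customer-service bots: transparency only
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot used for employment screening"))  # RiskTier.HIGH
print(classify("customer service chatbot"))               # RiskTier.LIMITED
```

Note how the downstream framing Schaake mentions shows up even in the toy: the same underlying model lands in different tiers depending on the use it is put to, which is exactly the friction generative, general-purpose systems created for the Act.]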
NICK THOMPSON: I want to go to you, Tim, on where — upstream, midstream, or downstream — the most effective place to regulate is. But before we do that, I want to ask Maria about the actual effects on democracy. You've been living through the battle between authoritarianism and democracy, and you've been watching it across the world better than anybody else. What is the most specific way new generative AI can undermine democracy and empower authoritarianism? That will help us understand where we want to put the regulations.

MARIA RESSA: We can lose democracy by the end of next year — how's that? Look, when we counted the elections from early this year through 2024, we saw about ninety elections. What's the critical factor? We're being insidiously manipulated. It isn't honest mistakes; it's not misinformation. It is disinformation; it is information operations; it is the way the platforms are designed — and this is still social media — and the way geopolitical power has come in to insidiously manipulate you through your emotions. And the smarter you are, sometimes the harder you fall. So if we are already at 72 percent authoritarian, and you do not have integrity of facts — which is where we actually are — consider that one algorithm created the polarization you talked about earlier: the friends-of-friends algorithm that every social media platform uses. What does that mean? They A/B-tested this and found that if they recommend friends of friends, you are more likely to click, to join, to grow your network — and when you grow your network, they grow their platform. And this is from the Philippines. 2014 was when the information operations began: the Russian disinformation that accompanied the annexation of Crimea, used eight years later in the invasion of Ukraine itself. 2014 was also when the Marcos information operations began, to change his name from a kleptocrat kicked out by People Power in 1986 into "the greatest leader the Philippines has ever known." 2016 was when the political dominoes began to fall: Duterte elected in the Philippines in May 2016; about a month later, Brexit; and then all the elections down to something Hillary Clinton knows very well, the 2016 elections here in the United States — you know that 126 million Americans were touched by Russian disinformation. So what does that mean if you don't have integrity of facts? Let me show you what happened in the Philippines. If you were pro-Duterte in 2016, you moved further right; if you were anti-Duterte, you moved further left. And this continued over time, because it is a growth algorithm applied to millions and tens of millions of people — 3.2 billion, at that point, on just one platform. I had more than ten arrest warrants in 2019, and I could have gone to jail for over a century — I turned sixty yesterday, so I wouldn't have lived all the way through it. But we did stand up for principles, for values, and now, in 2023, of the charges from 2016 to 2023 I have only two criminal charges left. There is still an overhang, a sword of Damocles hanging over Rappler — we could get shut down any day; we will fight it. But why does it take so much from journalists to fight something that simply makes more profit for these companies and has, frankly, insidiously manipulated everyone on social media? Your question was what generative AI will do: it's going to get worse. Your elections are coming. Look at Taiwan in January — Taiwan has already said it is being hit by Chinese disinformation. Indonesia, the world's largest Muslim population: the front-runner, if the elections were held today, is the son-in-law of former president Suharto, who was in power for almost 32 years. And now we have Twitter turned X under Elon Musk. When the MIT paper was done in 2018, lies spread six times faster; I would love to see that study done again today, because I'm sure they spread even faster now. The safeguards that were put in place after 2016 — and we haven't even talked about gendered disinformation and how that tears down democracy — were not enough; I know this personally, and I can give you all the data. This is what kept us going. With generative AI it gets worse: you can have a video of you saying things you never would have said, and by the time it has been released into the public sphere it has done its damage — your fact-check is not going to get the same distribution. So buckle up. I think Keren said it: we must act now.
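[A schematic aside: the "friends of friends" growth mechanic Ressa describes can be sketched in a few lines. Real platform recommenders are proprietary, A/B-tested systems running over billions of accounts; this toy merely ranks second-degree contacts by mutual-friend count, and every name in it is made up.]

```python
from collections import Counter

def friends_of_friends(graph: dict[str, set[str]], user: str, k: int = 5) -> list[str]:
    """Recommend up to k second-degree contacts, ranked by mutual friends."""
    counts: Counter[str] = Counter()
    for friend in graph.get(user, set()):
        for candidate in graph.get(friend, set()):
            # Skip the user themselves and anyone already connected.
            if candidate != user and candidate not in graph[user]:
                counts[candidate] += 1  # one more mutual friend in common
    return [name for name, _ in counts.most_common(k)]

toy_graph = {
    "ana":  {"ben", "cara"},
    "ben":  {"ana", "cara", "dan"},
    "cara": {"ana", "ben", "eve"},
    "dan":  {"ben", "eve"},
    "eve":  {"cara", "dan"},
}
print(friends_of_friends(toy_graph, "ana"))  # ['dan', 'eve']
```

The growth incentive Ressa points to is visible even in the toy: every accepted recommendation adds edges to the graph, which creates more second-degree candidates for everyone else.]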
NICK THOMPSON: My thirteen-year-old son made that very video last week; we were having a good time with generative AI tools. So this doesn't sound great, Maria. [Laughter] Tim, where is the proper place to change policy so that everything goes a little better? Do you want to set the market conditions so we have a fully competitive market? Do you want specific ways to regulate the specific companies in the middle of this? Do you want specific policies — say, every AI has to declare whether it's real or not? Or do you want to enhance the laws that exist right now so they apply to AI? Where along this stream of potential places to regulate is the most important place to go?

TIM WU: I appreciate that. I feel there is both too much and too little going on at the same time in terms of regulatory activity. To pick up on what Maria was saying, there is much too little attention to known harms — human impersonation being the most obvious example. I can't think of anything good that comes out of AIs pretending to be humans, and it is an easy way to attack a democratic system: democracies depend on human feedback, and if anything can pass as a human, that is a huge problem. A tiny version of it is when comment systems in the US federal government get overwhelmed by bots pretending to be people. Machines pretending to be human voters is obviously terrible, and machines pretending to be humans who support a candidate, misinformation, disinformation — none of that is good. We do way too little about misinformation, way too little about fraud, way too little about attacks on election integrity — maybe we've gotten a little better. So the known harms get trapped behind this weird conversation — which is a lot of fun, admittedly — about whether AI is going to become intelligent and do some kind of Terminator thing, while the known harms, like attacks on elections, get not enough resources, not enough time, and not even enough legislative activity. I think there is a misallocation of thought in this area about what we should be doing. The bigger question — a debate we often had on the competition side in the White House, and a really hard policy question — is whether we think generative AI is more like nuclear fission or more like general computing, both 1950s-60s technologies. If it's more like nuclear fission, you don't necessarily want a lot of startups getting into it and selling whatever they come up with to anybody; in fact, we locked down nuclear fission after we saw some of the results of atomic weaponry.
On the other hand, general-purpose computing — semiconductors, transistors — was also very disruptive and ended up changing the world in powerful ways, and by the 1970s it was dominated by IBM and maybe a few other companies. There we took a different approach. We said it is our comparative advantage as a country to break up these monopolies: we broke up AT&T, we almost broke up IBM, and we gave a little room to tiny startups like Apple, to a weird dude in Washington state, Bill Gates, with his weird little company, to Sun Microsystems — no longer with us. We took a very different path, and, speaking economically, it played to the comparative advantage of the United States, which has always been in small inventors — allowing a lot of things to happen and ultimately coming up with better products. Meanwhile Japan and Europe said, no, we're sticking with the big guys. Japan doubled down on its biggest computing monopolists and kind of missed the boat; Europe has some very good tech companies, but all of the American tech giants came out of that open competition. So it's a hard question either way. Obviously I don't want to be the person who prescribes the policies that lead to the takeover of humanity by robots — which is why I think we need to figure out what we're dealing with here.

NICK THOMPSON: I've listened to a bunch of your podcasts. I would have guessed pretty strongly that you're on the side that it's less like nuclear fission and more like personal computing, and that therefore we should let lots of startups go.

TIM WU: I think so, yes. I'm just putting a caveat on it.

NICK THOMPSON: Marietje, which do you think it's more like — personal computing or nuclear fission?

MARIETJE SCHAAKE: Well, consider simply the ingredients needed to produce generative AI: only a handful of companies have them. Even if you allow a gazillion startups to bloom right now, they can't. And you could argue that this is the product of a lack of competition intervention and a lack of data-protection laws — this whole scraping model, where companies simply hoover up all the data that is available, in your publication or those of others. So I'm inclined to seek ways to get research — academic, independent research — into what AI and generative AI truly are. I think it's hard to compare them to anything that came before, and if we insist on analogies, we confine ourselves in imagining what the impact of the technology could be — and what the solutions should be. There is a risk in working only with the policy instruments we've known, which I see happening over and over again. I would love much more creative, out-of-the-box policy entrepreneurship that truly answers the challenges of today, rather than opening up the toolbox to see what we already have — because that is what's happening, and it's probably inadequate.

NICK THOMPSON: You're known as a particularly creative legislator, and you have a wonderful reputation for being deep inside the technology. What is the most creative idea you've heard for regulating AI that you're drawn to?

MARIETJE SCHAAKE: You know, I wish I had heard more ideas.
I think we live in a very peculiar moment, and I've never seen this before: we can safely say that all around the world — the city council of New York, the state of New York, the federal government of the United States, the United Nations around the corner, the EU, the Netherlands, ASEAN, the OECD — AI and generative AI is on the political agenda. There is enormous political will on the part of political leaders to do something, and, I think, an enormous vacuum of ideas about what to do. One of the problems — and I come back to the first challenge you put before us — is that we are incapable of independently assessing what we are really dealing with. I work at Stanford with people who build foundation models, who are educated to go to the big tech companies, who are the top academics in the field, and they do not have the access to information needed to do research on models like the ones OpenAI or Google present to us. There is essentially no public-knowledge equivalent of what the companies know, and that creates a problem for any well-informed public-policy debate, for any ability to take the proper steps and weigh the trade-offs. Politics is all about trade-offs — that's why I sometimes smile when people ask whether I'm for or against regulation; it's a pathway that can lead to a gazillion destinations. We owe it to ourselves to make the discussion about tech regulation more sophisticated: what do we want to regulate for, and do we know what we need to know? Perhaps the first step is to have enforceable transparency provisions enshrined in law. What has happened in the EU with the Digital Services Act is exactly that: a provision that guarantees academics access to information on social media platforms. Besides putting new obligations on platforms for content moderation, which is what that law is about, it also addresses how academic researchers can in turn examine what the companies are doing. That kind of dynamic is what we need to build in systemically, so that we can make better decisions.

NICK THOMPSON: A good point to make here at Columbia University. Maria?

MARIA RESSA: A simple solution on this. Right now, agile development — the way code is rolled out — operates in two-week sprints. If you create a law, it takes three or four years, maybe more; the tech companies will have evolved every two weeks. We just rolled out an app. It's an alpha; we're testing it ourselves, and it's not released to the public, because we care about harm to the public. Why can we not prevent public rollout until there is transparency and safety? At the very least, if they're experimenting — you don't have to stop experimentation — make them liable for it. Right now there is impunity in rolling code out to the public. I'll go back to my coronavirus vaccine: I'm going to give vaccine A to this side of the room and vaccine B to that side. Vaccine A people — sorry, you died, apologies — but we have vaccine B. Is that worth the deaths? And I'm not being hyperbolic — it's not hyperbole. The Myanmar genocide happened, and both the UN — Marzuki Darusman, formerly of Indonesia's human rights commission, led a team there — and Meta, which sent its own team, came back saying the platforms played a role.
So that's an easy one: prevent the rollout to the public until we know what it is. The other is in the DSA: give us a real-time data feed, because right now the tech companies prevent each of us from combining the data that they themselves combine with impunity. If you're an academic who scrapes it — Laura Edelson, right? we know this — you get threatened with a lawsuit. Why is it okay for the tech companies to combine data from different sources while preventing users from doing the same? Once we get real-time data access, we can pull it up and see the trends. Is it harmful to kids? Is there coded bias — bias against Black women, women of color, LGBTQ people? These are things we cannot see by talking about them. We must see the data.

NICK THOMPSON: Tim, is Maria right? Has she solved it?

TIM WU: Yes, she's absolutely right about everything — all our problems solved. I do have what I think are some creative ideas for legislation, where maybe I'm a beat early, but I do think this problem of human impersonation should be taken more seriously. Maybe it's a transparency measure, I'm not sure, but AIs should proactively announce that they are not humans. It's not entirely new — I stole it from a movie called Blade Runner, which had that same law. And — some disagreement between friends — I do think it's important to look back at history, at what has worked and what has not, for our lessons; I think we'll do a better job that way. I'll also say, in mild dissent from "you've got all the answers": it isn't the case that there would be no remedies if an AI were harmful. We still have the laws. ChatGPT and some of these LLMs pretty quickly started to defame people. You could ask them about someone — I don't want to defame anyone on stage, so I won't give examples — and they would say things that the law considers defamation per se: someone was a criminal, someone had a loathsome disease, someone is bad at their job; those are the categories. They have been sued, and the plaintiffs might win. So there is an underlying framework. I wouldn't count the courts, or existing law, out entirely — if you did have that vaccine that killed half the room, there's murder law, there's genocide law. But the regulatory conversation needs to start from the understanding that there is an existing legal regime dealing with certain harms, and ask: are there kinds of harms, or protective measures, that it doesn't cover? That's where the good work will be in this space. The good work will not be a law vaguely banning robot takeovers or other abstractions; it has to target where the harm is happening, and a good example for me is human impersonation — we almost need a robot code. Maybe the FTC can do it; the FTC is already trying. My former colleague and friend Lina Khan at the FTC is trying to develop the tools to deal with discrimination by AI, but they need a little more room — and frankly more money, more resources — to keep up with these harms. I am a strong believer in tough regulation of known harms, and we're already seeing some of them. We're not doing enough.
NICK THOMPSON: All right, Marietje, let's wrap this up. Can you evaluate, thumbs up or thumbs down: Maria's proposal that we have a two-week waiting period, Maria's proposal that we have total data transparency and sharing, and Tim's proposal that all AIs have to declare what they are? Which of these regulations do you like, which do you not — and then you can add one to the mix and we'll call it a day.

MARIETJE SCHAAKE: I think both are very good, but they each get at parts of the problem, and I think we need a more systemic way of looking at how AI works, and the ability to deal with it in the public interest. A delay before pushing to market is a great idea, but then we also need regulators with the proper means. Tim said they need a little more resources; I think they need exponentially more resources. Somebody has to make that case — maybe it should be a tax on AI companies, going specifically into the public purse to build these capabilities — because the asymmetry in information, the asymmetry in budgets, and therefore the asymmetry in power is so significant that it needs to be tackled too.

NICK THOMPSON: All right. Well, I feel like the asymmetry in my understanding of how to regulate has shifted a little bit. Thank you to this amazing panel — these wonderful people who are all doing so much good in the world. Thank you.
