Should we pump the brakes on AI? | Ep. 33

Elon Musk and other AI experts are calling for a pause in the development of powerful AI tools. But is this a case of trying to put the genie back in the bottle? We'll explore this next on Today in Tech.

[Music]

Keith Shaw: Hi everyone, welcome back to the show. I'm Keith Shaw. An open letter coordinated by the Future of Life Institute, which lists Elon Musk as an external advisor, calls for a six-month moratorium on the development of generative AI tools like ChatGPT, in order to give the industry time to set safety standards for AI design and to prevent potential harms from the riskiest AI technologies. Joining me on the show to discuss this idea is Jason Mars, PhD. He's an associate professor of computer science and engineering at the University of Michigan and co-director of U-M's Clarity Lab. He was on our show about a month ago, and he's one of my favorite guests. Welcome back to the show, Jason.

Jason Mars: Absolute pleasure to be here. Awesome topic, very interesting.

Keith: You were telling me before the show that you received an invite to sign the letter. So give me your thoughts: do you think this is a good idea, and what did you think when you received the invitation to sign it?

Jason: I received the invitation a couple of days before it hit the news in a big way, when Elon Musk and Steve Wozniak signed on. I didn't sign it. There are a lot of interesting, and somewhat valid, concerns around how this kind of technology is going to shape reality. However, I believe it's in part an overreaction, it adds to the fear-mongering, and it's also somewhat misplaced. There are a lot of downstream implications to any attempt like this, which could only be implemented through congressional, legal means; policy for the nation is the only practical way you could implement it, and the downstream effects are incredibly disruptive in and of themselves, and also ineffective. In the realm of software, where basically all of the creations come from human minds, you can't really put a moratorium on the things people invest their time in and develop with mass access to technology. It would be impossible to define what a large language model is with respect to the kinds of things that are dangerous. It would also put a pause on the development of a technology that many folks globally are developing, which would stunt us. We're already at risk of America losing its monopoly, or its advantage, when it comes to technological development in the world economy; this would stunt our ability to be relevant. We're already losing the social media game, with TikTok being one of the largest platforms in America, surpassing all of the copycats trying to replicate it here. So you can imagine what an opportunity this would create for other nations to advance and overtake the technology, especially given that it's public how it was built.

Keith: If this were a US-based moratorium, I'm pretty sure China is not going to sign it, and other nations developing AIs are not going to jump into this either. What are their concerns about the risks of this race? Because my first impression, with Elon Musk and Steve Wozniak, was that it felt like, "whoa, whoa, guys, we weren't involved in this, so we want a say." It just felt like, look, the race has already started; you can't pause a race in the middle, you just have to keep going.

Jason: Exactly, this train has left the station. There are two ways you can think about these problems. Let me first state that there are potential risks, and then I'll talk about the approach I subscribe to for reasoning about what to do given those risks. The risks are real. Right now we live in a world where people consume the information they would like to consume, and the more interesting and bombastic that information is, the greater the propensity to believe it. Now we have a step function in our ability to fabricate things that look real; these models are trained to mimic what they have been trained on as real, and we're seeing this in many modalities. I recently saw a video where someone was using Obama's voice to say whatever they wanted, and it was rendering, in real time, a voice that was convincingly Obama. Could you imagine someone mimicking Biden's voice and calling on all Americans to do something hateful and atrocious? So you have an opportunity for misinformation, for fabrication, and to mislead the public. But this isn't novel. We've had Photoshop for a very long time, and there wasn't a moratorium on Photoshop. We've been able to fabricate photos for a long time; we can just do it a little more convincingly, and across more modalities, now. People are likely to misuse it, and we have to educate the public to be even more thoughtful and suspicious of what they see. There's an education journey there. So that's a real risk. There are also more economic risks.

Keith: There have always been bad actors with whatever technology comes out. Like you said, we didn't ban Photoshop because some people were putting somebody's head on someone else's body. You don't halt AI just because someone's using it to fake Obama or Biden or Donald Trump; it's more about educating the users. So is that their main reason? Because there's a quote here I wanted to read you, from one of the organizers of the letter: "It is unfortunate to frame this as an arms race. It is more of a suicide race. It doesn't matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny." When you talk like that, you're talking killer robots and Terminators, and we're doomed because AI is going to get smarter. And it's like, whoa, that's not what this is doing, right?

Jason: Exactly, exactly.

Keith: Maybe I'm just naive, but it does feel a little bit like the sky is falling with this group.

Jason: Yeah. I mean, this is what gets clicks and this is what gets attention, especially when it's coming from folks like Elon Musk. But the real issue is that it's absolutely not the case. We are going to have to adapt. There are two approaches. There's prevention, where we stop progress: let's stop it, figure out what to do, and then we'll unpause progress. But really, the attention and the energy should go into adaptation: how do we now live in this new world, how do we anticipate how things will change, and how do we stay ahead of the implications insofar as some might be dangerous? I think there are implications that are more important than the misuse of information, implications that are much more foundational. For instance, there are a lot of specialized skills in the world. We've got journalists who win Pulitzer prizes for how they craft their stories. That kind of thing needs to be re-understood, because the barrier to entry for writing a phenomenal story is going to drop; more people are going to be able to access the help of some of these systems to write incredibly brilliant stories. That applies on an economic scale. The wages certain employees earn are high because of their specialized skill, and now more people will be able to do that thing, which might create differences in demand and supply. Those are the kinds of interesting questions I would want leadership to be pondering: how do we stay relevant, and how do we understand, five years down the line, how this is going to change the economy and so forth? But this knee-jerk, sensational reaction is not useful.

Keith: What concerns you about AI at the moment? Obviously you're an enthusiast of this idea and of a lot of these tools that come out. You mentioned on the last show that you use it all the time in your programming, when you're coding, and that it helps you become more productive in the work you're doing. And I think you said you're using it in your classes as well, to teach, and your students are allowed to use it as long as they reference it, right?

Jason: Absolutely.

Keith: So what are your concerns about some of these tools at this point? Another big concern I've heard is about explainability: a lot of people still don't understand why the AI spit out what it did. Explainability has been explained to me for three to five years now and I still don't understand it. If I still don't understand it, it's probably an issue, right?

Jason: Absolutely. Well, the thing is, my concerns are really manifested in what we're seeing happen currently. My concern is that most people won't understand it. Congresspeople don't know what a website is; "is YouTube a website?" Folks don't understand what's going on, and so bad choices might happen. Sensational things like this may cause a congressional hearing, Elon Musk might be invited to say something, votes get passed, and then the dynamic stops.

Another really important thing to understand about AI, which a lot of folks don't, and this might sound like a vote toward a moratorium, which I still think is crazy, is this. When we, as a community of scientists, work in the art and discipline of innovating on AI models, on deep learning models, that kind of science is becoming more of a discovery science, understanding nature, than an engineering science where we're trying to build something to meet a goal. A lot of the time, when we build these models and train them in various ways, we actually don't know what the model is going to be able to do. We'll have some thoughts as to why this mode of training and this style of neural network should have the capacity to do something, but once we're done training, we've been blown away. It manifests in the papers themselves: they'll often say things like, "we never expected the model to be able to do this, but look at what it did." The GPT-3 paper as published doesn't talk about it being able to write code, but we discovered after the fact that it can actually write code. ChatGPT was primarily motivated as an effort to create a more politically correct GPT-3, one that's unbiased and doesn't say offensive things, but it actually produced a conversational AI that did things way beyond what I expect the creators anticipated. So it's very much a discovery science, and who knows what we'll discover these models can do once we tinker with them.

Keith: Do you understand that that's what could freak some people out? If you told me, for example, that you engineered and designed a train, and then you wrote something that says, "yeah, we were really surprised when that train started to fly," you'd go, wait a minute.

Jason: Exactly, exactly.

Keith: So maybe we should pump the brakes a little bit? Or is it something that maybe can't be understood? Because when I try to read some of these papers, it goes right over my head. I'm hoping there are people smarter than me reviewing these things. When you read that, were you like, "wait, what, you were surprised it did this?"

Jason: Yeah, a lot of it is actually breathtaking, even to the researchers, in terms of what these technologies are able to do when trained. The fundamental difference, when it comes to the practicality, from a societal perspective, of taking an approach of prevention or of slowing down this kind of progress, is that it's fundamentally impractical. And it's impractical in ways where the same approach would be more practical for, say, nuclear engineering. There, you can say "we definitely have to put a moratorium on producing and splitting plutonium." But there's a difference in the digital realm, where everyone has the tools. If everyone could dig in their backyard and access plutonium, putting a moratorium on the whole world doing that wouldn't work. With software, we're in that realm: people will do it underground, and then it'll be even more dangerous.

It's almost like people can do it in secret and then produce false information much more effectively because it's done in secret. China and Russia can produce fake media that's epically more real, and if you have a stunted America that doesn't even understand this technology because we put a moratorium on it, we're more at risk. So there's a practical element to that, and this is where prevention versus adaptation is an interesting debate. I just had a very deep debate with a climate scientist, one of the best in the world. He was over for dinner and we were talking about climate change, and he said, absolutely, this is a real thing, it's absolutely produced by humans, it's crazy. But he thinks it's irrational to believe we're not going to raise the temperature by 1.5 or 2 degrees; that's going to happen. Everyone is focused on preventing it from happening, when we should be understanding how we adapt societies to live in a world that's two degrees hotter. We don't hear an adaptation focus in public policy, because public policy loves "now we all have to pause the world." That's almost more appealing to the psyche than what's cognitively harder, which is re-engineering the world under the assumption that these things are going to happen. But that's the right approach, because if we fail to prevent something and we're not prepared, we're at far more risk than if we use our ingenuity to re-understand what the world should look like.

Keith: This letter did raise awareness of all this; it was high profile and, like you said, it's going to generate clicks. Do you think this turns into something where a conference is held, or some sort of high-level discussion, and people start talking about it? That still won't prevent others from simply not attending and doing what they want to do. And now you've got big companies like Google and Microsoft involved, and they're saying, well, we're doing this because we want to stay in business and we want to help our customers get better results and better answers. It's almost like trying to get people to agree on a common standard for networking, for example, or on interoperability issues, and that took years for people to come together on. Do you think this might lead down the road to a standard for AI development, or something like it?

Jason: Yeah, absolutely. The interesting thing about what's happened is that the AI is here now, and now we have to figure out what to do. You mentioned conferences. There are a lot of different viewpoints and forecasts about how the world changes. Do we end up with a centralized AI? There's a lot of funding of companies, and a lot of thinking, around the idea that the future is a set of centralized AI resources, regulated by the government, consumed by society like a utility, because these models are really expensive to train, they're huge, and it looks a lot like power delivery over a grid. I don't think that's going to happen at all; that's very much thinking in the now.

I won't say all of the computer scientists in the world, but much of the energy of the global computer science community is going into understanding how we make these technologies smaller, more wieldable, easier to train, faster to train, and just as smart with less power and less energy. That's our collective focus. The first computer was two buildings big before we had the PC, and now we have phones in our pockets. Society will figure out how to take these technologies and wield them. So I don't think it's practical to assume we won't end up with decentralized, democratized access to AI. But the high-level point is that what you suggest is exactly the kind of energy we should be exerting. We should have special conferences, we should have new kinds of think tanks, dedicated to answering the question of how society will change, because deriving the solutions will then be a straight line, in my opinion. It's about making that prediction: given what we know about how technology evolves, and given what we observe about the new digital ecosystem, where everyone gets their news from social media and so on, you put these interdisciplinary fields of study together to predict what the ten changes to the world will be ten years from now. Having the best minds think about those changes is the first step; then, as we observe the living system develop naturally, once we understand those ten things, we understand how to adapt the world to them. I think the best luminaries shouldn't spend their time writing open letters that excite dramatic fear. They should be organizing that conference. Elon Musk and the organization whose name slips my mind...

Keith: The Future of Life Institute.

Jason: The Future of Life Institute. They should be organizing that conference now, and getting people to sign up to attend and present. This is going to change the world, as opposed to drama.

Keith: I like what you said about it not being a centralized AI. I tend to think that maybe centralized would be better, but then I start thinking, wait a minute, down the line everyone might have their own individual AI in their pocket, which again saves power, saves resources and processing time. But then I start thinking, well, then all of these little AIs are going to start fighting with each other and we're going to have a big AI war, and my mind goes off into science fiction again. A lot of things to think about. And again, I think the education part of it is so important as well: reaching people who might not understand what an AI is at the moment and telling everybody, this is what's out there, this is what might happen, this is what could happen. Be alert about every picture you see and every video you watch; it might not be real, and it may have been created to fool you. We've got April Fools' Day coming up, so I can't wait to see what comes out tomorrow. Speaking of that...

Jason: Totally.

Keith: Speaking of that, there was the whole news story about the AI image fakery, with Donald Trump allegedly getting arrested, those photos, and then last week the Pope. I fell for the Pope one. I don't know if you did; I just thought, that's cool.

I'm always on guard about political images and political things, because I know there are people working on that. So I'm ready: any image you send me of a politician, I'm like, all right, I think that's fake. But a pope wearing a cool jacket? I was like, oh yeah, that's great, that's hilarious.

Jason: Just last night I was chatting with my wife, and I said, you know what shocked me? There's this Nashville shooting, an epic tragedy, and there's a lot of interest in seeing what's in the manifesto, because we want to understand what caused it. And I said, you know, Lingjia, they're not releasing the manifesto; just count down the minutes and hours before someone leaks a fake manifesto generated by AI. And that could cause even more issues than if they had released it.

Keith: You're absolutely right.

Jason: But you see, the insight needed to even make that prediction is what we want every American, and every person in the world, to have, so that when it does happen, they first ask, "is this real or fake?" instead of taking it as whole cloth. When I saw those pictures, that's exactly the kind of thing that's causing societal anxiety. It was a very compelling picture of the Pope, and Trump being arrested was incredibly timely; everybody wanted it to happen. And that's the danger: when you really, really want something to happen in the world, that's when your critical thinking skills go away.

Keith: Exactly, and you want to believe it.

Jason: Then you'll believe it, and then you'll act on it. But the thing is, I call it an overreaction because we've seen this story before; this movie has played many times over. There's a novel new technology, there's this societal anxiety, and it never manifests as badly as the fear suggests. Photoshop, I think, is a great example, because frankly a great Photoshop technician could have made that Pope picture before there was any GPT.

Keith: But how long would that have taken a skilled technician to do in Photoshop?

Jason: It would have taken at least twenty minutes or more, and this thing can do it in...

Keith: No, it was still good; it would have taken longer than twenty minutes. I'm trying to figure out how long. I would never be able to do it just with Photoshop, and that's why I like this idea: it democratizes it. I really want, for my poster or for whatever, a picture of a pope wearing a cool jacket, and just being able to say that verbally and prompt it into an application gives me the power to do things. Now, again, I'm not doing it for an evil purpose; I just want a really cool image to express an idea as a journalist or content creator. I don't think I would ever veer into the negative side of it.

Jason: Right, and generally most people won't.

Keith: But rather than spending a week learning how to do it in Photoshop and all that, this is what excites me about innovative technologies. Then again, I'm a good person, so I understand there are people out there who aren't so good as well. All right, have you had a chance to experience GPT-4 since it came out? I think it was right on the cusp the last time we talked.

Jason: Yeah, I've tinkered with it a bit, and I've read a good bit about it. I think it's interesting, because they're including images, which makes total sense, in these large language models. So what we're seeing, as we'd describe it technically, is multimodal large language models being built in various ways. I think that's the big advancement. Of course it's better trained, it's able to grab more understanding and insight about the world, so it outperforms GPT-3.5 in the same realm of things 3.5 does. But the fascinating thing is that it's now able to capture high-level understanding about the images of our world. Beyond just reading the text of the internet, it's also understanding and interpreting the images you'll find on the internet. It's a model that can both listen to what people are saying and see what people are showing.

But there's a distinction between the GPT-4s of the world and the ChatGPTs of the world; they're two different kinds of thing. GPT-4 and GPT-3.5 are models trained to understand everything they can about the world, and then they can be applied to many different tasks. You might want to wrap that model in a product that solves a particular problem, so that the intelligence about the world is applied to your solution to that problem. Something like ChatGPT is essentially a wrapper around something like GPT-4, or a GPT-5, where the model is conditioned to do one thing well; in the case of ChatGPT, that's question-and-answer interaction. So GPT-4 whole cloth wouldn't produce a chatbot as compelling as ChatGPT out of the box, but it has an incredible amount of knowledge in it, and if the same ChatGPT wrapper were applied to it, you'd get a ChatGPT 2.

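To make the "wrapper" idea above concrete, here is a minimal sketch of how a product might condition a general-purpose base model into a narrow question-and-answer assistant. It uses the OpenAI Python client's ChatCompletion call from that era; the system prompt, model name, and settings are illustrative assumptions, not how ChatGPT itself is actually configured.

```python
# Hypothetical sketch: "wrapping" a general-purpose base model so it behaves
# like a single-purpose Q&A assistant. The system prompt and parameters are
# illustrative assumptions, not OpenAI's actual ChatGPT configuration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer the user's question concisely, "
    "admit when you don't know, and refuse harmful requests."
)

def answer(question: str) -> str:
    """Send one user question to the base model, conditioned by the system prompt."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",   # the general-purpose base model being "wrapped"
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep the Q&A behavior focused and repeatable
    )
    return resp["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(answer("In one sentence, what is a large language model?"))
```

The design point is the one Jason makes: the wrapper stays the same while the base model underneath can be swapped for a stronger one, which is what he means by applying the ChatGPT wrapper to GPT-4 and getting a "ChatGPT 2."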
Keith: So you're still excited about the technology and where it's going?

Jason: Absolutely. I think it's a phenomenal direction. The only problem that exists now is that there's a bit of a monopoly on the technologies realizing these advancements, concentrated in OpenAI. Even when you look at Google's, and even Microsoft's, own versions of a large language model...

Keith: They're still using those, yeah.

Jason: Yeah, but it's not ChatGPT as you get it from OpenAI. Those are attempts to catch up and to participate, because they see a market they're losing. It's not democratized, where many different institutions, whether universities or companies, are all innovating at the same time. I want to see a world where more folks are innovating on these technologies and we're not just waiting to see the next thing from OpenAI. But yes, I'm absolutely excited about it.

Keith: Speaking of the competitors, we saw that Google came out with Bard. Have you tried that at all yet?

Jason: I've seen its output. I haven't played with it directly and tinkered with it myself, but I've seen A/B comparisons of what it produces. It's clear that the methodology, the way ChatGPT was trained, was not applied to Bard. Bard is much more the old-school, GPT-3 style of training, which doesn't condition the model for a phenomenal grasp of conversational AI and question answering. That's practically why these other systems aren't as good, in my opinion. The way ChatGPT was trained, they used a lot of humans to write what the model should say, to improve on GPT-3. So you have your model that learned from the internet, but then you have a human in the loop to coach the model and train it. They took that coaching and turned it into a model, and then you had two models training each other, using a reinforcement learning process called PPO. So they had a trainer model teach a student model, a GPT-3, a bit like teaching a child; I'm simplifying. That's how it got good. With the systems we're seeing, like Bard and the Bing one, it's very clear from their outputs that that kind of reinforcement learning with humans in the loop was not applied, and that's why they don't seem as smart; they didn't get that extraction.

Keith: It did feel like that. I got an invite, and I was amazed at how quickly I was allowed in, because usually it takes a few weeks for me to get invited into anything. I tried it, and of course Google puts so many levels of warnings around the output; they are trying to protect every angle. So the results you get are kind of dull and boring. I equate ChatGPT, and some of the stuff that was in Bing, to your crazy uncle at Thanksgiving, where Bard is more like your dad: safe and reliable, doesn't want to make any waves, whereas the crazy uncle over there is going to do whatever he wants. And that's what Microsoft was pulling back on too; they said, we're going to limit this so we don't get a lot of the hallucinations and really freak people out about where this could go. So those are my thoughts on that. Jason, again, thank you for joining us on the show today.

Jason: Of course.

Keith: I love talking about this stuff with you, so we'll definitely have you back.

Jason: All right, awesome. Thanks, man.

Keith: That's all the time we've got for today's episode. Don't forget to like the video, subscribe to our channel, and add any comments you have below, and join us every week for new episodes of Today in Tech. I'm Keith Shaw. Thanks for watching.
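For readers curious about the human-feedback training loop Jason describes, here is a purely illustrative toy sketch: a stub "reward model" stands in for one trained on human preference rankings, and a simple policy-gradient update stands in for PPO (which adds ratio clipping and a KL penalty against the original model). The candidate responses, reward rule, and learning rate are all invented for illustration; this is the basic idea, not the actual ChatGPT training procedure.

```python
# Toy sketch of reinforcement learning from human feedback (RLHF).
# A stub reward model stands in for one trained on human preference rankings,
# and a REINFORCE-style update stands in for PPO. Purely illustrative.
import math
import random

CANDIDATES = [
    "Here is a clear, step-by-step answer to your question.",
    "I don't feel like answering that.",
    "Buy my product instead!",
]

def reward_model(response: str) -> float:
    # Stand-in for a learned reward model: prefers helpful-sounding answers.
    return 1.0 if "step-by-step" in response else -1.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps: int = 500, lr: float = 0.1):
    logits = [0.0] * len(CANDIDATES)  # the "policy" over candidate responses
    for _ in range(steps):
        probs = softmax(logits)
        i = random.choices(range(len(CANDIDATES)), weights=probs)[0]
        r = reward_model(CANDIDATES[i])
        # Policy-gradient step: raise the log-probability of the sampled
        # response in proportion to its reward. (PPO adds ratio clipping and
        # a KL penalty against the original model on top of this idea.)
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return softmax(logits)

if __name__ == "__main__":
    for text, p in zip(CANDIDATES, train()):
        print(f"{p:.2f}  {text}")
```

Running it shows the probability mass shifting toward the candidate the reward model prefers, which is the mechanism, writ small, behind conditioning a base model into a helpful conversational assistant.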
