Is the world moving closer to an AI singularity? | Ep. 35

Keith: New developments in artificial intelligence continue to generate headlines, along with deepfake images and videos that are causing some experts to call for a pause in the development of this technology. Is the world heading down a path of achieving general intelligence sooner than expected? And if we are, is that a good or a bad thing? We'll discuss the possibility of the AI singularity on today's episode of Today in Tech.

Hi everyone, welcome to Today in Tech, I'm Keith Shaw. Joining me to discuss the latest developments in AI are Jyotika Singh, director of data science at Placemakr; Chris Tanner, MIT lecturer and head of R&D at Kensho; and Nick Mattei, assistant professor of computer science at Tulane University. Hi everybody, welcome to the show.

Let's jump right into the discussion. There have been a lot of developments with ChatGPT, generative AI, and these image-creation tools, getting lots and lots of headlines. And as I discuss these on the show, we also get a sense that we're moving toward this concept of the AI singularity: the idea that eventually artificial intelligence will reach or surpass the level of human intelligence. Just to start us off, why don't I get a general definition from you of what you consider an AI singularity. Jyotika, why don't you start us off?

Jyotika: Yeah, for sure. When I think of the singularity, it's this point in the future where AI surpasses human intelligence. Now, one way to look at it is that it surpasses human intelligence in a lot of ways that can really benefit us in general. But the other way is that it develops its own conscience, in the sense that it's now learning on its own, it does not need human input, but it's also going out of control, and humans are unable to contain that growth.

Keith: All right. And Chris, is that the definition you use as well? Do you tie it to the idea of consciousness, or how do we know when it has surpassed human intelligence?

Chris: Yeah, I would agree with that. I should also say that, in general, it's pretty rare for me to hear folks talking about the singularity; AGI is the more common term. In some ways you can view AGI as a kind of precursor, something you get before the singularity. But yeah, both of those things would fit my definition of the singularity. You can almost view it as two axes: how controllable or uncontrollable it is, and then just how intelligent the thing is.

Keith: All right. Nick, anything else to add on that?

Nick: I didn't have a great background on the definition of the singularity, so I went and looked it up and did a bunch of reading on the long flight I had yesterday. I didn't realize that the term, and this idea of existential risk, grew out of some things that John von Neumann, the very famous computer scientist, talked about in the 1950s. It grows out of this idea that technology is out of control. It was coming out of the arms race in the post-war period, when we were worried about nuclear winter and things like that. So a lot of these concerns come from the same place: worries about technology run amok.
Nick: And so that's why I personally don't like to use the word consciousness. But in terms of the fear, or the worry, about the singularity, it's this idea that technology is advancing ever more rapidly, that at some point it's going to be beyond our ability to understand it, and that it might pose this very Cold War-era existential-risk type of threat: it might end all of humanity.

Keith: Right. And I think Hollywood has taken over the singularity idea as well. There's Terminator 2 and the whole Skynet thing: there was one point in time when the AI achieved sentience, and everything was downhill from there. I've also heard a lot from Ray Kurzweil; he's been making predictions, and I think the latest one I saw from him was that by 2030, which is only seven years away, he believes we'll achieve this singularity. So it's sort of gotten confused and melded with this idea of artificial general intelligence. Chris, you brought up that it feels like there's a difference.

Chris: Yeah. I'm basically saying the same thing Nick was implicitly saying: the singularity is not really a term my circle of folks talks about. AGI is the thing most people talk about, or are excited about, or maybe concerned about.

Keith: So how do you then define artificial general intelligence versus what we're seeing right now?

Chris: Good question. The idea with AGI is this: we know computers have always been better than humans at very specific things, like calculating numbers and doing arithmetic. They're computing machines; they're calculators. But now we're at a place where they can rival human performance on a lot of different things. The concept of AGI, and I don't think there's a strict definition, is that across some large group of tasks, humans struggle to beat computers. That would be AGI. And in some ways we're already there.

Jyotika: I was just going to say, a few examples would be language translation; computers are able to do that so much better. Medical diagnosis. And then you've also had these systems defeating chess players and Go players and really outperforming humans in those ways. Those would be some examples to press the point.

Keith: Does it seem like, in order to get to a general intelligence point, we would need one system or app that can do multiple things? I always thought the joke was that the computer that beats me at chess, for example, I could still beat it at Go Fish, or I could still do other things better than that specific computer. Do you think we need a system that can multitask, that can do all of these things, and that's how we would know? Or is that just too simplistic?

Nick: To jump in a little bit here: that was one of the goals of some of the programs you saw for Go that DeepMind was working on. But your second example was still a game, right?
Nick: And I think Chris would speak to this: the idea of AGI is that it's good across lots of different tasks that aren't just games. It's effective at driving, and route planning, and translating, and playing chess. Because, as Jyotika was saying, computers are better than us in a lot of ways. Computers find routes faster than us; computers route our packages better. We used to sit there and actually plan where packages would go and how ships would sail; we don't do any of that anymore. It's all done by scheduling algorithms that process data much faster than humans do, and that can process much larger amounts of data. So the question, getting back to our singularity topic, is: are computers better than us at very specific tasks? They are, they have been, and they're going to continue to be. The idea of AGI, and again, as Chris said, it's not a super well-defined term, is that you've got one system that does all of these things: it recognizes people on Facebook, and it can also drive your car, and it can also cook you dinner. And I don't personally think we're anywhere near that, despite the fact that these language models might try to convince us they are. But that's the AGI concern, I guess.

Keith: I've also seen a bunch of things about this recently. There was an article last week that said someone thought ChatGPT and these generative AI chatbots have passed the Turing test, and then I saw another one today that said no, it hasn't yet, but here's when it will: he's thinking GPT-5, which is expected to be released later this year, might be able to pass the Turing test. I have a t-shirt, I forgot to bring it in, of a robot looking over a human's shoulder, and it says "I cheated on the Turing test." I think it's a funny shirt, but whenever I wear it, no one understands it. They're like, "What is this? Are you the robot or are you the human?" Which is kind of funny, because now no one knows. And if you were to write that t-shirt joke out in human language and give it to ChatGPT, would it get the joke? But do you think we're getting closer on that? For those who might not understand what the Turing test is, Jyotika, can you explain it?

Jyotika: Yeah. It's basically a test where you give a situation to a machine, and the judges don't know whether the responses are coming from a machine or a human being. The machine is supposed to convince the judges, convince the panel, that it is a particular human being. So it's really adapting to the situation and trying to pose a convincing argument, such that it becomes difficult for people to tell whether the responses are coming from an actual human being or not.

Keith: Right. And so my director here actually said that Ex Machina was a great movie on AI, about the Turing test. Would you guys agree with that?

Nick: I haven't seen it, unfortunately. It's on my list.

Chris: I haven't seen it either.

Keith: Okay. It definitely speaks to the topic here, because I think the robot goes crazy and kills a bunch of people at the end.

Nick: Oh, good, spoiler alert.

Keith: Oh geez, I'm kidding. It's probably, what, 30 years old or something? 20 years old?
Nick: I think it was more recent than that.

Keith: The dystopian robot movie; something bad happens. Sorry, Chris. All right, let's go down this path a little bit, because there was this open letter that came out last week from Elon Musk and a bunch of other AI experts who signed it, saying we should hold off, maybe putting a six-month moratorium on the research. First of all, have any of you been approached to sign this, or have you signed it? I just wanted to hear your thoughts. Nick, why don't you start?

Nick: I guess I could jump in. I did not sign it; I have not signed it. I thought it was interesting that they're getting a lot of credit for this letter. If you read it, it's kind of interesting: it cites a lot of the work of Timnit Gebru, a researcher who was at Google for a long time, who was fired for publishing a paper pointing out some of these issues three or four years ago. And so it's interesting that it's specifically about large language models, about, say, GPT-2. These ideas have been bumping around the research community for a while. The letter itself, not to be the professor guy around here, is not specific enough for me. It talks about power; it doesn't define power. It talks about research; it doesn't define what that means. And these bans on research, I don't know, it seems tough to put this genie back in the bottle. I was talking to a colleague here: there are all of these new mix-and-match versions of these large language models already out there, and there are more every day. So slowing this down might be a good idea, but the letter itself, without any sort of details, leaves a little to be desired, in my opinion. That's why I held off on signing it.

Keith: Jyotika, do you have an opinion on this?

Jyotika: I haven't signed it either. I have read it, and I think some of the points in the letter make sense. The concern is the spread of misinformation through AI-generated content, deepfakes, potential misuse of AI for other malicious purposes, and just the concern that the technology is developing so fast that people are not able to fully understand it yet. What I've seen is that those in favor particularly talk about having some shared safety protocols for AI systems in general that can be implemented worldwide, and about putting policies in place that ensure these AI systems have a positive impact and manageable risks, and that give society time to adapt to this AI-driven world. I'm not saying those are bad things; I think we do need some AI governance here, and the spread of misinformation is a big concern. I think it was 2020 when the number of deepfakes out there was reported to be more than 85,000, but today, after all the development of the technology since then, it's potentially in the millions, if not more.
Jyotika: So there need to be some regulations, but I don't know if a six-month moratorium really does that.

Keith: Yeah. I think the best part of that letter was that it raised awareness that there are people concerned about this. If it had just been a bunch of people who were known within the AI space but not known more broadly, it might not have made as big a splash. The other thing that concerned me is that it felt like there was a lot of doom and gloom around it. It's like, "If we don't do this, the world will end tomorrow." Okay, well, that's not really going to get much traction. Chris, were you approached to sign it, or did you look at it?

Chris: Yeah, I definitely looked at it. I was not approached, and I have not signed it, although I very much support the sentiments. Not so much the doomsday part, but the reality that these are very important aspects we need to reflect on within society, and we should consider reallocating resources. I'm not a policymaker; I have no idea about the feasibility of any of these things. My biggest concern, or question mark, is, to Nick's point, how would this play out, and what are the details? I think it's just very unrealistic, unfortunately. There's no easy solution for any of this, and the letter is very forward-thinking, as it should be. Even some of the experts within this space have criticized it, because the reality is that even in our current situation, even six months ago, we should have been putting more resources into better explainability, into helping mitigate bias, and all these things. People have been saying this for years. All of this is a continuum, and we could have and should have been putting more emphasis on it a while back.

Keith: Well, let's talk about explainability, and the bias part of it. Are those the two biggest concerns a lot of people still have about this technology, or is it something else? Everybody, just jump in.

Chris: I mean, kind of, in addition to explainability and bias. Let's take deepfakes, for example. Of course there are tons of bots on any social media platform, but now that the technology is getting so realistic, what's to prevent just incredibly massive amounts of bots going everywhere, and it being really hard for any user to discern what is real or not? That's just one tiny example, but you have a point: there are multiple issues that people are concerned with, rightfully so.

Keith: Okay, well, what about the explainability part? Every time someone talks to me and says explainability is important in AI, and I say, "Well, can you explain that more?" they say no. We don't know why some of these things are producing the results they are, and I just start scratching my head, going: well, if you folks don't understand it, now you can understand why we would be concerned from a layman's perspective. Jyotika, do you have any thoughts on explainability? How do we explain explainability better?
Jyotika: It's a very challenging one, right. It's true that the people who build the models are not going to be able to tell you exactly what the model will output for a particular input. We just don't know, because the models are developed in a way where they learn from patterns they automatically detect in the data. So explainability is difficult. And I think what makes people uncomfortable is when they don't understand how something works at all, and then there are all these concerns about AI getting smarter than humans and taking over the world. I think it's just important for people to understand some of the process that goes into building a technology like this. It may not be in detail, what large language models are and what they do; it may just be an understanding of what data is used to train it and what the overall principle is. Educating people on that, increasing visibility, would help a lot, I think.

Keith: Nick, do you have anything to add on that one? Because I've got another question for you after.

Nick: I think explainability is a good one. Explainability is like the word fairness: it's one of these concepts that comes to us from philosophy, from moral philosophy, that has lots of different meanings, and I think that's why it's tough to pin down. I have a talk I give sometimes in class that's like, here are all the definitions of ethics in AI, or fairness in AI, that people want to talk about, and it's like 37 things. A lot of the conversations I've been having around explainability, especially for these large language models, are really about what someone from software engineering might call traceability: why is it saying this, and where did it get this thing it's saying to me? Can I take an output of the system, a sentence like "the moon is made of cheese," and figure out where in all the training data the sentence is that it's using to justify that? That's a lot of times what most folks mean by explanations within these large language models. Most people I interact with have some intuitive understanding that these things are basically reading the internet and then spitting things back out at us, and they want to know, by way of explanation: where did you read this? Where did this come from? What is your source material for saying this? The reality is, it's just a giant text prediction machine, like the one on your phone. There are probabilities on it, and that's where the output comes from. But people want this explanation of "where did you get that?" That's really most of what people are concerned about: why are you saying this, and where did it come from?
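Nick's "giant text prediction machine" description can be made concrete with a toy sketch. This is a minimal illustration of next-token sampling only; the tiny vocabulary and probability table below are invented for demonstration, standing in for the billions of learned parameters a real model would use.

```python
import random

# Toy stand-in for a language model: for a given context, a probability
# distribution over possible next tokens. In a real LLM these numbers come
# from trained parameters, not a hand-written lookup table.
NEXT_TOKEN_PROBS = {
    "the moon is made of": {"rock": 0.55, "cheese": 0.30, "dust": 0.15},
}

def sample_next_token(context: str) -> str:
    """Pick the next token by sampling from the model's distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(5):
        print("the moon is made of", sample_next_token("the moon is made of"))
```

The point of the sketch is that no "source" is attached to any output: "cheese" can be sampled simply because it is probable in context, which is why the traceability Nick describes is so hard to provide.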
Keith: When I get an answer from ChatGPT to a question like "What is this technology about?", or when I type in something like "Hey, explain zero-trust security to me," it scans all of the articles out there that people have already written, assigns weights to some of those answers, and that's what I get back. And I get back sources as well, links and things like that. But when I ask ChatGPT...

Nick: Well, you hope so.

Keith: So it's not doing what I think it's doing? Okay. Because my next example would be: I tell it to come up with ten fake names for my Dungeons and Dragons character. Does that mean it's going to those random name generators that are already out there on the internet and running those, or is it just throwing out random names? I don't know.

Nick: In general, it's designed to put together strings of words that are syntactically reasonable. What I mean by that: one of the famous examples, which ChatGPT can handle now, is the question "Did Aristotle own an iPhone?" Aristotle is clearly a historical figure. The sentence "Aristotle did not own an iPhone" is probably not present on the internet, because it's not a sentence anybody would have bothered to write down. But you and everybody here knows that cell phones were clearly invented after this guy was alive. So semantically it makes no sense. But it's a likely sentence, because everybody owns a cell phone; if you think of it as "person owned cell phone," it makes sense to put those words together. It's syntactically very common, but it's not meaningful. ChatGPT can actually handle this question now; this was my go-to example, and if you ask it now, it'll work. But if you find the older version, GPT-2 won't answer that question correctly.

Keith: Okay.

Nick: And that brings out a thing: these models are changing. I had this key example that I used for two years, and now I can't use it anymore.

Keith: Now you need a new example.
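Nick's "syntactically common but not meaningful" distinction can be probed directly by asking a small model how probable it finds a sentence. A rough sketch, assuming the Hugging Face transformers library and the publicly available GPT-2 checkpoint; the example sentences are illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence: str) -> float:
    """Average per-token log-likelihood GPT-2 assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the mean
        # cross-entropy loss, i.e. the negative average log-likelihood.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

# A fluent pattern can score well even when it is semantically false:
# the model measures plausibility, not truth.
for s in ["Aristotle owned an iPhone.", "Aristotle iPhone owned an."]:
    print(f"{avg_log_likelihood(s):7.3f}  {s}")
```

The fluent but false sentence typically scores far better than the scrambled one, which is the sense in which the model tracks how words go together rather than whether a claim is true.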
Jyotika: I was just going to say: understanding these little high-level principles of how the models work will also let people know that if it gives you a source, that source may not be right; the information may not be accurate. So know what you want to use it for and what you don't want to use it for. Just that knowledge will help people use the tool better. That goes a little bit into explainability too, or I would say understanding: people being able to understand how to use a tool, what to use it for, what to expect and what not to expect.

Keith: The other question I wanted to ask you all: do you think AI in general, and maybe some of these companies, need to do a better job of explaining the benefits of this? We've attached this idea of the singularity to a lot of negatives: Terminators taking over the world, killer robots, et cetera. But I've never really seen any powerful benefit statements: "This is why we're doing it. We're not doing it to get to the point where we have killer robots; we're going to keep developing it so that we get A, B, and C." I've asked individuals, and they've given me some great answers, but do you think there needs to be a statement of "this is all good, and this is what we're going to do; this will help us cure cancer, or this will help us get to Mars"? Do you agree with that, or is it just me being paranoid about why we need a better statement? That was a really open-ended question, I guess. Chris, why don't you jump on this one.

Chris: Does AI need better PR? Maybe that's the question. I think it could. It's tough to answer, because basically, the more that society is informed, period, whether about the negatives or the pros of the technology, the better. The more informed we are, the better. But I don't know if it's the researchers' or scientists' responsibility to play a large role in that branding, that PR. In fact, one could argue that intrinsically we already do, because of what we're working on. Let's take machine translation: we're trying to say, yes, we want to get better at providing this good to the world, translating from one language to another. Or somebody researching a slightly different area within machine learning, same exact concept: that's their focus, and it should be evident from the work they're doing. But once these technologies get really good, it's just up to the Wild West of what the reception is and what gets circulated on the internet. We kind of can't control that, and I don't know whose responsibility it would be. But to your point, it would be good to have reliable, trustworthy resources where people could go to see the wide spectrum of the benefits and the cons of this technology.

Jyotika: Yeah, and I think of it almost like a resource: you can use it really well, and you can also use it negatively, so it's really about understanding that. It has a lot of potential for doing real good for society. For example, it's already shown evidence of helping medical diagnosis really significantly, and that's a big deal. And then even other technologies, like robotic vacuum cleaners: they save me time so I can do other things. So there's a lot of good that has already happened, and a lot of good that can happen in the future. A simple example: ChatGPT is just so good at grammar and structuring sentences, and it does surpass a lot of people's skill at writing great sentences. That's a great application: you can use it to communicate better and create more understandable documents. But on the other side, I was just reading an article yesterday about how AI has also helped enhance some of the negatives, like cybercrime. Using ML and AI, you're able to guess people's passwords much better, and you're able to write more realistic spam emails. So it's a technology that's amazing and can have a lot of great uses, but at the same time people can use it for malicious purposes, and that's where some of that conversation comes from: why is it scary? We need to really understand the negatives it could bring with it and what we need to do to address them. Deepfakes, for example, are such a big problem; should these AI technologies be watermarking every product of AI in a way that can help us with that problem? There are a lot of examples like that.
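Jyotika's watermarking suggestion can be illustrated with the simplest possible scheme: hiding an identifying bit string in the least significant bits of an image's pixels. This is a toy sketch for intuition only, with a made-up "AI-GEN" tag; real proposals for watermarking AI-generated images or text are statistical and far more robust than this.

```python
import numpy as np

TAG = "AI-GEN"  # hypothetical marker a generator might embed

def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide `tag` in the least significant bits of a uint8 image array."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Recover a `length`-character tag from the image's LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed_watermark(image)
    print(read_watermark(marked))  # -> AI-GEN
```

A scheme this naive is trivially removable (re-encoding the image strips it), which is why the real debate is about watermarks that survive cropping, compression, and paraphrasing.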
Keith: Yeah, and again, we've dealt with this before. In the early days of the internet, we all thought, hey, the internet is great: we're going to be able to send email to everybody, we're going to connect with all these people, all these great things. And then a lot of bad guys discovered there are a lot of bad things you can do on the internet, stealing money and passwords and all this other stuff. It feels like with any technology development we've always seen a list of good things that happen, but then we've also recognized that bad things happen too. But we didn't try to ban the internet when it came out; there wasn't an open letter. So this feels a little different. But we've also got history on our side: people stealing money is bad for the people who lose their money, but we didn't shut everything down because of that. I don't know, maybe I'm just rambling.

Nick: I don't know, some people did try to ban the internet. There are still plenty of people who don't use Facebook. We're still talking about regulations for some of these communication technologies for teens; I think there are a bunch of laws in the works in California, maybe Utah.

Keith: You've got all the TikTok stuff going on too.

Nick: Right, exactly. So I think your larger point, Keith, is that these technologies come out, and there's always a conversation about how we integrate them into society, who the benefits accrue to, and where the costs land. And that's a conversation that doesn't just happen among technologists. It also needs to include the communities that are affected, it needs to include people who have no idea what these technologies are, and it needs to include government and policymakers. I think these conversations do happen; they just happen very slowly. It's a lot easier for a Future of Life think tank to come out and put this letter out there. It's much harder to really write rules around how these things get done, how the regulation happens. I've been on these National Institute of Standards and Technology panels for the last three years, trying to write these large policy documents about how we govern different aspects of AI decision-making, and it's slow and it's very unsexy. Coming out and saying "let's ban it" is quicker. But it is a big conversation, and the costs and benefits of any technology are an ongoing one. It's not one where we say, "Okay, we're done with that." The rules are always changing, the people being harmed or benefited change, and we need to revisit that conversation.
Keith: Right. So, Chris, I wanted to bring this up with you, because when we talked ahead of time there was a discussion of where we are: if this were a curve with that hockey-stick effect, in terms of where AI is at the moment, are we still at the baseline of the hockey stick, or are we now on that upward trajectory? If you had to make a guess, for April 2023, where are we on the graph? Or is it even too hard to say while we're living through it?

Chris: Of course it's essentially impossible to predict these things, and things have been accelerating at an exponential rate. In some ways they always are, you're always on that continuum, but especially the last 10 years have been absolutely nuts. If I had to guess, as a fun thought exercise, I'd say we're at the early-to-mid point of the crazy high derivative, before any plateau, because it's just so hard to even guess what this is all going to enable. And, not to backtrack, but on the previous question, I love that analogy of comparing it to the internet. To Nick's point, even things like the internet took a long time: the first internet connection was in 1969, and then it really started to enter a lot of people's households in the mid-'90s. But I think the internet received slower pushback than the pushback we're starting to see with, for example, ChatGPT. I think that's because it was hard to anticipate and imagine everything the internet would afford, whereas ChatGPT was almost a step function to the public. The technology had been building, and all of us researchers knew the capabilities of NLP, but all of a sudden, bam, it was very in-our-face: very easy for anybody with a computer to just play with ChatGPT and see the benefits of it, and thus also the pros and the negatives.

Keith: Does anyone else want to jump in? Do you agree with Chris about where we're at right now, or are we still really in the early days, and it's going to be even crazier next year? I'm sure it will be crazier next year; every day I see a new story and go, "Oh my gosh, I can't believe it can do that now."

Jyotika: I agree with Chris. Large language models, for example, have existed for a few years; it's just that this is the first time something like ChatGPT has been opened up to the public, with so many people using it, and that's why it's been getting so much attention. But I agree with Chris: I think we are yet to peak, for sure.

Keith: All right, Nick?

Nick: You asked me about this question before, and I still haven't really decided; I don't know. It's the idea of: do you think the curve keeps going up, or do you think there will eventually be diminishing returns? Is it going to slow down and crest off? Are we going to accelerate, or are we going to slow down? I honestly have no idea.

Keith: It feels like we're accelerating at the moment, given that this first started in November 2022, and since then it's just been non-stop news and non-stop discoveries.
Nick: Right, but you could say the same thing about something like plane flight, or the speed of cars. It was going faster and faster and faster, we got on rockets, and then all of a sudden we're not really going that much faster as humans anymore. So, to Chris's point, I think it's really tough to know: are we going to keep going up, or are we going to plateau off? I think the only constant is that it's going to keep changing, and our perspective tomorrow is probably going to be different from our perspective today.

Keith: All right, so it's crazy and liberating at the same time. That's what makes this topic so fun to talk about.

Chris: Exactly. I think one way of trying to ground it would be to think about the big remaining things we've barely been able to make any progress on. Kind of to what you were saying earlier, Nick, I forget on which question: think of multimodal stuff. It's so hard to mix video with language. We've had incredible success over the last three years, but there's still so much room for improvement, and I think we're going to see some really impressive gains in the next two to three years.

Keith: Well, that made me think: do we need an AI moonshot sort of approach? And I don't want to bring the government into it, but back in the '60s it was, "Hey, we're going to go to the moon." They had a goal, they got there, they landed on the moon, and it galvanized everybody. Do we need something similar in AI? Or is that not the right idea, because we don't know exactly what the goal would be? Chris, sorry, what did you say?

Chris: I didn't mean to dominate here, but I think we already have millions of practitioners who are doing moonshot approaches, which is good; I don't mean this disparagingly. There are so many folks, even in high school, without formal classes in this. We've democratized it and lowered the barrier to entry, so people are trying outlandish things, and stuff is happening; progress is happening.

Keith: All right, so I want to... go ahead, and then I've got another question.

Nick: I wanted to tie it back to the moonshot idea. I think the Turing test, as a concept, which we talked about before, was a moonshot. You've got to remember, at the time, a computer was this thing you ran with a crank: you punched some cards and you turned a crank. The idea that this thing was ever going to be able to convince someone that it was a person, that it was intelligent, which is really what the Turing test is all about, was a moonshot. The test says: okay, if you don't know what intelligence is, and I don't know what intelligence is, but I can convince you I'm an intelligent thing, a.k.a. another person, then I must be doing some intelligent stuff. So the Turing test on its own could be understood as a moonshot, and that's why it's been an animating concept for the field for so long.
Nick: In order to do all the stuff you need to pass the Turing test, you have to get over all these little challenges. That's why it's captured so much imagination: like you said, it is a moonshot. And I think we maybe need to refine it a little bit now, because we're at this point where the computer can kind of convince us it's a person, sort of. So what's the next thing? You'll see some of these: the protein-folding work, some of these AI-for-science ideas. There are a few of these in the research community, but they're not as big and grabby as going to the moon, or as the Turing test originally was. So maybe it's time to rethink what a new cohesive one would be. That's a good question.

Keith: I'm still not convinced that I'm a human, so I may be a robot at this point. There's still some debate, especially in my family.

Nick: Well, guess what: you passed the Turing test. I thought you were a human.

Keith: Some of my family might disagree with some of the decisions I make. Jyotika, I'm going to ask you a hypothetical question, and I'm going to put each of you on the spot. You are now in charge of AI. I made this decision because I'm emperor of the world. So: Jyotika, you are now in charge of AI development. What's the first thing you would do? Do you do anything? If you could steer the direction of where AI goes from this point, what would you do?

Jyotika: Well, that's a very difficult question. Do I have the power...

Keith: Assume you have the power to do whatever you want. And I'm going to ask Nick and Chris the same question, so I'm not putting all the pressure on you.

Jyotika: I think I'm going to first spend some time thinking about everything, every aspect I haven't already, because that is really a lot of power to have. AI development has the potential to do a lot of good, but as mentioned earlier, there is a downside, and there needs to be some regulation. So it would perhaps be a lot of conversations about how we can develop AI alongside these policies, AI governance, and regulation, so there's not that big gap where we've already made so many advancements and now there are so many problems, but the policies and regulation take time to catch up. How can we do that hand in hand, more together, more in parallel? That's one of the concerns I have today, so that might be the direction I'd be thinking in.

Keith: Okay. All right, Nick, I've now placed you in charge of AI; you are the man. What are the first couple of things you do? Do you go to Congress and say we need more policies? It feels like regulation is going to be way behind on this.

Nick: There's a concept called the policy vacuum, right? Technology moves so fast that we have this vacuum where no policies exist. That seems to be what we do in the US.
Nick: It's why we still don't have a coherent crypto policy, and things like that. We sort of let things happen, and then we figure it out later and clean up. It's a very American way to look at things, and I kind of love it; some people think it's terrible. I just got back from an overseas trip, so I have a very different perspective than I normally would. But if I'm in charge, if I'm king of the world...

Keith: You're only king of AI, Nick. I'm the emperor of the world, sorry.

Nick: Sorry. I'm at a university now, you know, so we think we're kings of everything. I really like Jyotika's point, and this is something we try to do here: really do more community engagement with this development work. A lot of AI development, look at ChatGPT: a figure came out saying it costs something like four million dollars to train one iteration of ChatGPT. So it's inaccessible to so many people. And Chris picked up on this earlier too: really democratizing these technologies. Democratizing, one, in terms of making them available, through online open-source publications and things like that. But two, making sure that the communities that are going to be impacted, the communities that maybe don't normally have access to these things, get access, and that includes the knowledge of how to use them and the resources to use them. So really trying to push into those spaces is a good way to go. All too often these AI developments, by necessity, because they're big technology projects, are housed in large tech companies, or they're things that happen at universities in closed labs, where we all think we're king of the world anyway. Getting out of that, and making sure that's a first-and-foremost policy and design priority, would be really good.

Keith: Okay, Chris, you're now head of AI, or king of AI.

Chris: I haven't thought about this before, but unlike the last two, I've had more time to think about it than anybody else here. The way I've always framed AI, and machine learning in general, is that there are a few pillars, and this connects to Nick's point: some of those pillars are about democratizing things. Some of the pillars that have allowed a lot of great innovation, especially over the last five years, are things like the reproducibility of models. Historically, and I'll try to keep this succinct, somebody could come up with a really great model in their research, but then nobody could reproduce it: the source code wasn't available, or it was really messy. You would spend years of your life, I've spent years of my life, trying to reproduce somebody else's code that I needed to compare against. So that's one of the pillars. Hugging Face, for example, is one company that has really led this space and enabled a lot of progress within machine learning, because they make models available and they make data available.
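As a concrete instance of the reproducibility point Chris is making, here is a minimal sketch of pulling a published checkpoint from the Hugging Face Hub instead of re-implementing a paper. It assumes the transformers library and the public gpt2 model; any other hosted model ID would work the same way.

```python
from transformers import pipeline

# One line replaces what used to be weeks of reimplementation work:
# the architecture, trained weights, and tokenizer are all downloaded
# from the shared hub and wired together automatically.
generator = pipeline("text-generation", model="gpt2")

result = generator("The singularity is", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```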
Chris: I'm not trying to advertise for certain companies here, but I'm pointing out that there are certain things that have really allowed innovation to flourish. And I think we're getting to a point, this is also to Nick's point, where there are only a few companies training these very large language models, and of course we don't have to limit the conversation to large language models, but it's a compelling example. The danger is that it could shift things to essentially being a monopoly, the same way we saw with operating systems. But at least with Linux there's an open-source operating system, and you can tinker around and make things your own. We do have some players within the ML space making great open-source large language models; BLOOM is the best example I know of. That's one solution. And to really help make sure everybody gets a fair shot, and things are as democratized as possible, maybe you would also focus on the computational power: not just the organizations who make these things available, but making it possible for any group of dozens or hundreds of folks to get together. I don't know how that would play out; maybe more people competing in the GPU space, just making it easier, so it's not just "you spend four million dollars and then you get something." I don't know how to enact that. I would also like to put more resources, back to what we were talking about earlier, into explainability. I would love to somehow make that possible, maybe dump tons of funding into it, because it's not glamorous to work on explainability, and it's not glamorous to work on fairness and bias. Everybody just wants the flashiest, best-performing models, because that's fun. I don't know how to bring about this change, but seeing changes toward it would be good.

Keith: All right. So what I'm going to do is bring you all back at the end of the year, and we'll see if you're still king of AI. Or, sorry, that was clumsy on my end: king or queen of AI, head of AI, in charge of everything. But I also wanted to ask, as a final question, getting back to the original idea: are we getting closer to the AI singularity? I was going to ask whether we'll have achieved it by the end of the year, but based on the definitions we've discussed around general intelligence, I get the sense we're probably not going to be there by then. But do these developments get us closer? Yes or no. Are we moving toward that idea of either AGI or a singularity? Jyotika, why don't you start.

Jyotika: Yeah, I mean, every development is getting us closer to that. Yes, I think these developments help.

Keith: Okay. Nick?

Nick: Yeah, I would typically agree with Jyotika.
Nick: All of these developments are moving us in a direction where we have sort of more technology, and that's functionally what the singularity is postulating at the end of the day: that there's more technology, and things are going to get faster and faster. So yeah, these are all moving us in that direction.

Keith: And Chris?

Nick: Yeah, I agree. But if you focus on the control aspect, losing control, for the singularity, then that's to be determined. It relies on how much power we give the models, in terms of what they have influence over. But definitely in terms of AGI and the capabilities, we're making serious strides. We're definitely not there, but it's headed in that direction.

Keith: And I guess I probably should have asked a secondary question: we're getting there, but is it a good thing? Do we think we're on the right track at the moment, or do we feel we're going in the wrong direction without a couple of course corrections? That's the sense I get from the angst of the AI community at the moment. Jyotika, go ahead.

Jyotika: A tricky question, right? Of course there's stuff that's not really good happening right now, so I guess we have a bit of both. And I do think there need to be some countermeasures, or, again, AI governance, things we need to think about. I love Chris's point on the shareability of these models as well. And, not to digress too much, but every time a large language model is built, it's associated with a lot of carbon emissions, so there's an impact on the environment too. But in general: we don't stop medical research, for example. Any time a new drug is developed, it goes through certain testing and safety guidelines it has to follow before it's actually released to the public for open use. I'm not saying that's the exact path for AI, but we definitely need to think in the direction of how we can make it safer. So again, there's great stuff associated with ChatGPT, for example; I've heard of people using it for so many really good applications that save people time, and it's amazing. But we also do need to address the other side of things.

Keith: Okay. And Nick, are we on the right track, or are we heading over that cliff on the train?

Nick: I guess maybe I'm overly optimistic, I don't know. I don't buy this sort of x-risk, technology-is-going-to-take-over idea. I just don't see how we ever get there with better language models. Jyotika always talks about what AI is doing for us; I just got back from a trip, and there's all the routing of airplanes, and getting me to work on time because it knows all the bus schedules. I don't understand how my bus scheduler is going to take over the world. And that's where I come from: I don't see how these, as they're sometimes called, stochastic parrots are going to get better and take over the world just because they get better at imitating human language. It's a fun thing to think about, but I guess I'm the optimist-slash-naysayer in the room. I see all these benefits from technology getting smarter, and maybe I'm driving at the cliff much faster than I should be, but okay.

Keith: All right. And Chris, did you have any final thoughts on this? Are we headed in the right direction?
Chris: Yes, I definitely think we're heading in the right direction, but I'm cautious about our current climate and how we will use this technology. The technology itself is amazing, but go back to my point about social media: we've already seen that it can have huge adverse, negative impacts on society. People don't know what's real information and what's fake information, and this is just fueling that fire more and more. So, to Nick's point, it's probably not going to be anything doomsday for train transportation, for example. But in terms of social media, and whatever we hook these things up to, like our email, it's a bit alarming. We need to be cautious, I think, but the technology is great; I'm all for it.

Keith: All right. So we're going to reconnect at the end of the year, and we'll see how everyone feels about it at that point. Because, again, I could just send my ChatGPT representative.

Nick: Sure, just send the GPT and let it do the talking.

Keith: Sure, we'll just do that, because I'm sure by then we'll have voice-to-text translation, and I'll just be able to talk to other computers, and there'll be an avatar representing you, Nick.

Nick: Yeah, my personality downloaded onto the internet, and we'll be all set there.

Keith: That might be a little too ambitious. But hopefully you'll all come back on the show; can I at least get you to say that at this point?

Guests: Yeah, of course.

Keith: All right. And was that too ambitious, or not ambitious enough? That's the moonshot: we need to have a deepfake of Nick by next week. Well, I could probably have a deepfake of Nick by the end of the year. I think we have enough audio and video of him now that we could actually create one. Now I'm scaring myself, thinking I could do this; I'm just going to try to find tools on the internet that could do it for me. Like I said, once the AIs take over for podcast hosts, then I'm doomed. All right, I think we're good. Thank you guys so much for being on the show today, and we'll catch up in about six months.

Guests: Sounds good, thanks for having us.

Keith: All right, that's all the time we have for today's episode. Don't forget to like the video, subscribe to our channel, and add any comments you have below. Join us every week for new episodes of Today in Tech. I'm Keith Shaw, thanks for watching.
