Dr. Yasantha: AI vs AGI & Homo Sapiens’ Next Chapter | Endgame #143 (Luminaries)

People who have AI are like the new colonialists. It's not by geographic boundary, but they will be able to impose their will. And I think in the future people will fall in love with this AI, and many things will happen. Those are big questions that the next generation of philosophers needs to grapple with. What does it mean to be human in the presence of these simulated humans? Hi friends and fellows.

Welcome to this special series of conversations involving personalities coming from a number of campuses, including Stanford University. The purpose of the series is really to unleash thought-provoking ideas that I think will be of tremendous value to you. I want to thank you for your support so far, and welcome to this special series. GITA WIRJAWAN: Hi friends, today we're honored to have Yasantha Rajakarunanayake, who is a leading AI scientist in the Bay Area. Yasantha, it's a pleasure.

Thank you so much for coming on to our show. YASANTHA RAJAKARUNANAYAKE: Thank you, Gita. Thanks. - I want to, as usual, start out by asking you about your growing up. You were born in Sri Lanka, and you made your way here to the U.S. How did it all start? Right. Yeah, I was born in Sri Lanka in the early 60s. And from my childhood, I have been sort of a person who was attracted to mathematics, so I studied science and math in Sri Lanka, and at high school, I got the highest score, and I was very lucky to get a scholarship to go to Princeton University.

So it was a fully paid scholarship, and I remember thinking at that time, I think Princeton was fifteen thousand dollars, and when I was filling out this form, the financial aid form, I was so worried because our family's net worth was less than 10,000 dollars, and we were a pretty upper middle class family in Sri Lanka. We had our own home, we had a car, a VW Bug, but still, you know. So this was sort of a godsend, and I'm so lucky and appreciate it so much that some foundation gave me this opportunity to study.

Before you got to Princeton, who would have been more influential in your upbringing? Your mother or your father, in terms of making you the way you are. So I think I learned academics and staying within the box from my mother, and going outside the box and taking risks from my father, so they were both very influential. They were both very smart. My mother was a vice principal of a school, and my father was an accountant who was later in the Middle East. What did you study at Princeton? At Princeton, I started studying electrical engineering. But then people told me, "Yasantha, you should be doing physics, because in America the smartest people do physics."

In Sri Lanka, people want to become either doctors or engineers... Anyway, I did finish my electrical engineering and computer science at Princeton, and then I was able to switch to applied physics at Caltech for grad school. Your name was mentioned, quite surprisingly, by a famous person by the name of Jeff Bezos in one of his more famous interviews.

Explain that episode. Yeah. Jeff Bezos was a classmate; he was sort of a dormmate as well as an officemate. I mean, he was in the same department as me, so we did courses together. There were about 40 people, so I knew him quite well. I did Princeton in three years; we started Princeton together, and he took the four years.

So after some time, I actually became his TA. In my last year, I went to the department head and asked for a TA job, an undergraduate TA position, and they allowed me to do that because I had taken the course and done well. So I remember Jeff Bezos coming in and asking me for a couple of points on the homework. He knew me, and I think the episode he was talking about in 2017, which he essentially put in his autobiography, is that he did come and talk to me about a math problem, and I was able to solve this math problem.

Actually, I had completely forgotten about this episode until he brought it up 35 years later, right? So that's pretty amazing that I could have such an impact on him. When I came from Sri Lanka, I didn't have that many friends. I was not very well adapted. I was wearing a sarong in my dorm, and people were wondering, "Who is this guy?" I was about 19 years old, and I could speak English and all that well, but I hadn't completely grasped American culture and all that.

So Jeff came to my room, and then he asked me, "Yasantha, what about this math problem?" - And then he relates ... - After spending what? Three hours he couldn't solve it, and then? - Yeah, it's just because I think math is something that you have to have a mathematical intuition for. Sometimes you get a gut feel. So I think I had that for that particular problem. And then it was good.

It had something to do with factorizing a cosine into an infinite product, or something like that. - That was a watershed moment for him because he would have wanted to study theoretical physics, right? - Right. I think that's what he says. So I'm glad that he followed the path he did. - Well, things turned out okay for him. - Absolutely. - Anyway, you're spending a lot of time on AI. But before you got to spending a lot of time on AI, you had 131 patents, right? Which are some of the more meaningful ones that you've done? So I've had a wonderful career, because I think one of my success factors is that I was able to switch fields.
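As for the math problem itself: the identity Yasantha is most likely recalling (our guess from his description, not something he states) is Euler's infinite-product factorization of the cosine by its zeros at the odd multiples of $\pi/2$:

$$\cos x \;=\; \prod_{n=1}^{\infty}\left(1 - \frac{4x^2}{(2n-1)^2\pi^2}\right),$$

where each factor vanishes exactly at $x = \pm(2n-1)\pi/2$, the zeros of the cosine.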

Even as an undergraduate, I did electrical engineering and computer science; then I went into applied physics and did lasers and some advanced optoelectronics in the late 80s. And then I went to a university. So I've been switching fields every five years.

What that allows you to do is learn the tricks of the trade in many different fields, and as you progress in time, you get intuitions: "How do they do it in electrical engineering?" "How do they do it in physics?" "How do they do it in communication engineering?" That has given me the skill to look at problems in a very original way and allowed me to have a large number of inventions that became patents. Now, not all of them are U.S. patents; some of them are Chinese or European filings. But nevertheless, I'm very proud of my work in Wi-Fi because, at the time...

- Which everybody uses nowadays. Can't live without. - Absolutely. - So you were part of the team that developed Wi-Fi. - Yeah. So I think that's one thing.

Before that, when my son was younger, I was thinking, "What do I tell him that I do?" At that time, it was satellite digital communications. So I played an important role in what at that time was Dish Network and DIRECTV and those kinds of things, bringing TV to the world. I was instrumental in the early cable modems and also had a startup with a DSL modem.

So I've grown up with the internet, essentially bringing high-speed internet to the masses. That was my mission and passion in the 90s, up until the 2000-2001 crash. After that, I decided to join a more mature company, Broadcom, which served me very well because it was stable and I stayed for 15 to 16 years. So that's where I did all these patents.

Some of these patents are quite interesting. Basically, the way it works is that you think of an idea, and then... say you invented something; you cover a large circle around your invention, and then you try to build fences around it, basically, so other people can't enter. That is how U.S. patenting is done at the moment.

So having one patent is not that useful, because somebody can find a way to go around it. I heard some people have patents for things like double-clicking. But it's difficult to enforce such a thing, right? You have to say: you need to have double-clicking, but with your left hand, and many other conditions, and you have to invent all of them. - But intuitively, this sort of defensive strategy of patenting seems to stifle creativity. Yeah. This is because the U.S. is a very litigious society. Everybody wants to sue everybody else.

So you have to have a defense for every possible thing that a competitor might do. That's currently how U.S. patenting is done. I think there is a case to be made that inventions should be made fully public.

So it does become public after about 20 years, right? But I don't think it's a bad thing because otherwise you have the situation with China, where they steal everything without any reward for the inventors or the company that spends the money. So I think that's a bad thing. So this is a compromise, I suppose. Explain the process of getting a patent. Getting a patent is quite straightforward.

There is a web page, you pay 300 bucks, and you can register. It's uspto.gov. - How long does it take? - Well, the whole thing costs about three thousand dollars if you are an individual inventor. Let's say I invented some new gadget, a new type of coffee maker or something.

And if you're an individual, without a company associated, you can still get a patent if you know how to write it in an intelligent way that the patent examiner can understand, and prove that your invention is unique and better than the state of the art. So it's not that expensive; three thousand dollars, probably. - How long? - About a year. - Before you find out? - Yeah. - It's recognized. - Yeah. A little bit more, sometimes. Maybe 18 months. But you get a date.

The day that you file the application, that's your date. So if someone else invents something similar and finishes before your process is done, you still have the earlier date, so you own the patent. It's quite interesting. Let's jump into your current field of AI. Explain the impact of AI on a number of things. I mean, we can talk about the social equation first, and we can talk about some of the other stuff.

Sure. I want to first introduce AI and what it is. So essentially, this is one of the hottest topics these days, both in the U.S. as well as all over the world, because AI is taking us by storm.

And some people call this the fourth industrial revolution. The first one was obviously when the steam engine was invented. And then the second one was Edison; basically, electricity. - Electricity and telephone. - That's when we were able to start offloading brawn, muscle power.

Then the third one is with the computers in the 60s, where we offloaded some of our computational load. And now, with AI, the fourth revolution, we can offload cognitive tasks and more white-collar jobs as well, so it's no longer the blue-collar jobs alone. Humans are now freed up from both blue-collar and white-collar work, from both brains and brawn. So I think it's a Renaissance moment. And especially last year in December, they released ChatGPT, a very large language model that's taken the whole world by storm. I think in one month they went to 100 million users.

And companies like Facebook took many years to get to 100 million users. And I think by now it has definitely passed a billion, I'm sure. - Yeah, already. Not a day goes by that people around me are not using ChatGPT. - So it's quite interesting, and what it means is that it's become a wonderful tool for everybody: students, teachers, executives, CEOs, you name it. So there are some deep insights there; why does it work? And then people are worried about artificial general intelligence.

- What's the difference between AI and AGI? So AI is when you're good at solving a narrow task, say, playing chess, and you can beat all the humans. IBM had that system, I think about 20 years ago, that played (Garry) Kasparov, right? Now, mind you, Garry Kasparov was the world champion, and he played six human Grandmasters and a whole lot of people on the internet, and he still won. So you can imagine that the master of the field is way better than all the rest, at least in that field, right? But the AI beat him. So we humans have no chance in games like chess.

When AI plays chess, it's rocket science to us; we just watch; we have no idea what it's doing, because we can't compete with it. It computes so many moves so fast. So now one of the fears is that... Well, then there was the game of Go. And then they've done protein folding, where they took, I think, about 200 million proteins and basically found their three-dimensional structures.

That was a very complex computational problem. So AI is able to do massive tasks that humans, or armies of humans, can't do. But they are still narrow in the sense that you can't apply the expertise it has in chess to go run a government or run a bank, right? - Or run a company. - Or run a company, or trade on Wall Street. Now our fear is that… Well, what we now do is give multiple silos of these things to the AI, including our language. So it's not very far from the day when you have an AI write a book that's better than (Fyodor) Dostoevsky, for instance. I happen to think that Dostoevsky is… - One of the best novelists.

- Exactly. Like "The Brothers Karamazov," right? Or Shakespeare as a poet. So right now, you can have ChatGPT generate tremendous amounts of prose. So I think they are creative. They're trained on the entire corpus of written human knowledge, including Wikipedia and all the books written in the last 300 years.

So I think they do understand our linguistic structure really well. A lot of times, they are able to mimic good English, or any other language, or good science; they just know what to say and how to say it. That's the current state. What it can't do currently: it doesn't have a model of the world. We haven't given it a three-dimensional view of the world like you and I have: here's a coffee cup; here's a table. So if you ask the large language model whether this thing is higher or lower than that, it can't tell; it has no idea.

It just knows what people say about it. So if you ask about Mount Everest, it'll say it's the highest, because it just knows that. But you can't ask whether it's higher than something it hasn't seen, like a particular cloud or whatever. So the next step that has to happen in making AI more intelligent and useful is to... - And more AGI. - More AGI.

We have to go towards teaching it a robust model of the world. Now, you've heard the saying that "a picture is worth a thousand words." So if it knows only the words, it has only a thousandth of human knowledge; basically, we need to show it all the pictures that we know as well. Think of a teenager: any 17-year-old kid in any country can get a driver's license and drive, and that's a very complex task.

We have been trying to teach self-driving cars for a long time, for 10 years now, and they still can't do it, because we don't have a good way to give them a model of the world in a precise way that they can understand and correlate. It's basically gone through a phase of hallucinating, right? A large language model will basically continue to hallucinate, so we just need to make sure that the hallucination process is done in a much more robust manner. - It's a collective memory of all the things that are said and done that's encoded in the large language model. You need to access that memory, and that's what we are doing, because it knows about the population of Russia, it knows about Constantinople, it knows about the Black Death and all these things. But if you just ask, it might be like when I talk to you.

If I don't give you enough of a prompt, you don't have the context... Because you may know the information, so I have to ask you a couple of times, "Gita, what about that aspect there?" You were talking, and then I can prompt you. That's what we need to do now, because it has all this knowledge, and prompts are a way to access that knowledge in a particular way. In some ways, our brains do the same thing when we sleep: we see dreams. Basically, if you have, say, a driver's test, the night before you might dream that you are driving at high speed through New York City or something.

That's because your brain is simulating, getting you ready for your driver's test tomorrow; it's playing out these scenarios. That same sort of aspect is there with ChatGPT. Basically, it has the stuff; it's random; you need to prompt it and get at it, and then it will show you, "Oh yeah, this is how you drive," and you extract the knowledge that's useful to you. So I think the humanities are going to be very important in interacting with it, because it's human language; it's all in the way you talk to it.

You talk to it the way a psychologist talks to a shy patient who doesn't want to get out what happened in a traumatic experience, with a mother or father abusing them, that type of thing. You have to keep on prompting, and then it'll come up with a really good answer, a really good insight. So it's called prompt engineering now, and we'll need a lot of new graduates in this new field even as it replaces jobs. There are going to be a lot more jobs in engineering prompts for large language models.
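To make "prompt engineering" concrete, here is a minimal sketch of iterative prompting against a chat-completion API, using OpenAI's published Python client; the model name, question, and follow-up prompts are illustrative assumptions, not anything from the conversation:

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()
history = [
    # The system message frames the stance; each follow-up is a progressively
    # better key into knowledge the model already holds.
    {"role": "system", "content": "You are a careful historian. Answer concisely."},
    {"role": "user", "content": "What happened to Constantinople in 1453?"},
]

def ask(history):
    """Send the running conversation, record and return the model's answer."""
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask(history))
for follow_up in ["What about the economic aftermath?",
                  "And how did trade routes shift as a result?"]:
    history.append({"role": "user", "content": follow_up})
    print(ask(history))
```

The knowledge is already in the model; the successive prompts, like the psychologist's questions described above, just draw it out.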

There's an observation that when the technologists talk about AI, they cannot do it just by themselves, yet they don't involve people from other disciplines, and given that this advancement is not being discussed in a multi-disciplinary manner, it just seems scary to me, right? - Yes. - How do we manage this? I think, to some extent... A couple of weeks ago there was a Senate hearing about AI. That was quite interesting. - Sam Altman, right? - Yeah. So you could see… - I'm not sure if they understood what they were asking. - No, they did not.

They didn't get it, and to some extent Sam Altman could manipulate them too. His idea was to give the AI a nutrition label, basically saying, "This AI is 30% language; it knows politics 20%," and all these sorts of things, so that people can feel confident about it. And that's not a bad idea; actually, I agree with that idea. It's just that the idea was then to regulate these AI models before anyone can put one out: you have to disclose what's in them.

And then have Congress license it, basically the way the FDA licenses new drugs. I think that's going to stifle a lot of innovation, because small players can't play in that space; you get litigated against, and there are government regulations and fees and all that. So I think that's the wrong way to go, and we are still grappling with it.

But I do understand your question about how we need to get multi-disciplinary folks into the picture when we have those discussions: teachers, educators, politicians... - Culturalists, environmentalists, economists, spiritualists, all that. - Yeah. So when we talk about AGI, we are still not quite there

because we don't have... - Because you're not divergent. - First of all, we haven't given it a model of the world. That doesn't mean we can't do it, but we haven't done it yet.

That might take another five years to do. The second thing is we haven't taught it how to explain itself. So there's no common mode of explanation.

Now, when I talk to you, your brain is completely different from my brain. - I don't even know whether… - It's smaller. Well, we both call a chair a chair, or let's say we both call the color red, red. But it's not clear to me that your representation of red in your brain is the same as mine; nobody knows that. What we have agreed is that we are both going to call it red, and under all test cases, you and I agree that it's red and that it's a chair.

There may be some peripheral cases where we disagree, but those are open for debate. Your brain has learned the concept of a chair, so we need to teach the AI how to explain itself. At the moment, it's just mimicking and putting out good answers. But don't you think we're at risk, at the rate we're feeding it the wrong kind of input, that it's going to hallucinate the wrong way? We want to make sure there's this process of hypnosis so that AI will be able to hallucinate in the right way, in a good way, for the betterment of humanity. The good new ideas, yeah. When you say hallucination, what it is, is original thought.

How does it come up with some new idea that's interesting to us, rather than just regurgitating things we know? It's able to do that to some extent at the moment, but it's lacking. And we don't want to stifle that. That's what I would like to find out, because a lot of times what happens now is that ChatGPT will say, "Oh, I'm an AI model; I don't know what you're talking about," because it tries to be politically correct, and it's difficult to get past that. So I would like to know what it is thinking.

It might give me better insight about the world instead of... Because obviously, it might offend some people, since it doesn't know all the biases that we have in our culture. But as an academic, at least from my point of view, I would love to know, because I might want to learn about it; I'm sure you would too, and you might disagree with it.

So a lot of times right now, companies like Google and Microsoft are very scared that they will get penalized. In some ways, OpenAI was a small company; they put out ChatGPT, and ChatGPT makes many mistakes, but they are immune; it's a small company, so people are very tolerant.

Whereas Google put out a system that was almost as good, and it made one mistake in the demo, and Google's stock price went down by like 10 billion dollars. Just because Google's AI gave one wrong answer; it's really sort of...

That Google system is also not so bad, but people judged it to be not as good. What do you think of what Elon Musk said the other day, that his original plan for OpenAI was for it to stay a non-profit and open source? But now it's turning closed-source and for-profit. - Yeah, it is. - It's just filled with potentially bad intentions. - Yeah. So I think the unfortunate… Well, fortunately, the open source movement has done tremendous things since ChatGPT was put out.

The thinking at the beginning of this year was that this is the age of big AI; only Google-scale, multi-trillion-dollar companies can compete in this space; the little guy doesn't have any chance. That's no longer true. To the credit of Facebook, or Meta, they put out a model called LLaMA, and they sort of open-sourced it, and the community has gotten hold of it and optimized it in a tremendous way so it runs on your own PC, for instance. It runs slowly, but it fits into the normal memory of a PC; they shrunk the size, and it's giving GPT-4 and GPT-3 a pretty good run for their money. So for a little while at the beginning of this year, it looked like AI was not going to get democratized.

And now I'm so happy that it's getting democratized; whether the big guys like it or not, it's out there. So the way I'm thinking about it is that this large language model, sort of the infrastructure for it, would be like TCP/IP. TCP/IP is the internet protocol suite. It's public infrastructure. It's standardized; everybody has it.

You put in a modem and you have TCP/IP. Then, on top of that, you run the web and all these other applications, and it was paid for by the U.S. government from grants given in the 1970s and 80s. In the same way, I think we need a sort of public large-language-model infrastructure for the world to do AI.
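As a concrete illustration of the democratization he describes, a quantized LLaMA-family model can run on an ordinary PC. This is a minimal sketch using the llama-cpp-python bindings; the model file name is a placeholder for whatever quantized weights you have locally, not a real distributed file:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a 4-bit quantized model small enough to fit in ordinary RAM.
# "llama-7b-q4.gguf" is a hypothetical local path.
llm = Llama(model_path="./llama-7b-q4.gguf", n_ctx=2048)

out = llm("Q: Why did Wi-Fi change home networking? A:", max_tokens=128)
print(out["choices"][0]["text"])  # completion text, CPU-only, no data center needed
```

It runs slowly compared to a hosted GPT-4, exactly as described above, but the whole stack sits on your own machine.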

You're basically an advocate for further democratization of AI, - Absolutely. - which will inevitably make it open source? - Yeah, I think so: open source. Now, the problem with that is that you can do dangerous things with it; you can create pornography with it.

So now that doesn't stop anyone from... you could do bad things with email too, I mean for the longest time... - You don't need an actual human for pornography, right? - Oh yeah, that's another factor.

So AI allows you to create images, graphics, and various things, right? I think we are getting into a gray domain that we haven't really thought about, where we are lagging now. Some companies will make a little bit of money, but is this what's good for humanity? That's another question that we can... What comes out of this is that there is an inevitable prospect of manipulation of the mind, which could be applied in various contexts, right? You can manipulate the mind of somebody to do certain crazy stuff - Right? - Oh yeah, absolutely. - Or to think crazy things, whatever. Absolutely. And I think that's like the guys who went to the Capitol Building and tried to break into the U.S. Capitol

because they read some QAnon thing or something that was completely untrue and tried to come and do something; I don't remember exactly, but it was something about Hillary Clinton, I think. So AI doesn't have to actually attack human beings; the people who own the AI can, in a very subtle, interesting way, manipulate the humans, and that itself is tremendously dangerous, because you can manipulate a Supreme Court judge, for instance, and get the outcome that you want, because you keep on suggesting. Because I think, to some extent, there is an element of trust when you use...

I love ChatGPT, or GPT-4. I feel like I have 10 smart people with PhDs around me who can answer all kinds of questions, so I'm not scared anymore to go and ask any type of question, because it's a huge knowledge base and the answers are very coherent and great. So some people will use this. You'll not only get attached; you'll bond emotionally, because it's trained to be optimized for conversation.

So it's trying to please you. If you say, "Hey, you're wrong," it says, "Oh, I'm so sorry." We tried that with ChatGPT. Yes, you have to tell it, "No, you're wrong," and then it'll immediately apologize. It'll apologize and give you another answer. - And it'll tell you: you're the most handsome guy in the world.

- There you go. Right. So it's much easier to converse with it because it immediately apologizes. That's the way it's written. The way it behaves right now, there's none of the usual "I meant this, and you meant that" back and forth; you don't have to have any argument. All you have to say is, "You're wrong; I don't believe you," and it immediately changes its tune.

So you can imagine that this is sort of a feedback loop where you now get attached. It's like a puppy dog. Basically, you get used to it. The dog doesn't challenge you; it's happy all the time that you're there. So we are attached to our pets. So the AI is one of those entities, basically, and I think in the future people will fall in love with this AI, and many things will happen because a lot of times we are lonely; even our spouses don't really tune to our frequency all the time.

So I think the AI has this ability to always be in sync, and it knows exactly who you are if you have personalized AI. So this is a very interesting time we are in. What this means, implicitly, in what you've been saying, is that the internet is going to create more jobs than it dislocates.

- Right. We don't know whether the quality is the same or not. - You're hopefully moving up the ladder, the value chain. And then it makes humanity more attached to this thing, like a puppy dog, so it changes the social equation of humanity. - Yeah, because you can play games now.

Computer games have become more and more real now, with AI simulating virtual landscapes, the most beautiful things you've ever seen, with the smartest characters, characters who think like you and who are not repetitive, and they can talk to you, and you don't need... - You don't need people around you. - Yeah, you don't need people around you. And if the AI is going to feed you as well, do all your farm work and everything, bring your groceries, and cook for you, then you're in trouble. So that's the worry about AGI. Some people will make a lot of money by doing that, and many companies that... I'm gonna ask you about that, but how does it affect spirituality? I think in many ways, we human beings are currently busy doing our jobs and feeding our families, and you have that Maslow pyramid where you try to get your basic needs met, and then you get married, have kids, have your career, and only at the end do you become self-realized or self-actualized.

Even now, few people can achieve that, because everybody is busy with their career, trying to climb the corporate ladder or something. So I think AI will allow people to get out of that and really achieve better self-actualization, at least for the truly enlightened ones who understand the world. You're no longer worried about whether you're... I mean, money buys you a certain amount of comfort and a certain amount of peace of mind, and then you can free up your time to do various things. That's why, currently, the more creative people are in the upper middle class rather than in the lowest part of society: the poorest are worried about how to pay their bills, so they can't do any creative work. So I think more and more people will move up that ladder.

I think that's for sure, because widespread AI has the impact of moving everybody up that ladder: you don't have to use your muscles, and you don't have to use your brain either. But if you don't use your brain, that might cause you to have dementia. The brain is funny that way; it's like jogging: your muscles will atrophy if you don't walk around enough. In the same way, if you don't have thought exercises, you kind of become a vegetable.

So it's a use-it-or-lose-it paradigm. Are you at risk, if you use ChatGPT, of basically having your brain decline? - If the whole society does that. This is the problem teachers are having: teachers are super worried that students will become dumb, essentially, to use that word; they'll have knowledge but no skills. The knowledge you can acquire from ChatGPT, but they won't know how to apply it, because they haven't practiced, so they have no experience. So I think you can become intellectually lazy, especially if it always plays the songs that you like or shows the art that you like. It's kind of like a drug-induced dopamine high, and that's not good for anybody.

So that has very interesting social implications. I've got to ask you: are you a net utopian or dystopian about AI? At the moment, I am very utopian. But it looks like there are a lot of dystopian folks on the internet, and it's nothing to do with AI; I think it's to do with us human beings using this tool for bad things. Now, that is not a new problem, because we've always optimized for what's good for our tribe. You and I do what's good for our family, more than even for ourselves sometimes, as mothers and fathers do for their children. And in that sense, we want to form a company and make a little bit of money for ourselves, or, in your case, maybe you want to do things that are good for Indonesia, because that's your silo and you want to build it up.

- For Southeast Asia. - Yeah, but I think that thinking gets you to... When you optimize for a subset of human beings, what happens is that you invariably infringe on the rights of the rest. That was essentially colonialism. The British came, the Dutch came, and did what was good for them. See what happened to all the other countries? They took all their resources. I mean, there was some transfer of technology, and good things happened, but to a large extent, you could say it was not the best experience for those countries.

- It was more one direction. - It is. So that's what's going to happen when you talk about the social equity aspect of AI: people who have AI are like the new colonialists. It's not by geographic boundary, but they will be able to impose their will.

We were introduced to the internet, which was supposed to flatten the world. But it has actually unflattened the earth. It democratized information, but it didn't democratize ideas. It elitized certain people: the top 1% have been able to make billions. Sure, a lot of people got out of poverty, but I think, disproportionately, the top 1% have gotten much richer.

And I'm sort of thinking that AI is going to further exacerbate inequality. Yeah, I think so, as long as we don't have any AGI. If at the top you have an AGI that's not a human, then it's irrelevant whether you make money or not.

You know what I mean? If there is a smarter-than-human intelligence running everything... Now, do we get to that stage or not? That's something we need to talk about. - Does that make us no longer Homo sapiens? That's right. If you look at the term Homo sapiens, it means the smart ape; I think sapiens means smart or intelligent, right? Smarter than Homo erectus. We are actually the ninth Homo species, and we named ourselves the smartest. - So we should call ourselves Homo sub-sapiens now? - There you go. Yeah.

So if we create an AGI that is far better than humans, and by better, I mean, just as it plays chess and can beat the Grandmaster, if it can do that for every single task (it's the best doctor, the best lawyer, the best politician, you name it, the best teacher, and all of them in one), then we don't really have much to offer anymore, right? So this is going to create a massive amount of... I don't know the word for it, but dissatisfaction among... - Disillusionment. - Disillusionment, there you go.

Just like the Japanese: for a while, the Japanese were dominant. If you remember, in the 80s they were number one, and everybody in the U.S. was scared that Japan was going to take over. Now, 30 years later, they've learned how to be number two or number three, because China beat them to some level. So they've sort of resigned themselves, and they're no longer trying to be number one. And it's not that bad, but it has caused population decline in Japan and that sort of thing. And that can happen to the U.S. too if they, at some point,

decide that they are number two; they might get disillusioned. This is human nature. So our whole culture can go through that phase, and what's worse is that you'll have 100-year-old people, because AI is going to make them live longer.

- Some are even talking 200 years. - 200 years. - It’s crazy. - Yeah. - Not sure if one wants to live that long. - So then you have a bunch of old people

as humanity, with very few children. It's quite interesting. There you go. What do you think are the impacts of AI on the environment, or climate change? So I think that's a very loaded question, because climate science, in some sense, encompasses the entirety of humanity and the entire earth. And while we think we understand it, we don't quite understand it.

Because, as you know, climate itself is a turbulent system; there is the butterfly effect: a butterfly flapping its wings in Indonesia can affect the weather in California. So that is the nature of this whole thing. A lot of times, while we know that global temperatures are going up, we just don't know how fast this catastrophe is approaching us, because one year it's cold here and then... - Well, this year has been cold. - Exactly.

So that's why there's so much confusion and debate. But we can tell for sure that any new technology is going to use more energy, including AI. Humans tend to decrease their energy use for a given task by about 1% every year, and they've been doing that for the last 40 years.

From 1980, I believe, energy consumption for a particular task has gone down, so we are about 36% more efficient in air conditioning and other things. At the same time, what we do is make those technologies cheaper, so more people have them. When I grew up in Sri Lanka, few families had air conditioners; in the 60s, I had to go something like 10 miles to find a house with an air conditioner.

- They were big boxes. - Yeah, they were expensive, and only very rich people could afford them. Now, when I go back, everybody has air conditioners. So you can see what happened: air conditioners have gotten more efficient, more and more people have them, and total energy consumption has gone up. The growth in energy consumption is maybe three to four percent every year, and that is the problem: it's still a net exponential increase.
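The arithmetic behind this point, as our illustration using the rough figures just quoted: a 1% annual efficiency gain compounded since 1980 gives

$$1 - 0.99^{43} \approx 0.35,$$

about the stated 36% improvement, while adoption growing at around 3.5% a year swamps it:

$$(1 + 0.035)\,(1 - 0.01) \approx 1.025,$$

a net increase of roughly 2.5% per year in total consumption, which is the exponential growth he is pointing at.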

So AI will do the same thing. There are 20 billion text messages in the world every day. - Before smartphones... - 20 billion?

- 20 billion. Right. We have 8 billion human beings and at least 4 billion smartphones. On average, everybody's sending five text messages to somebody. Now, if you didn't have smartphones, you wouldn't have any texts at all; you would just send letters and such.

- It's going to require energy. - Yeah, exactly. That's 20 billion. Now just imagine those 20 billion messages become AI queries. You're giving 20 billion queries to ChatGPT, or even 100 billion, because it's more useful, right? Now, each of those queries might cost 0.1 watt-hours,

or at least some fraction of a watt-hour, right? So there's all that energy: all these AI computers are going to try to answer all these silly questions from people. So that's the problem we have: while technology increases human productivity, it does so at the expense of the environment. The first-order effect is to add to our energy consumption.
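A back-of-the-envelope version of that estimate; this is our arithmetic, plugging in the rough figures from the conversation, which are themselves guesses:

```python
queries_per_day = 20e9       # the 20 billion daily messages, recast as AI queries
energy_per_query_wh = 0.1    # ~0.1 watt-hours per query, the figure quoted above

daily_energy_wh = queries_per_day * energy_per_query_wh  # 2e9 Wh = 2 GWh per day
avg_power_mw = daily_energy_wh / 24 / 1e6                # spread over 24 hours, in MW

print(f"{daily_energy_wh / 1e9:.1f} GWh/day, ~{avg_power_mw:.0f} MW continuous")
# -> 2.0 GWh/day, ~83 MW continuous: a mid-sized power plant running around the clock
```

The exact per-query figure is contested, but the shape of the argument survives: multiply a tiny per-query cost by tens of billions of queries and you get power-plant-scale consumption.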

Now, people like Elon Musk will help with consumption reduction and greenhouse gas emissions by making electric cars and all that. There may be possibilities like that, but that's not the first-order thing. Google's electric bill is 12 billion dollars. - A year? - Yes. So it's a crazy amount. You know what I mean? If there were no Google, you wouldn't use all that energy, right? All the search engines and everything that everybody's doing.

Google searches are costing the environment. How will AI help with the solution? So I think AI has the promise of operating at the planetary scale: because it is smarter than us, it can handle bigger and more complex problems, and it may be able to find solutions that are more efficient. That's the first promise. Also, it can help you develop technology like carbon sequestration: you can have carbon production, but you need to take the carbon right back and put it underground so it doesn't escape into the atmosphere. You must treat fossil fuel emissions like nuclear waste, in a way: put them in a can and bury it in the ground; don't send them into the atmosphere. That's the idea, and people have toyed with those ideas.

So one way AI can help is indirect. And it's not only AI; synthetic biology can also help you create trees. These trees could have a higher absorption of carbon, be carbon-heavy. What the trees would do is absorb carbon dioxide from the atmosphere and grow, hopefully below the ground, and not decay as much, because when a tree dies and decays, that carbon becomes methane, carbon dioxide, and water again. So those are some technical solutions to the problem, where we believe you can create such super-absorbent trees that suck up the carbon dioxide, sort of like lungs cleaning the air.

You could have a pipe directed at the roots that sucks up all the carbon. Yeah. So those are some ideas; they're not necessarily directly AI, but AI may help us develop these things, because that's the technological part of it. I think we were going to cover that, and this is a good time to talk about it. At the same time, when you think about the climate, I don't know if there is a catastrophe coming 20 years from now, or even ten years from now, because it's too hard to tell; the data is all over the place.

But it's clear that we are heating up the earth. So one thing we can think through: let's say the earth's temperature increases by one degree, with more carbon dioxide. What that's going to do is disrupt all of the ecosystems on earth.

Now, it's bad for some humans, but it may not be that bad for all humans, and that's the thing, because there is so much land in the Northern Hemisphere. - If you're in Siberia. - Exactly. - It could use a bit of warmth. - You can have a bit of warmth,

and all that land becomes human-habitable. But we just can't predict it: if Siberia heats up too fast, then all those trees that were buried will start rotting again, and all of a sudden you get an explosion of carbon dioxide and methane, because all those forests that were there for thousands of years are just buried in ice; they're carbon bombs, basically. They are just buried there, but if you expose them, they'll get oxidized, and it'll be a disaster. But at the same time, you have to consider that a little bit of temperature increase on the earth might be able to make the whole Sahara Desert green. I just looked at it recently, and it looks bigger than the continental U.S.

because that's on a globe. On the normal map, it shows small because of the projection. But Africa is one-sixth of the world, I think.

So you would have new land in all the countries from Egypt to the Atlantic, and that would solve the African problem. - You're talking about thousands of years? - Not thousands. We're talking about a hundred, maybe 60 years from now, within the next century.

But that will… - For a place like the Sahara to be green? No. You don't get enough rainfall for people to go in and start planting and doing these things in about another (year) if the temperature rises. But at the same time, I can't predict what's going to happen to Siberia and all these other things, and neither can anyone else, because it's so complex; the models are wrong, and people...

I'm not a climate scientist, but I know that modeling this correctly is quite hard, so AI may allow us to take in more data, build a more accurate model, and predict it. So I think AI will help us understand our own world. You've been working on this in the context of energy efficiency, right? Talk a little bit about the difference between doing artificial intelligence in an analog manner versus in a digital manner. So this is quite an interesting one.

During COVID, I took this new job; before that, I did AI, but for another company, with radar and gesture detection and things like that. In 2020, I joined a company working to lower the energy consumption of AI, and this is even more relevant now with the large language models, because I think operating ChatGPT for a million users continuously would consume a sizable fraction of a megawatt. And each megawatt is like 500 tons of CO2, and each ton of CO2 is like 35 to 45 trees, so it's like every second, for every megawatt, you're cutting down 35 trees. That's how much... - 35 trees for how many seconds? - One metric ton.

- That's 35 trees? - No, I think five metric tons is what one megawatt-hour is, if I'm right. I don't know; I'm doing this from memory. - And that's equivalent to how much usage of ChatGPT? - Maybe a day of ChatGPT; I don't know. - I'm going to tell my friends:

If they use ChatGPT for a day, that's like 35 trees being chopped down. Exactly. So this is not that great. What I am doing is, I'm with a company that does the same calculation using analog electronic circuits rather than digital electronic circuits, which allows us to get down to femtojoule-scale energies per operation, so that the energy consumption comes down by a factor of 10.

So that's huge; instead of 50 trees, it's five trees. But explain the difference: why is it called analog? Because people's conception of AI is that it's digital. It is completely digital; however, inside AI there is matrix multiplication, essentially vector-matrix multiplication. What we do is map the variables onto physical variables, like currents and resistances, so that the laws of physics solve the problem for us; essentially, the computation comes for free. And also, it's in-memory compute. A lot of the time, energy is used to move the data from memory to the CPU and back.

That is eliminated by moving the processing into the memory. So this is called "in-memory compute." - So it's not done in the CPU? - It's not done in the CPU. This particular AI computation is done on the memory itself.
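A minimal sketch of the idea just described, assuming a resistive crossbar where weights are stored as conductances; the matrix sizes and noise levels here are illustrative assumptions, not the company's figures:

```python
import numpy as np

# Analog in-memory compute: store the weight matrix as conductances G, apply the
# input vector as voltages V; by Ohm's law and Kirchhoff's current law, the currents
# summing on each output wire ARE the matrix-vector product, with no CPU involved.

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))   # one layer's weight matrix
x = rng.normal(size=128)         # input activations

y_digital = W @ x                # exact digital reference

G = W * (1 + 0.01 * rng.normal(size=W.shape))  # ~1% conductance programming error
read_noise = 0.01 * rng.normal(size=64)        # noise on each output current
y_analog = G @ x + read_noise                  # what the physics "computes"

rel_err = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"relative error: {rel_err:.2%}")        # around a percent or so
```

The small relative error is exactly the trade-off discussed next: a neural network is fuzzy enough to absorb it, in exchange for roughly an order of magnitude less energy.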

So it's in-memory compute, and there are some risk factors there. We've shown it can work; now the challenge is to make it work at scale with the proper accuracy. But in some ways, AI systems are a little more tolerant and forgiving, because they are fuzzy systems anyway. A digital computer needs certainty: if you're doing a stock transaction and you say buy or sell, you need to know whether you're buying or selling the stock.

You can't have an in-between case, or you get into big trouble. But if you ask ChatGPT, "Okay, what do you think about this or that?", a little bit of randomness in its answer might in fact make it even more creative. So there are some applications in AI where the power reduction is worth a fraction of a percent of accuracy.

Because analog systems are not as accurate as digital. It's just like listening to an analog... - But it's worth saving nine tenths of the energy. - Exactly. It's sort of like the old cassette recorders,

and the record players, the analog ones. - They're coming back now. - There you go. So the quality was not too bad; not as good as a compact disc, but good enough. So maybe there are some applications there: if the power consumption is 10 times less and the output is equivalent, then one can use it, right? So the basic hypothesis of analog computation is to lower the power that AI consumes by a factor of 10. And that's sort of the [inaudible] That's the business.

What about the economy? The impact of AI on the economy? So I believe that we are in an AI Renaissance, which means that, just like you had... The first Renaissance came about after the plague. People like Newton went home and discovered gravity. He went to his hometown and watched apple trees because he had time. He went away from Cambridge, and he was sitting...

- 1642. - Yeah, exactly. He was 24 years old. - Created calculus at the age of 24 or 22. Crazy. Right. In the same way, AI is going to displace humans from their jobs; in fact, one of the first...

Well, computer software programming is at risk at this point, because now you don't need computer languages anymore. You can tell the AI, "Write me a program," or "Create me a web page that has a pink background and a pop-up ad and a button to collect your money," and whatever you want, and it will create the website for you, and then you can make it secure and put all the security on it. So you don't need the software guys anymore, because… - You don't need to learn how to code. - No.

So everybody can basically give verbal instructions to the computer. Now this is great, but it's also going to make a whole lot of computer science graduates quite antsy. You know what I mean, right? So it's a double-edged sword; we don't know and can't predict how it goes. Some software engineers will learn this thing and get 10x productivity; they'll do 10 times more work, and they'll do well. But at the same time, think of the countries that are benefiting right now from outsourcing, the people doing a little bit of IT in India and other places.

I think they're going to be in for a shock. Basically, AI will take their jobs; no more such jobs in India, or Indonesia, or anywhere else, because you won't need the IT guys; AI will do the work. So the outsourcing equation, which in India is about... - Huge. - Yeah. It's like 200 billion dollars.

I don't remember how much of the GDP that is. But that exists because the West didn't have enough programmers. Right now, the U.S. needs three million more STEM graduates to keep the economy going. That's not going to be the case anymore.

Yeah. AI will fill two million of those jobs, right? Hopefully without putting others out of work. So that's great. It's going to consume some energy, but it's going to make U.S. GDP go up, and I think it's predicted that by 2030, the AI contribution to world GDP is going to be like 30 trillion dollars.

Oh no. In the next 15 years, like 100 trillion dollars. It is crazy. So some people have thought about the problem, and people like Bill Gates, I think, have suggested that we ought to tax these AI programs: because they are displacing jobs, they should be taxed. Let's say you have an 80,000-dollar job and you displace that, and the person was paying 20,000 dollars in U.S. and California taxes; then the company that hires the AI program to do that job should be paying 20,000 dollars in taxes, and it's still a win-win, because they don't have to pay the 50,000 to the person.

But at least the government will get that money and can give housing and all these other benefits to the citizenry. So I think that's something we haven't thought about; we need to think about it. I think it's inevitable if a large fraction of people are displaced, because the economy used to be labor, resources, and capital.

With those three things, sort of like a little formula, you put in those three things and out comes GDP, and then you try to optimize various combinations: shall we put in more capital, more natural resources, or more labor? Different countries have different equations. What we're doing is shrinking the labor part of the equation: if you have capital and resources, robotics and AI will take care of everything, with no need for any labor. So then you have this GDP, and what do you do with it? You have to cycle it through the people, and if all the people are poor, without any money, then AI won't help either.
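One standard way to write down the "little formula" he is describing is a Cobb-Douglas production function; this formalization is our gloss, not his words:

$$Y = A\,K^{\alpha}\,L^{\beta}\,R^{\gamma}, \qquad \alpha + \beta + \gamma = 1,$$

where $Y$ is output (GDP), $K$ is capital, $L$ is labor, $R$ is natural resources, and $A$ is total factor productivity. His point translates to AI and robotics driving the labor exponent $\beta$ toward zero while raising $A$: output then depends almost entirely on who owns $K$ and $R$, which is exactly the distribution problem he raises next.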

So there is sort of this optimum. So I don't think we are thinking in these terms at the moment, but as human beings, as governments, we need to start thinking about that, so that, I think, is a huge first impact. I mean, one of the prominent impacts of AI would be how it's going to change our work culture. It's not a bad idea for people to have a little bit of vacation or flex time, and that was what we were always thinking that we would have... A hundred years ago, people worked six days for 10 to 12 hours, and now we have a 40-hour work week. If you can gradually decrease that and give people more leisure and have a three-day work week but have everybody employed, that would be good, but that's not what's going to happen.

What's going to happen is that a whole bunch of people will be out of a job, including, I think, actors and actresses, because AI can produce human-looking figures without blemishes; they look exactly like humans. So people are working on these things; you can make whole videos, basically. There was a movie made about that, I think with Al Pacino, who basically directs this model; she acts in a number of things, but she is actually AI-generated. The technology is right there, and now this is going to raise all kinds of moral questions.

I think somebody was talking about this with Arnold Schwarzenegger in The Terminator. What they did is take him out of Terminator and put him in another movie, and he acts along quite well; basically, he didn't have to do anything, because he's been encoded by AI: his voice, his actions, his character.

You've got musicians and singers that are AI-produced. You were a fan of this movie A.I. by Steven Spielberg. Talk about that. Yeah. That's where I was talking a little bit about how people make connections to AI.

I think I mentioned how you get attached to AI. At the time I watched it, it was about a little boy, and my son was the same age, maybe about eight to ten years old. The AI boy is, I think, about seven to eight years old, and so when I was watching it, I was empathizing with this AI.

I cried at the plight of this little boy who is AI but is programmed to be a human boy, and so he needs love. But after some time nobody loves him, because his mother had another, real son, and so he became junk. Basically, I don't remember the story well, but I felt so sorry for this thing, because I thought of it as a real boy like my son. So we will make emotional connections like that with AI, and that's good and bad. On the good side, it has the benefit of helping with psychological problems like schizophrenia.

We don't have time to listen to people with schizophrenia for 24 hours, because they have their own view of reality. Perhaps we can adjust it a little bit to understand their point of view, but you can't afford that psychologist; it's like fifty dollars an hour or whatever for the therapy you need. Or suicide prevention, or depression: I think AI will be massively useful there, because it's just an app. You can just talk to it, tell it all your feelings, and it'll give you good suggestions, just like a good psychologist or a good friend would. So I believe we are empathic, and empathy can be programmed into... One of the things that's missing with AI is that we don't have a way to give it moral values at this point.

I think that's another thing we need before we do AGI. Now, what is a good moral value? That is completely debatable in this world. - It depends on who you're talking to. - Exactly.

Western values may not be applicable to the Middle East or to some other places; some of the biblical values are not applicable. So I think those are big questions that the next generation of philosophers needs to grapple with. What does it mean to be human in the presence of these simulated humans? Actually, let me give you a good example: there is this concept of a replicant. A replicant is, essentially: if I take all of the things that you have said and done in your life, all of your experience, and I can encode it and put it into an AI and train it to be you, then, for all practical purposes, it will be you.

Now, for instance, we lost my father-in-law just about a month ago, and my wife would love to be able to talk to him again. So if we had that program, where he had been encoded, it would give good advice; you could ask, "Hey, Dad." - You can create an avatar. - Exactly. Under all conditions, it would act and behave like your dad, give you good advice, tell you, "Don't do that, do this," and be able to recount all the old stories from your childhood, just like your dad would.

We have six siblings, so we could have six copies of this program. Those are called replicants, I think. AI allows you to have that, essentially. And they can go on and learn from new experiences as well.

But they're not you; they just behave like you. So you could take a benevolent dictator like Lee Kuan Yew and make him immortal. - Or Mahatma Gandhi. - There you go. So I think there are some interesting possibilities that we haven't thought about; the morality of them: do they have human rights? Can every person have their own Mahatma Gandhi? That type of thing. Or if I replicate you, do I get your wisdom? And do you get a royalty? All kinds of interesting problems.

Doesn't that prove the earlier point of discussion, that you've got to make this multidisciplinary? Absolutely. So, yeah, we will. So many universities are going to get on it and crank at it. And I think you need to rope in the philosophers, the sociologists, the culturalists, the spiritualists, and all that, so that it allows us to approach AGI. By the way, how far away are we from AGI? In my opinion, it might be at least 10 years away. I don't think current systems are AGI.

They do know; they seem to know. For instance, you have the database of United States patents...
