Dr. Lex Fridman: Machines, Creativity & Love | Huberman Lab Podcast #29


[bright music] - Welcome to the "Huberman Lab Podcast," where we discuss science and science-based tools for everyday life. I'm Andrew Huberman, and I'm a Professor of Neurobiology and Ophthalmology at Stanford School of Medicine. Today I have the pleasure of introducing Dr. Lex Fridman as our guest on the "Huberman Lab Podcast." Dr. Fridman is a researcher at MIT specializing

in machine learning, artificial intelligence, and human-robot interaction. I must say that the conversation with Lex was, without question, one of the most fascinating conversations that I've ever had, not just in my career, but in my lifetime. I knew that Lex worked on these topics. And I think many of you are probably familiar with Lex and his interest in these topics from his incredible podcast, the "Lex Fridman Podcast."

If you're not already watching that podcast, please subscribe to it. It is absolutely fantastic. But in holding this conversation with Lex, I realized something far more important. He revealed to us a bit of his dream. His dream about humans and robots, about humans and machines, and about how those interactions can change the way that we perceive ourselves and the way that we interact with the world. We discuss relationships of all kinds, relationships with animals, relationships with friends, relationships with family, and romantic relationships.

And we discuss relationships with machines. Machines that move and machines that don't move, and machines that come to understand us in ways that we could never understand ourselves, and how those machines can educate us about ourselves. Before this conversation, I had no concept of the ways in which machines could inform me or anyone about ourselves. By the end, I was absolutely taken with the idea, and I'm still taken with the idea, that interactions with machines of a very particular kind, a kind that Lex understands and wants to bring to the world, can not only transform the self, but may very well transform humanity. So whether or not you're familiar with Dr. Lex Fridman,

I'm certain you're going to learn a tremendous amount from him during the course of our discussion, and that it will transform the way that you think about yourself and about the world. Before we begin, I want to mention that this podcast is separate from my teaching and research roles at Stanford. It is, however, part of my desire and effort to bring zero-cost-to-consumer information about science and science-related tools to the general public. In keeping with that theme, I'd like to thank the sponsors of today's podcast. Our first sponsor is ROKA.

ROKA makes sunglasses and eyeglasses that are of absolutely phenomenal quality. The company was founded by two All-American swimmers from Stanford, and everything about the sunglasses and eyeglasses they've designed was made with performance in mind. I've spent a career working on the visual system.

And one of the fundamental issues that your visual system has to deal with is how to adjust what you see when it gets darker or brighter in your environment. With ROKA sunglasses and eyeglasses, whether it's dim in the room or outside, whether there's cloud cover, or whether you walk into a shadow, you can always see the world with absolute clarity. And that just tells me that they really understand the way that the visual system works: processes like habituation and attenuation.

All these things that work at a real mechanistic level have been built into these glasses. In addition, the glasses are very lightweight. You don't even notice really that they're on your face. And the quality of the lenses is terrific. Now, the glasses were also designed so that you could use them, not just while working or at dinner, et cetera, but while exercising.

They don't fall off your face or slip off your face if you're sweating. And as I mentioned, they're extremely lightweight. So you can use them while running, you can use them while cycling and so forth. Also the aesthetic of ROKA glasses is terrific.

Unlike a lot of performance glasses out there, which frankly make people look like cyborgs, these glasses look great. You can wear them out to dinner, you can wear them for essentially any occasion. If you'd like to try ROKA glasses, you can go to roka.com. That's R-O-K-A.com, and enter the code Huberman

to save 20% off your first order. That's ROKA, R-O-K-A.com, and enter the code Huberman at checkout. Today's episode is also brought to us by InsideTracker.

InsideTracker is a personalized nutrition platform that analyzes data from your blood and DNA to help you better understand your body and help you reach your health goals. I'm a big believer in getting regular blood work done, for the simple reason that many of the factors that impact our immediate and long-term health can only be assessed from a quality blood test. And now, with the advent of quality DNA tests, we can also get insight into some of the genetic underpinnings of our current and long-term health.

The problem with a lot of blood and DNA tests out there, however, is you get the data back and you don't know what to do with those data. You see that certain things are high or certain things are low, but you really don't know what the actionable items are, what to do with all that information. With InsideTracker, they make it very easy to act in the appropriate ways on the information that you get back from those blood and DNA tests.

And that's through the use of their online platform. They have a really easy-to-use dashboard that tells you what sorts of things can bring the numbers for your metabolic factors, endocrine factors, et cetera, into the ranges that you want and need for immediate and long-term health. In fact, I know one individual, just by way of example, who was feeling good but decided to go with an InsideTracker test and discovered that they had high levels of what's called C-reactive protein.

They would have never detected that otherwise. C-reactive protein is associated with a number of deleterious health conditions, some heart issues, eye issues, et cetera. And so they were able to take immediate action to try and resolve those CRP levels. And so with InsideTracker, you get that sort of insight.

And as I mentioned before, without a blood or DNA test, there's no way you're going to get that sort of insight until symptoms start to show up. If you'd like to try InsideTracker, you can go to insidetracker.com/huberman to get 25% off any of InsideTracker's plans. You just use the code Huberman at checkout. That's insidetracker.com/huberman

to get 25% off any of InsideTracker's plans. Today's podcast is brought to us by Athletic Greens. Athletic Greens is an all-in-one vitamin, mineral, and probiotic drink. I started taking Athletic Greens way back in 2012. And so I'm delighted that they're sponsoring the podcast.

The reason I started taking Athletic Greens, and the reason I still take Athletic Greens, is that it covers all of my vitamin, mineral, and probiotic bases. In fact, when people ask me, what should I take? I always suggest that the first supplement people take is Athletic Greens, for the simple reason that the things it contains cover your bases for metabolic health, endocrine health, and all sorts of other systems in the body. And the inclusion of probiotics is essential for a healthy gut microbiome. There are now tons of data showing that we have neurons in our gut, and keeping those neurons healthy requires that they are exposed to what are called the correct microbiota, little microorganisms that live in our gut and keep us healthy. And those neurons in turn help keep our brain healthy. They influence things like mood, our ability to focus, and many, many other factors related to health.

Athletic Greens is also terrific because it tastes really good. I drink it once or twice a day. I mix mine with water and I add a little lemon juice, or sometimes a little bit of lime juice. If you want to try Athletic Greens, you can go to athleticgreens.com/huberman.

And if you do that, you can claim their special offer. They're giving away five free travel packs, little packs that make it easy to mix up Athletic Greens while you're on the road. And they'll give you a year's supply of vitamin D3 and K2. Again, go to athleticgreens.com/huberman

to claim that special offer. And now, my conversation with Dr. Lex Fridman. - We meet again. - We meet again. Thanks so much for sitting down with me. I have a question that I think is on a lot of people's minds, or ought to be on a lot of people's minds, because we hear these terms a lot these days, but I think most people, including most scientists and including me, don't really know what artificial intelligence is, and how it's different from things like machine learning and robotics. So, if you would be so kind as to explain to us, what is artificial intelligence, and what is machine learning? - Well, I think that question is as complicated and as fascinating as the question of, what is intelligence? So, I think of artificial intelligence, first, as a big philosophical thing.

Pamela McCorduck said AI was the ancient wish to forge the gods, or was born as an ancient wish to forge the gods. So I think at the big philosophical level, it's our longing to create other intelligent systems. Perhaps systems more powerful than us. At the more narrow level, I think it's also a set of tools, computational, mathematical tools, to automate different tasks. And then also it's our attempt to understand our own mind. So, build systems that exhibit some intelligent behavior in order to understand what intelligence is in our own selves.

So all of those things are true. Of course, what AI really means as a community, as a set of researchers and engineers, is a set of tools, a set of computational techniques that allow you to solve various problems. There's a long history that approaches the problem from different perspectives. One of the threads that's always been there, one of the communities, goes under the flag of machine learning, which is emphasizing, in the AI space, the task of learning. How do you make a machine that knows very little in the beginning follow some kind of process and learn to become better and better at a particular task? What's been most effective over roughly the past 15 years is a set of techniques that fall under the flag of deep learning, which utilize neural networks.

Neural networks are these fascinating things inspired by the structure of the human brain, very loosely, but it's a network of these little basic computational units called neurons, artificial neurons. These architectures have an input and an output. They know nothing in the beginning, and they're tasked with learning something interesting. What that something interesting is usually involves a particular task. There's a lot of ways to talk about this and break this down. Like, one of them is how much human supervision is required to teach this thing.
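
[Editor's note: to make the structure described above concrete, here is a minimal sketch of an artificial neuron and a tiny two-layer network in plain Python/NumPy. The function names and layer sizes are illustrative assumptions, not from any particular library.]

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a nonlinearity (here, ReLU)."""
    return max(0.0, float(np.dot(w, x) + b))

def tiny_network(x, W1, b1, W2, b2):
    """A two-layer network: input -> hidden layer of neurons -> one output.
    It 'knows nothing' until the weights are learned from data."""
    hidden = np.maximum(0.0, W1 @ x + b1)  # a layer of artificial neurons
    return float(W2 @ hidden + b2)         # the output unit

rng = np.random.default_rng(0)
x = rng.normal(size=4)                        # a 4-dimensional input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8) # hidden layer weights
W2, b2 = rng.normal(size=8), 0.0              # output weights
print(neuron(x, W1[0], b1[0]))                # one neuron's activation
print(tiny_network(x, W1, b1, W2, b2))        # untrained, so the output is arbitrary
```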

So supervised learning is a broad category: the neural network knows nothing in the beginning, and then it's given a bunch of examples. In computer vision, that would be examples of cats, dogs, cars, traffic signs. You're given the image and you're given the ground truth of what's in that image. And when you have a large database of such image examples where you know the truth, the neural network is able to learn by example; that's called supervised learning. There's a lot of fascinating questions within that, which is, how do you provide the truth? When you're given an image of a cat, how do you provide to the computer the fact that this image contains a cat? Do you just say the entire image is a picture of a cat? Do you do what's very commonly been done, which is a bounding box: you have a very crude box around the cat's face saying, this is a cat? Do you do semantic segmentation? Mind you, this is a 2D image of a cat. The computer knows nothing about our three-dimensional world; it's just looking at a set of pixels.
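
[Editor's note: as a hedged illustration of supervised learning in the sense just described, here is a toy classifier trained by gradient descent on labeled examples. The feature vectors stand in for images, and the labels play the role of the ground-truth annotations; none of this is any production system.]

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in dataset: feature vectors with ground-truth labels (1 = "cat", 0 = "not cat").
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])     # hidden rule behind the labels
y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)

w = np.zeros(5)                           # the model knows nothing in the beginning
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probability of "cat"
    grad = X.T @ (p - y) / len(y)         # gradient of the cross-entropy loss
    w -= 0.5 * grad                       # learn by example, from the ground truth
print("training accuracy:", ((p > 0.5) == y).mean())
```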

So, semantic segmentation is drawing a nice, very crisp outline around the cat and saying, that's a cat. It's really difficult to provide that truth. And one of the fundamental open questions in computer vision is, is that even a good representation of the truth? Now, there's another contrasting set of ideas, though they're overlapping: what used to be called unsupervised learning,

and what's now commonly called self-supervised learning, which is trying to get less and less human supervision into the task. So self-supervised learning has been very successful in the domain of language models, natural language processing, and now more and more it's being successful in computer vision tasks. And the idea there is, let the machine, without any ground-truth annotation, just look at pictures on the internet, or look at text on the internet, and try to learn something generalizable about the ideas that are at the core of language or at the core of vision.

And based on that, at its best, we humans like to call that common sense. So with this, we have this giant base of knowledge on top of which we build more sophisticated knowledge. We have this kind of commonsense knowledge. And so the idea with self-supervised learning is to build this commonsense knowledge about what are the fundamental visual ideas that make up a cat and a dog and all those kinds of things, without ever having human supervision. The dream there is that you just let an AI system that's self-supervised run around the internet for a while, watch YouTube videos for millions and millions of hours, and without any supervision be primed and ready to actually learn with very few examples once the human is able to show up.
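
[Editor's note: a minimal sketch of the self-supervised idea: the supervision signal comes from the data itself, here by masking one feature of each unlabeled example and predicting it from the rest. The setup is illustrative and is not any specific published method.]

```python
import numpy as np

rng = np.random.default_rng(2)
# Unlabeled data: each row is an "observation"; no human ever annotates it.
X = rng.normal(size=(500, 6))
X[:, 5] = X[:, :5] @ np.array([0.3, -0.7, 0.2, 0.9, -0.1])  # hidden structure

# Self-supervised task: mask the last feature and predict it from the others.
inputs, targets = X[:, :5], X[:, 5]
w = np.zeros(5)
for step in range(300):
    pred = inputs @ w
    grad = inputs.T @ (pred - targets) / len(targets)  # squared-error gradient
    w -= 0.1 * grad
# The learned weights capture structure in the data without any labels.
print("recovered structure:", np.round(w, 2))
```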

We think of human children in this way: your parents only give one or two examples to teach a concept. The dream with self-supervised learning is that it will be the same with machines. That they would watch millions of hours of YouTube videos, and then come to a human and be able to understand when the human shows them, this is a cat. Like, remember, this is a cat. They will understand that a cat is not just the thing with pointy ears, or a cat is a thing that's orange, or is furry; they'll see something more fundamental that we humans might not actually be able to introspect and understand. Like, if I asked you what makes a cat versus a dog, you probably wouldn't be able to answer that, but if I brought you a cat and a dog, you'd be able to tell the difference.

What are the ideas that your brain uses to make that difference? That's the whole dream with self-supervised learning: it would be able to learn that on its own, that set of commonsense knowledge that's able to tell the difference. And then there's a lot of incredible uses of self-supervised learning, very weirdly called the self-play mechanism. That's the mechanism behind the reinforcement learning successes of the systems that won at Go, of AlphaZero that won at chess. - Oh, I see.

That play games? - [Lex] That play games. - Got it. - So the idea of self-play probably applies to other domains than just games. It's a system that just plays against itself.

And this is fascinating in all kinds of domains, but it knows nothing in the beginning. And the whole idea is it creates a bunch of mutations of itself and plays against those versions of itself. And the fascinating thing is when you play against systems that are a little bit better than you, you start to get better yourself. Like learning, that's how learning happens. That's true for martial arts.

It's true in a lot of cases. Where you want to be interacting with systems that are just a little better than you. And then through this process of interacting with systems just a little better than you, you start following this process where everybody starts getting better and better and better and better, until you are several orders of magnitude better than the world champion in chess, for example. And it's fascinating, because it's like a runaway system. One of the most terrifying and exciting things that David Silver, the creator of AlphaGo and AlphaZero, one of the leaders of the team, said to me is that they haven't found the ceiling for AlphaZero.

Meaning it could just arbitrarily keep improving. Now, in the realm of chess, that doesn't matter to us. It just ran away with the game of chess; it's just so much better than humans. But the question is, what if you can create that in a realm that does have a bigger, deeper effect on human beings and societies? That can be a terrifying process.

To me, it's an exciting process if you supervise it correctly, if you inject what's called value alignment: you make sure that the goals that the AI is optimizing are aligned with human beings and human societies. There's a lot of fascinating things to talk about within the specifics of neural networks and all the problems that people are working on. But I would say the really big, exciting one is self-supervised learning.

We're trying to get less and less human supervision of neural networks. And also just a comment, and I'll shut up. - No, please keep going. I'm learning. I have questions, but I'm learning. So please keep going.

- So, to me what's exciting is not the theory, it's always the application. One of the most exciting applications of artificial intelligence, specifically neural networks and machine learning is Tesla Autopilot. So these are systems that are working in the real world. This isn't an academic exercise. This is human lives at stake. This is safety-critical.

- These are automated vehicles. Autonomous vehicles. - Semi-autonomous. We want to be. - Okay.

- We've gone through wars on these topics. - Semi-autonomous vehicles. - Semi-autonomous. So, even though it's called FSD, Full Self-Driving, it is currently not fully autonomous, meaning human supervision is required. So, the human is tasked with overseeing the system. In fact, liability-wise, the human is always responsible. This is a human-factors psychology question, which is fascinating.

I'm fascinated by the whole space, which is a whole 'nother space of human-robot interaction, where AI systems and humans work together to accomplish tasks. That dance, to me, is one of the smaller communities, but I think it will be one of the most important open problems: how the humans and robots dance together. To me, semi-autonomous driving is one of those spaces. Elon, for example, doesn't see it that way; he sees semi-autonomous driving as a stepping stone towards fully autonomous driving. Like, humans and robots can't dance well together. Like, humans and humans dance, and robots and robots dance.

Like, we need to, this is an engineering problem, we need to design a perfect robot that solves this problem. To me, maybe this is not the case with driving, but the world is going to be full of problems where humans and robots always have to interact, because I think robots will always be flawed, just like humans are flawed. And that's what makes life beautiful, that they're flawed.

That's where learning happens, at the edge of your capabilities. So you always have to figure out, how can flawed robots and flawed humans interact together such that the sum is bigger than the parts, as opposed to focusing on just building the perfect robot? - Mm-hmm. - So that's one of the most exciting applications, I would say, of artificial intelligence to me: autonomous driving, the semi-autonomous driving. And that's a really good example of machine learning, because those systems are constantly learning. And there's a process there that maybe I can comment on. Andrej Karpathy, who's the head of Autopilot, calls it the data engine.

And this process applies for a lot of machine learning, which is you build a system that's pretty good at doing stuff, you send it out into the real world, it starts doing the stuff and then it runs into what are called edge cases, like failure cases, where it screws up. We do this as kids. That you have- - You do this as adults.

- We do this as adults. Exactly. But we learn really quickly. But the whole point, and this is the fascinating thing about driving, is you realize there's millions of edge cases. There's just like weird situations that you did not expect. And so the data engine process is you collect those edge cases, and then you go back to the drawing board and learn from them.

And so you have to create this data pipeline where all these cars, hundreds of thousands of cars, are driving around, and something weird happens. And so whenever this weirdness detector fires, and that's another important concept, that piece of data goes back to the mothership for the training, for the retraining of the system. And through this data engine process, it keeps improving and getting better and better and better and better. So basically you send a pretty clever AI system out into the world and let it find the edge cases, let it screw up just enough to figure out where the edge cases are, and then go back and learn from them, and then send out that new version and keep updating that version. - Is the updating done by humans? - The annotation is done by humans. So the weird examples come back, the edge cases, and you have to label what actually happened in there.
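
[Editor's note: a schematic, runnable toy of the data engine loop described here: deploy a model, flag low-confidence "weird" cases, have an oracle function (standing in for human annotators) label them, retrain on the accumulated edge cases, and repeat. This is an assumption-laden sketch, not Tesla's actual pipeline.]

```python
import numpy as np

rng = np.random.default_rng(3)
oracle = lambda x: float(x[0] - 0.5 * x[1] > 0)    # stands in for a human annotator

def predict_proba(w, X):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def retrain(w, X, y, steps=200, lr=0.5):
    for _ in range(steps):
        p = predict_proba(w, X)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w = np.zeros(2)                                     # the model knows nothing
X_seen, y_seen = np.empty((0, 2)), np.empty(0)
for round_ in range(5):                             # each round: the fleet "drives around"
    stream = rng.normal(size=(1000, 2))             # what the fleet encounters
    p = predict_proba(w, stream)
    edge = np.abs(p - 0.5) < 0.1                    # "weirdness detector": low confidence
    labels = np.array([oracle(x) for x in stream[edge]])  # annotation by humans
    X_seen = np.vstack([X_seen, stream[edge]])      # edge cases go back to the mothership
    y_seen = np.concatenate([y_seen, labels])
    w = retrain(w, X_seen, y_seen)                  # retrain, redeploy
    acc = ((predict_proba(w, stream) > 0.5) == [oracle(x) for x in stream]).mean()
    print(f"round {round_}: {edge.sum()} edge cases, accuracy {acc:.3f}")
```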

There's also some mechanisms for automatically labeling, but mostly, I think, you always have to rely on humans to improve, to understand what's happening in the weird cases. And then there's a lot of debate. And that's the other thing about what artificial intelligence is, which is a bunch of smart people having very different opinions about what intelligence is. So AI is basically a community of people who don't agree on anything. - And it seems to be the case.

First of all, this is a beautiful description of terms that I've heard many times among my colleagues at Stanford, at meetings, in the outside world. And there are so many fascinating things. I have so many questions, but I do want to ask one question about the culture of AI, because, at least to an outsider, it does seem to be a community where there's very little consensus about what the terms and the operational definitions even mean. And there seems to be a lot of splitting happening now, of not just supervised and unsupervised learning, but these sorts of intermediate conditions where machines are autonomous, but then go back for more instruction, like kids who go home from college during the summer; mom still feeds them, then eventually they leave the nest, that kind of thing. Is there something in particular about engineers, or about people in this realm of engineering, that you think lends itself to disagreement? - Yeah, I think, so, first of all, the more specific you get, the less disagreement there is.

So there's a lot of disagreement about what is artificial intelligence, but there's less disagreement about what is machine learning, and even less when you talk about active learning or machine teaching or self-supervised learning. And then when you get into, like, NLP language models or transformers, when you get into specific neural network architectures, there's less and less and less disagreement about those terms. So you might be hearing the disagreement from the high-level terms, and that has to do with the fact that engineering, especially when you're talking about intelligent systems, is a little bit of an art and a science.

So the art part is the thing that creates disagreements, because then you start having disagreements about how easy or difficult the particular problem is. For example, a lot of people disagree with Elon about how difficult the problem of autonomous driving is. But nobody knows. So there's a lot of disagreement about what the limits of these techniques are. And through that, the terminology also contains within it the disagreements. But overall, I think it's also a young science; that also has to do with it.

So it's not just engineering; it's that artificial intelligence truly being a large-scale discipline, where it's thousands, tens of thousands, hundreds of thousands of people working on it, with huge amounts of money being made, is a very recent thing. So we're trying to figure out those terms. And, of course, there's egos and personalities and a lot of fame to be made. Like the term deep learning, for example. Neural networks have been around for many, many decades, since the '60s, you can argue since the '40s. So there was a rebranding of neural networks into the term deep learning that was part of the re-invigoration of the field, but it's really the same exact thing.

- I didn't know that. I mean, I grew up in the age of neuroscience when neural networks were discussed; computational neuroscience and theoretical neuroscience had their own journals. It wasn't actually taken terribly seriously by experimentalists until a few years ago, I would say about five to seven years ago. It was excellent theoretical neuroscientists like Larry Abbott and other colleagues, certainly at Stanford as well, that got people to start paying attention to computational methods.

But these terms, neural networks, computational methods, I actually didn't know that neural networks and deep learning, that those have now become kind of synonymous. - No, they're always the same thing. - Interesting.

It was, so. - I'm a neuroscientist and I didn't know that. - So, well, because neural networks probably means something else in neuroscience. Not something else, but a little different flavor depending on the field. And that's fascinating too, because neuroscience and AI people have started working together and dancing a lot more in the recent, I would say, probably decade. - Oh, machines are going into the brain. I have a couple of questions, but one thing that I'm sort of fixated on, that I find incredibly interesting, is this example you gave of playing a game with a mutated version of yourself as a competitor.

- Yeah. - I find that incredibly interesting as a kind of a parallel or a mirror for what happens when we try and learn as humans, which is we generate repetitions of whatever it is we're trying to learn, and we make errors. Occasionally we succeed. Take a simple example, for instance, of trying to throw bullseyes on a dartboard. - Yeah.

- I'm going to have errors, errors, errors. I'll probably miss the dartboard. And maybe occasionally, hit a bullseye.

And I don't know exactly what I just did, right? But then let's say I was playing darts against a version of myself where I was wearing a visual prism, like I had a visual defect; you learn certain things in that mode as well. You're saying that a machine can sort of mutate itself. Does the mutation always cause a deficiency that it needs to overcome? Because mutations in biology sometimes give us superpowers, right? Occasionally, you'll get somebody who has better than 20/20 vision, and they can see better than 99.9% of people out there. So, when you talk about a machine playing a game against a mutated version of itself, is the mutation always, say, what we call a negative mutation, or an adaptive or a maladaptive mutation? - No, you don't know until you try: you mutate first, and then they compete against each other. - So, you're evolving. The machine gets to evolve itself in real time. - Yeah.

And I think of it, which would be exciting if you could actually do this with humans. It's not just, so, usually you freeze a version of the system. So, really, you take the Andrew of yesterday and you make 10 clones of him. And then maybe you mutate, maybe not.

And then you do a bunch of competitions against the Andrew of today; like, you fight to the death, and see who wins. So, I love that idea of creating a bunch of clones of myself, from each day for the past year, and just seeing who's going to be better at podcasting or science, or picking up chicks at a bar, or, I don't know, competing in Jujitsu. That's one way to do it. I mean, a lot of Lexes would have to die for that process, but that's essentially what happens: in reinforcement learning, through the self-play mechanisms, there's a graveyard of systems that didn't do that well. The good ones survive. - Do you think that, I mean, Darwin's Theory of Evolution might have worked in some sense in this way, but at the population level.
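
[Editor's note: a toy sketch of the freeze-clone-mutate-compete loop just described. For simplicity, the "game" is judged by a fixed score against a target; in real self-play the score comes from actual matches between versions, as in AlphaZero. All numbers here are arbitrary.]

```python
import numpy as np

rng = np.random.default_rng(4)
TARGET = 7.3                                  # stands in for "winning the game"

def plays_better(a, b):
    """Compete two versions: the one closer to the target wins this toy game."""
    return abs(a - TARGET) < abs(b - TARGET)

champion = 0.0                                # knows nothing in the beginning
for generation in range(50):
    frozen = champion                         # freeze yesterday's version
    clones = frozen + rng.normal(scale=0.5, size=10)  # 10 mutated clones
    for clone in clones:                      # the graveyard of systems:
        if plays_better(clone, champion):     # only the winners survive
            champion = clone
print("champion after self-play:", round(champion, 2))
```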

I mean, you get a bunch of birds with different-shaped beaks, and some birds have the shaped beak that allows them to get the seeds. I mean, it's a trivially simple example of Darwinian evolution, but I think it's correct, even though it's not exhaustive. Is what you're referring to essentially that? Normally this is done between members of a species: lots of different members of the species have different traits, and some get selected for. But you could actually create multiple versions of yourself with different traits. - So, I should probably have said this, but perhaps it's implied: with machine learning, with reinforcement learning, through these processes, one of the big requirements is to have an objective function, a loss function, a utility function. Those are all different terms for the same thing: there's an equation that says what's good, and then you're trying to optimize that equation.

So, there's a clear goal for these systems. - Because it's a game, like with chess, there's a goal. - But for anything. Anything you want machine learning to solve, there needs to be an objective function.

In machine learning, it's usually called a loss function that you're optimizing. The interesting thing about evolution, it's complicated of course, but the goal also seems to be evolving. Like, adaptation to the environment is, I guess, the goal, but it's unclear that you can always convert that into an equation.
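
[Editor's note: the "equation that says what's good" can be made concrete. Below is a minimal example of one common loss function, mean squared error; training a model just means driving this number down. The values are invented for illustration.]

```python
import numpy as np

def mse_loss(predictions, targets):
    """Objective/loss function: a single equation that scores how bad we are.
    Lower is better; learning is just minimizing this number."""
    return float(np.mean((predictions - targets) ** 2))

targets = np.array([1.0, 2.0, 3.0])
print(mse_loss(np.array([1.1, 1.9, 3.2]), targets))   # small loss: "good"
print(mse_loss(np.array([4.0, 0.0, -1.0]), targets))  # large loss: "bad"
```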

It's like survival of the fittest. It's unclear what the fittest is. In machine learning, the starting point, and this is what human ingenuity provides, is that fitness function of what's good and what's bad, which lets you know which of the systems is going to win. So, you need to have an equation like that. One of the fascinating things about humans is we figure out objective functions for ourselves. Like, what's the meaning of life? Why the hell are we here? And a machine currently has to have a hard-coded statement about why.

- It has to have a meaning of- - Yeah. - Artificial intelligence-based life. - Right. It can't. So, like there's a lot of interesting explorations about that function being more about curiosity, about learning new things and all that kind of stuff, but it's still hard coded.

If you want a machine to be able to be good at stuff, it has to be given very clear statements of what being good at stuff means. That's one of the challenges of artificial intelligence: in order to solve a problem, you have to formalize it, and you have to provide both the full sensory information, you have to be very clear about what is the data that's being collected, and you have to also be clear about the objective function. What is the goal that you're trying to reach? And that's a very difficult thing for artificial intelligence.

- I love that you mentioned curiosity. I am sure this definition falls short in many ways, but I define curiosity as a strong interest in knowing something, but without an attachment to the outcome. It could be a random search, but there's not really an emotional attachment; it's really just a desire to discover and unveil what's there without hoping it's a gold coin under a rock. You're just looking under rocks. Is that more or less how it works within machine learning? It sounds like there are elements of reward prediction and rewards. The machine has to know when it's done the right thing. So, can you make machines that are curious, or are the sorts of machines that you are describing curious by design? - Yeah, curiosity is a kind of a symptom, not the goal.

So, what happens is, one of the big trade-offs in reinforcement learning is exploration versus exploitation. So, when you know very little, it pays off to explore a lot, even suboptimally, even trajectories that seem like they're not going to lead anywhere; that's called exploration. The smarter and smarter and smarter you get, the more emphasis you put on exploitation, meaning you take the best solution, you take the best path. Now, through that process, the exploration can look like curiosity to us humans, but it's really just trying to get out of the local optimum, the thing it's already discovered. From an AI perspective, it's always looking to optimize the objective function. We can talk about this a lot more, but in terms of the tools of machine learning today, it derives no pleasure from just the curiosity of, I don't know, discovery.
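
[Editor's note: a standard toy illustration of the exploration-exploitation trade-off: an epsilon-greedy bandit that explores heavily while it knows little and shifts toward exploitation as its estimates improve. The decay schedule is one common choice among many, not the method used in any specific system discussed here.]

```python
import numpy as np

rng = np.random.default_rng(5)
true_payoffs = [0.3, 0.5, 0.8]            # unknown to the agent
estimates, counts = np.zeros(3), np.zeros(3)

for t in range(1, 2001):
    epsilon = 1.0 / np.sqrt(t)            # explore a lot early, exploit more later
    if rng.random() < epsilon:
        arm = int(rng.integers(3))        # exploration: try a seemingly suboptimal path
    else:
        arm = int(np.argmax(estimates))   # exploitation: take the best known path
    reward = float(rng.random() < true_payoffs[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average

print("estimated payoffs:", np.round(estimates, 2))  # approaches the true payoffs
```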

- So, there's no dopamine for machine learning. - There's no dopamine. - There's no reward system, chemical, or I guess electronic reward system. - That said, if you look at the machine learning literature and reinforcement learning literature, they will use, like DeepMind will use, terms like dopamine. We're constantly trying to use the human brain to inspire totally new solutions to these problems. So, they'll think, how does dopamine function in the human brain, and how can that lead to more interesting ways to discover optimal solutions? But ultimately, currently, there has to be a formal objective function.

Now, you could argue that humans also have a set of objective functions we try and optimize; we're just not able to introspect them. - Yeah, we don't actually know what we're looking for and seeking and doing. - Well, like Lisa Feldman Barrett, whom we've spoken with, at least on Instagram, I hope you- - I met her through you, yeah.

- Yeah, I hope you actually have her on this podcast. - Yes, she's terrific. - So, she has a view that it has to do with homeostasis.

Basically, there's a very dumb objective function that the brain is trying to optimize, like to keep body temperature the same. Like, there's a very dumb kind of optimization function happening. And then what we humans do with our fancy consciousness and cognitive abilities is we tell stories to ourselves so we can have nice podcasts, but really it's the brain trying to maintain just a healthy state, I guess. That's fascinating. I also see the human brain, and I hope artificial intelligence systems, as not just systems that solve problems or optimize a goal, but as storytellers. I think there's a power to telling stories.

We tell stories to each other, that's what communication is. Like when you're alone, that's when you solve problems, that's when it makes sense to talk about solving problems. But when you're a community, the capability to communicate, tell stories, share ideas in such a way that those ideas are stable over a long period of time, that's like, that's being a charismatic storyteller.

And I think humans are very good at this. Arguably, I would argue that's why we are who we are: we're great storytellers. And then AI, I hope, will also become that.

So, it's not just about being able to solve problems with a clear objective function; it's, afterwards, being able to tell, like, make up a way better story about why you did something, or why you failed. - So, you think that robots and/or machines of some sort are going to start telling human stories? - Well, definitely. So, the technical field for that is called Explainable AI, Explainable Artificial Intelligence: trying to figure out how you get the AI system to explain to us humans why the hell it failed, or why it succeeded, or, there's a lot of different sorts of versions of this, to visualize how it understands the world.

That's a really difficult problem, especially with neural networks that are famously opaque, where we don't understand in many cases why a particular neural network does what it does so well. And to try to figure out where it's going to fail, that requires the AI to explain itself. There's a huge amount of money in this, especially from government funding and so on. Because if you want to deploy AI systems in the real world, we humans, at least, want to ask it a question like, why the hell did you do that? Or, in a dark way, why did you just kill that person, right? Like, if a car ran over a person, we want to understand why that happened.
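
[Editor's note: one simple family of explainability techniques attributes a prediction to its input features. The sketch below uses a linear model, where the attribution is exact; for deep networks, gradient-based attribution methods only approximate this kind of decomposition. The weights and inputs are invented for illustration.]

```python
import numpy as np

# A trained (here, hand-set) linear "model": one weight per input feature.
w = np.array([2.0, -0.5, 0.0, 1.5])

def predict(x):
    return float(w @ x)

def explain(x):
    """Attribute the prediction to input features: contribution = weight * feature.
    For a linear model this decomposition is exact; deep networks need
    approximations such as input gradients or similar attribution methods."""
    return w * x

x = np.array([1.0, 2.0, 3.0, 0.1])
print("prediction:", predict(x))
print("per-feature contributions:", explain(x))  # answers "why did you do that?"
```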

And now again, we're sometimes very unfair to AI systems, because we humans can often not explain why very well. But that's the field of Explainable AI that people are very interested in, because the more and more we rely on AI systems, like the Twitter recommender system, that AI algorithm that's, I would say, impacting elections, perhaps starting wars, or at least military conflict, we want to ask that algorithm: first of all, do you know what the hell you're doing? Do you understand the society-level effects you're having? And can you explain the possible other trajectories? We would have that kind of conversation with a human; we want to be able to do that with an AI. And on my own personal level, I think it would be nice to talk to AI systems about stupid stuff, like robots when they fail to- - Why'd you fall down the stairs? - Yeah. But not an engineering question, almost an endearing question. Like, if I fell and you and I were hanging out, I don't think you'd need an explanation of exactly what the dynamics were, like what was the underactuated-system problem here, what was the texture of the floor, and so on. Or like, what was the- - No, I want to know what you're thinking. - That, or you might joke, like, you're drunk again, go home, or something. There could be humor in it; that's an opportunity.

Storytelling isn't just explanation of what happened; it's something that makes people laugh, makes people fall in love, makes people dream and understand things in a way that poetry makes people understand things, as opposed to a rigorous log of where every sensor was, where every actuator was. - I mean, I find this incredible, because one of the hallmarks of severe autism spectrum disorders is a report of experience from the autistic person that is very much a catalog of action steps. It's like, how do you feel today? And they'll say, well, I got up and I did this, and then I did this, and I did this. And it's not at all the way that a person who doesn't have autism spectrum disorder would respond. And the way you describe these machines has so much humanism, so much of a human and biological element, but I realize that we were talking about machines.

I want to make sure that I understand if there's a distinction between a machine that learns, a machine with artificial intelligence, and a robot. Like, at what point does a machine become a robot? So, if I have a ballpoint pen, I'm assuming I wouldn't call that a robot, but if my ballpoint pen can come to me when I move to the opposite side of the table, if it moves by whatever mechanism, at that point, does it become a robot? - Okay, there's a million ways to explore this question. It's a fascinating one. So, first of all, there's a question of what is life. Like, how do you know something is a living form and not? And then there's the question of when a, maybe a cold computational system, becomes, and we're already loading these words with a lot of meaning, a robot, a machine. So, one, I think movement is important, but that's kind of a boring idea, that a robot is just a machine that's able to act in the world. So, one, artificial intelligence could be both just the thinking thing, which I think is what machine learning is, and also the acting thing, which is what we usually think about with robots.

So, robots are the things that have a perception system that's able to take in the world, however you define the world, that are able to think and learn and do whatever the hell they do inside, and then act on the world. So, that's the difference between maybe an AI system, a learning machine, and a robot: a robot is something that's able to perceive the world and act in the world. - So, it could be through language or sound, or it could be through movement, or both. - Yeah. And I think it could also be in the digital space, as long as there's an aspect of an entity that's inside the machine and a world that's outside the machine. And there's a sense in which the machine is sensing that world and acting in it.

- So, for instance, there could be a version of a robot, according to the definition that I think you're providing, where I go to sleep at night and this robot goes and forages for information that it thinks I want to see loaded onto my desktop in the morning. There was no movement of that machine, there was no language, but it essentially has movement in cyberspace. - Yeah, there's a distinction that I think is important, in that there's an element of it being an entity, whether it's in the digital or the physical space. So, when you have something like Alexa in your home, most of the speech recognition, most of what Alexa is doing, is constantly being sent back to the mothership.

When Alexa is there on its own, that's, to me, a robot: when it's there interacting with the world. When it's simply a finger of the main mothership, then Alexa is not a robot; then it's just an interaction device, and maybe the main Amazon Alexa AI, the big, big system, is the robot. So, that's important, because there's some element, to us humans, I think, where we want there to be an entity, whether in the digital or the physical space. That's where ideas of consciousness come in, and all those kinds of things onto which we project our understanding of what it means to be a being. And so, to take that further, when does a machine become a robot? I think there's a special moment. There's a special moment in a person's life and in a robot's life where it surprises you. I think surprise is a really powerful thing, where you know how the thing works and yet it surprises you; that's a magical moment for us humans.

So, whether it's a chess-playing program that does something that you haven't seen before, that makes people smile like, huh. Those moments happened with AlphaZero for the first time in chess playing, where grandmasters were really surprised by a move. They didn't understand the move, and then they studied and studied and then they understood it. But that moment of surprise, that's for grandmasters in chess.

I find that moment of surprise really powerful, really magical, in just everyday life. - Because it supersedes the human brain in that moment? - It's not that it supersedes, like outperforms, but that it surprises you in a positive sense. Like, I didn't think it could do that; I didn't think that you had that in you. And I think that moment is a big transition for a robot, from a moment of being a servant that accomplishes a particular task with some level of accuracy, with some rate of failure, to an entity, a being that's struggling just like you are in this world. And that's a really important moment, and I think you're not going to find many people in the AI community who talk like I just did.

I'm not speaking like some philosopher or some hippie; I'm speaking from a purely engineering perspective. I think it's really important for robots to become entities, and to explore that as a real engineering problem. Whereas everybody in the robotics community treats robots like, they don't even call them he or she, they try to avoid giving them names; they really want to see them as systems, as servants. They see a robot as a servant that's trying to accomplish a task. To me, and don't think I'm just romanticizing the notion, I think it's a being; it's currently perhaps a dumb being, but in the long arc of history, humans are pretty dumb beings too, so- - I would agree with that statement. [Andrew laughing] - So, I tend to really want to explore treating robots as entities, yeah. So, anthropomorphization, which is the act of looking at an inanimate object and projecting onto it life-like features, I think robotics generally sees that as a negative; I see it as a superpower.

Like that, we need to use that. - Well, I'm struck by how that really grabs onto the relationship between human and machine, or human and robot. So, I guess the simple question is, and I think you've already told us the answer, but does interacting with a robot change you? In other words, do we develop relationships to robots? - Yeah, I definitely think so.

I think the moment you see a robot or AI systems as more than just servants but entities, they begin to change you, just like good friends do, just like relationships with other humans. I think, for that, you have to have certain aspects of that interaction, like the robot's ability to say no, to have its own sense of identity, to have its own set of goals; that's not constantly serving you, but instead trying to understand the world and do that dance of understanding through communication with you. So, I definitely think there's a, I mean, I have a lot of thoughts about this, as you may know, and that's at the core of my lifelong dream, actually, of what I want to do, which is, I believe that most people have a notion of loneliness in them that we haven't discovered, that we haven't explored, I should say.

And I see AI systems as helping us explore that so that we can become better humans, better people towards each other. So, I think that connection between human and AI, human and robot, is not only possible, but will help us understand ourselves in ways that are like several orders of magnitude deeper than we ever could have imagined. I tend to believe that [sighing] well, I have very wild levels of belief in terms of how impactful that will be, right? - So, when I think about human relationships, I don't always break them down into variables, but we could explore a few of those variables and see how they map to human-robot relationships. One is just time, right? If you spend zero time with another person at all in cyberspace or on the phone or in person, you essentially have no relationship to them.

If you spend a lot of time, you have a relationship; this is obvious. But I guess one variable would be time: how much time you spend with the other entity, robot or human. The other would be wins and successes. You enjoy successes together. I'll give an absolutely trivial example of this in a moment, but the other would be failures.

When you struggle with somebody, whether or not you struggle between one another, you disagree. Like, I was really struck by the fact that you said that a robot could say no. I've never thought about a robot saying no to me, but there it is. - I look forward to you being one of the first people I send these robots to. - So do I.

So, there's struggle. When you struggle with somebody, you grow closer. Sometimes the struggles are imposed between those two people, so-called trauma bonding, as they call it in the psychology literature and pop-psychology literature. But in any case, I can imagine. So, time, successes together, struggle together, and then just peaceful time: hanging out at home, watching movies, waking up near one another. Here, we're breaking down the elements of relationships of any kind.

So, do you think that these elements apply to robot-human relationships? And if so, then I could see how, if the robot has its own entity and has some autonomy in terms of how it reacts to you, it's not just there to serve you, it's not just a servant; it actually has opinions, and can tell you when maybe your thinking is flawed, or your actions are flawed. - It can also leave. - It could also leave.

So, I've never conceptualized robot-human interactions this way. So, tell me more about how this might look. Are we thinking about a human-appearing robot? I know you and I have both had intense relationships to our, we have separate dogs obviously, but to animals, it sounds a lot like human-animal interaction.

So, what is the ideal human-robot relationship? - So, there's a lot to be said here, but you actually pinpointed one of the big, big first steps, which is this idea of time. And it's a huge limitation in the machine-learning community currently. Now we're back to the actual details. Life-long learning is a problem space that focuses on how AI systems can learn over a long period of time. What most machine-learning systems are currently not able to do is all of the things you've listed under time: the successes, the failures, or just chilling together watching movies. All the beautiful, magical moments that I believe the days are filled with, AI systems are not able to keep track of those together with you.

- 'Cause they can't move with you and be with you. - No, no, like, literally, we don't have the techniques to do the learning, the actual learning of containing those moments. Current machine-learning systems are really focused on understanding the world in the following way: it's more like the perception system, looking around, understanding what's in the scene. That there's a bunch of people sitting down, that there are cameras and microphones, that there's a table; understanding that. But the fact that we shared this moment of talking today, and still remembering that for, like, the next time you're doing something, remembering that this moment happened, we don't know how to do that technique-wise.
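
[Editor's note: there is no established technique for this, as Lex says. Purely as a thought experiment, here is a hypothetical "shared moments" log: a toy data structure that timestamps episodes and retrieves them later. Storing moments is the easy part; the open problem he's pointing at is a system that actually learns from them. Every name here is invented.]

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Moment:
    timestamp: datetime
    description: str
    feeling: str                      # e.g. "success", "failure", "chilling"

@dataclass
class SharedMemory:
    """Toy episodic memory: storing moments is easy; the open research
    problem is making a learning system actually use them."""
    moments: list = field(default_factory=list)

    def remember(self, description, feeling):
        self.moments.append(Moment(datetime.now(), description, feeling))

    def recall(self, feeling):
        return [m.description for m in self.moments if m.feeling == feeling]

memory = SharedMemory()
memory.remember("talked on the podcast today", "success")
memory.remember("late-night ice cream at the fridge", "chilling")
print(memory.recall("chilling"))
```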

This is what I'm hoping to innovate on, as I think it's a very, very important component of what it means to create a deeper relationship: that sharing of moments together. - Could you post a photo of you and the robot, like a selfie with the robot, and the robot sees that image and recognizes that was time spent, there were smiles or there were tears- - Yeah. - And create some sort of metric of emotional depth in the relationship and update its behavior? - So.

- Could it... Could it text you in the middle of the night and say, why haven't you texted me back? - Well, yes, all of those things, but we can dig into that. But I think that time element, forget everything else, just sharing moments together, that changes everything. I believe that changes everything. Now, there are specific things that are more in terms of systems that I can explain to you.

It's more technical and probably a little bit offline, 'cause I have kind of wild ideas about how that can revolutionize social networks and operating systems. But the point is, that element alone, forget all the other things we're talking about, like emotions, saying no, all that, just remembering sharing moments together would change everything. We don't currently have systems that share moments together. Like, even just you and your fridge, just all those times you went late at night and ate things you shouldn't have eaten, that was a secret moment you had with your refrigerator.

You shared that moment, that darkness, or that beautiful moment where you were just, like, heartbroken for some reason, and you're eating that ice cream or whatever; that's a special moment. And that refrigerator was there for you, and the fact that it missed the opportunity to remember that is tragic. And once it does remember that, I think you're going to be very attached to the refrigerator. You're going to go through some hell with that refrigerator. Most of us in the developed world have weird relationships with food, right? So, you can go through some deep moments of trauma and triumph with food, and at the core of that is the refrigerator.

So, a smart refrigerator, I believe would change society. Not just the refrigerator, but these ideas in the systems all around us. So, I just want to comment on how powerful that idea of time is.

And then there's a bunch of elements of actual interaction, of allowing you as a human to feel like you're being heard. Truly heard, truly understood. Deep friendship between humans is like that, I think, but there's still an element of selfishness, there's still an element of not really being able to understand another human. And a lot of the time, when you're going through trauma together, through difficult times and through successes, you actually start to get that inkling of understanding of each other, but I think that could be done more aggressively, more efficiently.

Like, if you think of a great therapist. I've never actually been to a therapist, but I'm a believer; I used to want to be a psychiatrist. - Do Russians go to therapists? - No, they don't. They don't. And if they do, the therapists don't live to tell the story.

I do believe in talk therapy, which friendship is, to me; friendship is talk therapy. Or, like, you don't even necessarily need to talk [laughing]; it's just connecting in the space of ideas and the space of experiences. And I think there's a lot of ideas of how to make AI systems able to ask the right questions and truly hear another human. This is what we try to do with podcasting, right? I think there are ways to do that with AI. But above all else, just remembering the collection of moments that make up the day, the week, the months. I think you maybe have some of this as well.

Some of my closest friends still are the friends from high school. That's time; we've been through a bunch together, and we're very different people. But just the fact that we've been through that, and we remember those moments, and those moments somehow create a depth of connection like nothing else, like you and your refrigerator. - I love that, because my graduate advisor, she unfortunately passed away, but when she passed away, at her memorial somebody said all these amazing things she had done, et cetera.

And then her kids got up there; she had young children, whom I'd known since she was pregnant with them. And so it was really, even now I can feel my heart get heavy thinking about this, that they were going to grow up without their mother. And they were really amazing, very strong young girls, and now young women.

And what they said was incredible. They said what they really appreciated most about their mother, who was an amazing person, is all the unstructured time they spent together. - Mm-hmm. - So, it wasn't the trips to the zoo, it wasn't that she woke up at five in the morning and drove them to school. She did all those things too. She had a two-hour commute in each direction, it was incredible, ran a lab, et cetera, but it was the unstructured time.

So, on the passing of their mother, what they remembered, the biggest gift and what bonded them to her, was all the time where they just kind of hung out. And the way you describe the relationship to a refrigerator is so, I want to say human-like, but I'm almost reluctant to say that. Because what I'm realizing as we're talking is that what we think of as human-like might actually be the lower form of relationship.

There may be relationships that are far better than the sorts of relationships that we can conceive in our minds right now based on what these machine relationship interactions could teach us. Do I have that right? - Yeah, I think so. I think there's no reason to see machines as somehow incapable of teaching us something that's deeply human.

I don't think humans have a monopoly on that. I think we understand ourselves very poorly and we need to have the kind of prompting from a machine. And definitely part of that, is just remembering the moments.

I think the unstructured time together, I wonder if it's quite so unstructured. That's like calling this podcast unstructured time. - Maybe what they meant was it wasn't a big outing; there was no specific goal, but a goal was created through the lack of a goal. Like, we would just hang out, and then you start playing thumb war, and you end up playing thumb war for an hour.

So, the structure emerges from lack of structure. - No, but the thing is, the moments, there's something about those times that creates special moments, and I think those could be optimized for. I think we think of a big outing as, I don't know, going to Six Flags or something, or some big thing, the Grand Canyon, but we don't quite yet understand, as humans, what creates magical moments. I think it's possible to optimize a lot of those things. And perhaps podcasting is helping people discover that maybe the thing we want to optimize for isn't necessarily, like, some sexy, quick clips; maybe what we want is long-form authenticity.

- Depth. - Depth. So, we're trying to figure that out, certainly for a deep connection between humans and humans, and between humans and AI systems. I think long conversations, or long periods of communication over a series of moments, minute ones, perhaps seemingly insignificant, to the big ones, the big successes, the big failures, just stitching those together and talking throughout.

I think that's the formula for a really, really deep connection. That, from a very specific engineering perspective, is, I think, a fascinating open problem that hasn't really been worked on very much. And for me, if I have the guts, and, I mean, there's a lot of things to say, but one of them is guts, I'll build a startup around it. - So, let's talk about this startup, and let's talk about the dream. You mentioned this dream before in our previous conversations, always as little hints dropped here and there.

Just for anyone listening, there's never been an offline conversation about this dream, I'm not privy to anything, except what Lex says now. And I realized that there's no way to capture the full essence of a dream in any kind of verbal statement in a way that captures all of it. But what is this dream that you've referred to now several times when we've sat down together and talked on the phone? Maybe it's this company, maybe it's something distinct. If you feel comfortable, it'd be great if you could share a little bit about what that is. - Sure.

So, the way people express long-term vision, I've noticed, is quite different. Like, Elon is an example of somebody who can very crisply say exactly what the goal is. That also has to do with the fact that the problems he's solving have nothing to do with humans. So, my long-term vision is a little bit more difficult to express in words, I've noticed, as I've tried. It could be my brain's failure, but there's a way to sneak up to it. So, let me just say a few things.

Early on in life, and also in recent years, I've interacted with a few robots where I understood there's magic there. And that magic could be shared by millions if it's brought to light. When I first met Spot from Boston Dynamics, I realized there's magic there that nobody else is seeing. - That's the dog? - The dog, sorry.

Spot is the four-legged robot from Boston Dynamics. Some people might have seen it; it's this yellow dog. And sometimes in life, you just notice something that just grabs you. And I believe that this magic is something that could be in every single device in the world.

The way that I think maybe Steve Jobs thought about the personal computer: Woz didn't think about the personal computer this way, but Steve did. Which is, he thought that the personal computer should be as thin as a sheet of paper and everybody should have one. And this idea, I think it is heartbreaking that the world is being filled up with machines that are soulless. And I think every one of them can have that same magic.

One of the things that also inspired me in terms of a startup is that that magic can be engineered much more easily than I thought. That's my intuition with everything I've ever built and worked on. So, the dream is to add a bit of that magic to every single computing system in the world. The way that the Windows operating system for a long time was the primary operating system everybody interacted with, and they built apps on top of it.

I think this is something that should be a layer, almost an operating system, in every device that humans interact with in the world. Now, what that actually looks like, the actual dream, when I was a kid, didn't have this concrete form of a business; it had more of a dream of exploring your own loneliness by interacting with machines, robots. This deep connection between humans and robots was always a dream. And so, for me, I'd love to see a world where every home has a robot, and not a robot that washes the dishes, or a sex robot, or, I don't know, think of any kind of activity the robot can do, but more like a companion. - A family member. - A family member, the way a dog is.

- Mm-hmm. - But a dog that's able to speak your language too. So, not just connect the way a dog does, by looking at you and looking away and almost, like, smiling with its soul in that kind of way, but also to actually understand what the hell, like, why are you so excited about the successes? Like, understand the details, understand the traumas.

And that, I just think [sighing] that has always filled me with excitement: that I could, with artificial intelligence, bring joy to a lot of people. More recently, I've been more and more heartbroken to see the kind of division, derision, even hate that's boiling up on the internet through social networks. And I thought this kind of mechanism is exactly applicable in the context of social networks as well. So, it's an operating system that serves as your guide on the internet. One of the biggest problems with YouTube and social networks currently is they're optimizing for engagement.

I think if you create AI systems that know each individual person, you're able to optimize for long-term growth, for long-term happiness. - Of the individual, or- - Of the individual, of the individual. And there's a lot of other things to say, which is, in order for AI systems to learn everything about you, they need to collect data; just like you and I, when we talk offline, are collecting data about each other, secrets about each other, the same way AI has to do that. And that requires you to rethink ideas of ownership of data. I think each individual should own all of their data, and very easily be able to leave; just as AI systems can leave, humans can disappear.
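
[Editor's note: as a closing illustration of the objective-function difference pointed to here, the same recommender loop changes character entirely depending on whether its reward is immediate engagement or some proxy for long-term well-being. Both reward definitions below are deliberately crude, hypothetical stand-ins, not any real platform's metrics.]

```python
# Two candidate objective functions for a recommender. The point is that
# the optimization target, not the learning algorithm, is what differs.

def engagement_reward(session):
    # Optimizes for clicks and time-on-site right now.
    return session["clicks"] + 0.1 * session["minutes_watched"]

def long_term_reward(session):
    # Optimizes for whether the person keeps finding the system valuable,
    # using hypothetical signals like returning a month later and
    # self-reported satisfaction.
    return 5.0 * session["returned_next_month"] + session["reported_satisfaction"]

session = {"clicks": 12, "minutes_watched": 90,
           "returned_next_month": 0, "reported_satisfaction": 2}
print("engagement objective:", engagement_reward(session))
print("long-term objective:", long_term_reward(session))
```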
