A Techno-optimist Look at AI - Jeff Dean | Endgame #166 (Luminaries)


What are these AI systems doing today? Because I think a lot of times people are using various AI models without necessarily realizing it. Often these features feel pretty magical, but they're actually powered by these machine learning models. There's lots of data about some of these problems or domains, but we don't necessarily have the deep understanding to make sense of it. And so this is where sometimes neural networks, which can learn fairly complex patterns from data, can actually be useful, and can create new insights or new capabilities that didn't exist before.

[Music]

Hi, friends and fellows, welcome to this special series of conversations involving personalities coming from a number of campuses, including Stanford University. The purpose of the series is really to unleash thought-provoking ideas that I think would be of tremendous value to you. I want to thank you for your support so far, and welcome to the special series.

Hi, we're honored to have Jeff Dean, who is the chief scientist at Google. Jeff, thank you so much for coming onto our show, Endgame.

Thank you for having me.

You've been at Google for a very long time; you're employee number 29 or something. I want to ask you about how you grew up. You were born in Hawaii. Tell us how you grew up, how you got interested in computers, and how you've transformed Google into what it is today.

Sure, yeah, that's a loaded one. Start with your childhood?

Yeah.

I think I had a somewhat interesting childhood, in that my dad did tropical disease research for the first part of my childhood and then switched to being a public health epidemiologist, and my mom did medical anthropology. They somehow liked to move a lot. I'm an only child, so every so often we would move; I went to 11 schools in 12 years. I was born in Hawaii, then we moved to Boston, and then Uganda, and Boston, and Arkansas, and Hawaii, and Minnesota,
and Somalia, and Minnesota, and Atlanta for the last two years of high school. Then I went back to Minnesota for college, and then I ended up working for the World Health Organization in Geneva for a year and a half before going to grad school in Seattle, and then I came to the Bay Area, and I haven't moved since. My kids did not get the same tour of the world that I did. But it was interesting, because seeing a lot of different environments, a lot of different schools, and a lot of different ways of teaching was different from many people's upbringing.

Was there a time, or a few times, when you were pushed or tempted to pursue medicine, the area of expertise of both your parents?

I have had an interest in it, though not as a doctor. Obviously, conversations around the table growing up were always about public health. Actually, the way I got into computing, which is something you asked about, was through my dad. He was always interested in how you could use better information to help make better public health decisions, and he was kind of frustrated at the time (this was the mid-to-late '70s) when mostly people didn't get to use computers themselves. There was a big mainframe in the basement of some building (this was actually in Hawaii), and you would go tell someone what you wanted the computer to do, and they would do it, and you would not have the nice, more natural, interactive experience with computing that has come to pass with personal computing. But my dad saw an ad in the back of a magazine for a solder-it-together-yourself kit computer called an IMSAI 8080. I guess I was nine, so he bought one of these, and I was probably not much help, but I held the soldering iron
a little bit and helped him solder it together. It didn't do much at first, because it didn't really have a keyboard or a screen. You could toggle in individual bits and enter them into memory, and you could get little programs going that way. Then we got a keyboard, which was a huge improvement, because you could actually type actual characters. I got more interested as we got a couple more peripherals. You could start to type in computer games in BASIC: you type in the source code of a game, and I got a book with source code for a whole bunch of different games, and you could type one in and then play it. Then I started to get interested in modifying the game a little. I think this is a good way for young kids to get interested in programming: have something they want the computer to do. It's very motivating, because you can then figure it out on your own, like, how would I make the torpedoes go twice as fast in this game? So that was my introduction to computing.

Then we were actually quite fortunate to move to Minnesota, which at the time had an interactive time-sharing system for all the middle schools and high schools in the state. Every school got a computer account and some computing hardware to dial into this centralized system, and they actually had kind of these interactive chat rooms at the time. So it was sort of like what the internet has become, but 20 years before that, and kids in Minnesota were living the internet dream earlier than most.

Then you moved on to Seattle for your graduate studies, right after Minnesota. Why Seattle?

Well, my wife and I were applying to graduate schools together, and given the complexity of matching programs that were good in her field and my field, the University of
Washington was an awesome pick for us. We love Seattle.

A little gray sometimes, and rainy?

A little rainy. My Hawaiian upbringing kind of spoils the weather for most other places, but Seattle was great. I really liked my time in graduate school there, I learned a ton, and then I came to the Bay Area.

Now, what made you join Google in 1999?

I'd actually come down to the Bay Area to work for Digital Equipment Corporation, at a small research lab in downtown Palo Alto. That was actually the lab that created AltaVista, which was an early search engine; some of my colleagues there did the early key work on that system. AltaVista at the time had a much larger index than most search engines, and it was a very fast-responding system. One of my colleagues had put together, from the crawled pages in the AltaVista index, a system where you could actually, in programmatic form, see which pages point to which other pages, which is not so hard, because you can just look at the contents of the page, but also which pages point to each page, going backwards in some sense. So you could navigate this computational graph forwards and backwards with a set of API calls, and that proved to be quite interesting.

A colleague, Monika Henzinger, and I were working on how you could find related pages for any given web page. We thought we'd have to try fairly complicated things, but we said, oh, let's just try something really simple first: let's look at a page, look at what pages point to that page, and then what other pages those point to. Then you just do some counting of the frequent pages and divide to normalize the probabilities, and all of a sudden, from The Washington Post you would get a list like CNN and the Wall Street Journal and the New York Times, or from some page about hiking in
the Bay Area, you'd get a bunch of other pages about hiking in the Bay Area.

Wow.

And that caused me to think that there's actually a lot of information in the link structure of the web. Ultimately I decided I wanted to be at a smaller company. It was actually a little challenging sometimes to get research you'd done out into the world through a very big company; I found it was just a little indirect. And so I decided I would come to Google. I knew it was a small company. We were all wedged in this tiny little area above what's now a T-Mobile store in downtown Palo Alto when I started. But I knew Urs Hölzle, who's one of our earliest employees, and I knew he'd come here. He's my academic uncle, I guess.

Wow.

And so I had chatted with him many times at different compiler conferences, because we both had a background in compilers and program optimization.

When did you get the sense that Google was going to be as big as it is today? Did you think like that already in 1999?

I mean, we clearly at that time had a really successful and growing service, and the whole company could see that our traffic was growing, you know, six, seven, 10% a week at times. And if you do 1.1 to the 52nd, that's a huge amount of growth in a year, about 142x. A lot of the first couple of years were really about how we could avoid melting down every Tuesday at peak traffic time. So there was a whole bunch of work deploying new hardware, but that wasn't enough; we had to do software performance optimization, and we had to redesign the system, because often when you have software that works at scale X, it suddenly doesn't work at scale 10X or 50X. So you're constantly re-figuring out how to redesign this part of the system, because now it's a big problem, whereas it didn't used to be.

So when was it that you realized that this was going to be this big?
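The related-pages scheme Dean describes can be sketched in a few lines of Python. This is a toy illustration, not the actual AltaVista system: the two dictionaries stand in for the forward and backward link API he mentions, and the handful of URLs is made-up data.

```python
from collections import Counter

# Toy stand-ins for the link-graph API: which pages link TO a page
# (the "backwards" direction), and which pages each of those links to.
links_from = {
    "washingtonpost.com": ["news-digest.example", "media-list.example"],
}
pages_to = {
    "news-digest.example": ["washingtonpost.com", "cnn.com", "nytimes.com"],
    "media-list.example": ["washingtonpost.com", "wsj.com", "cnn.com"],
}

def related_pages(page, k=3):
    """Count pages that the referrers of `page` also link to, then
    normalize the counts into probability-like scores."""
    counts = Counter()
    for referrer in links_from.get(page, []):
        for other in pages_to.get(referrer, []):
            if other != page:
                counts[other] += 1
    total = sum(counts.values())
    return [(p, c / total) for p, c in counts.most_common(k)]

print(related_pages("washingtonpost.com"))
```

On this toy graph, cnn.com comes out on top with score 0.5, mirroring the Washington Post example in the conversation: the pages most frequently co-linked with a page tend to be related to it.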
One of our early employees put up a chart on a really long wall we had, on butcher paper. It was called the crayon chart, because every day he would plot how many queries we got, in different colors for the different partners we had. You would go along a bit, and then he would run out of room at the top of the paper.

Oh my.

So he'd have to scale it down by a factor of five and start over again, and it would grow again to the top of the paper, and then he would scale it down by another factor of five. He did many, many scalings. We went from very few queries per day when I started to a lot more queries, and then obviously expanded into a bunch of other product areas and things like that.

Is there anything you think could have been done differently since the start of Google?

Oh, so many things. I think it's always good to reflect on what we are doing well, but also what we could do better. One lesson that sticks out: whenever an organization is growing quickly, and we were also hiring people pretty quickly, I feel like every doubling in company size caused something that used to work well to no longer work. There are these leaps of change. Sometimes it's that you were all on one floor, and now you're on multiple floors in the same building, and then you go to multiple buildings, and then all of a sudden, instead of all of your engineering being done in Mountain View, we opened a New York office and an office in Zurich, and now we have to figure out how people in many different locations work together. This was before all the technology we have today for video chatting and that kind of
thing. It was more challenging to figure out who should do what, and who was doing what, but we worked through that. We had about five engineering locations for a while. One period that I think we could have done a little better: we decided we would greatly expand the number of engineering locations we had, so we went from about 5 to 30 in a couple of years. Really, that was about hiring great people in different places who didn't necessarily want to move to one of our locations; we'd say, oh yeah, that person and this team of five people, we should start an office around them. And it took a while to digest how we should work in 30 engineering locations instead of five, because each of these small locations would look at the main engineering centers in New York and Mountain View and say, we should do stuff just like they do, which means work on everything. I think that doesn't really work if you're working on everything but you're 15 people. So we tried to create a little bit of specialty and focus in some of these centers, where they get to work on really prominent, important things, but on just a handful of the different products we have.

[Music]

You started studying neural networks in 1990. At that time the field was hugely inhibited by the lack of computational power. Do you see computational power having grown as exponentially as you would have thought back then?

I got introduced to neural networks in my senior year as an undergraduate. It was a one-week module in some class I took, but I was very intrigued by them; it seemed like kind of the right abstraction. And so I decided to work with that faculty member on an undergraduate honors thesis. I felt like we just needed more computation, so maybe we could do parallel
training of neural networks, so we could get the 32-processor machine in the department training a single neural network, rather than just using one processor. I was convinced that if we could use 32 times as much computational power, it would be amazing.

It'd be so great.

Turns out I was wrong. We needed something more like a million times as much computational power, which is roughly what the progress in general computation produced over the following 20 years or so, through general improvements in computer architecture, improvements in semiconductor manufacturing processes, fabrication shrinks, and so on. All of that compounded to the point that our phones now are a hundred or a thousand times as powerful as the giant desktop machines we used to use. So I feel like once we started to have about a million times as much computational power, maybe around 2008, 2009, 2010, it started to be the case that neural networks could solve real problems, not just interesting small-scale toy problems. They could actually start being applied to real problems in computer vision and speech recognition, which were some of the earlier areas we started to look at, and then various kinds of language tasks. Could they understand words in a way that goes beyond the surface form of the word: in what context does this word make sense, are there other words that are similar to it, what is the past tense of this word? Can you really understand language more deeply than as just a sequence of characters?

How do you see the evolution of the TPUs going forward? Is it going to get much more exponential than what we might have seen in the last decade or two?

We're on TPU v4 now, and we've just announced our v5 through our Cloud TPU program. We've been building specialized hardware for
machine learning, and in particular neural networks, for quite a while now; I think our first TPU, v1, was discussed in 2015. Really, it's about one of the nice properties that neural networks have: they're all described by different sequences of linear-algebra-style operations, different kinds of matrix multiplies or vector operations. That's a very restricted set of things you need the computer to do. It's not like you need to do all kinds of different things, the way general-purpose computing can. General-purpose CPUs are great for running your word processor, but they're not exactly what you want for running machine learning computations, because they're too general, and that generality costs you performance. Instead, if you build hardware that is very specialized to exactly the kinds of computations neural networks embody, you will be able to get huge performance improvements: better performance per watt, better performance per dollar, better performance per chip overall.

The other property that neural networks have is that, unlike a lot of traditional scientific computing, where you actually need a fair amount of precision, they're very tolerant of reduced-precision arithmetic. So you can do computations in 8-bit integer format or 16-bit floating-point format, unlike the 32-bit or 64-bit floating-point formats typically used for, say, weather simulation code. And that means you can squeeze more multipliers into the same chip area and get higher performance.

You've talked quite frequently about some of the constraints of today's neural networks: modalities, multitasking versus single-tasking, and sparsity versus density. Talk about those.

Sure. Neural networks are loosely inspired by how real biological neurons work.
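The reduced-precision point can be made concrete with a small sketch: symmetric 8-bit quantization of a toy float32 weight vector. The sizes and random data here are arbitrary illustrative choices; the point is the 4x memory saving, with reconstruction error bounded by half a quantization step.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)  # "full precision" weights

# Symmetric linear quantization: map [-max|w|, +max|w|] onto int8 [-127, 127].
scale = float(np.abs(weights).max()) / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize to check how much information was lost.
restored = q.astype(np.float32) * scale

print(weights.nbytes, "->", q.nbytes)           # 4096 -> 1024 bytes (4x smaller)
print(float(np.abs(weights - restored).max()))  # worst-case error is below scale/2
```

A hardware multiplier for these int8 values is far smaller than a float32 one, which is the sense in which lower precision lets you pack more multipliers into the same chip area.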
The individual unit in an artificial neural network is something that takes in some inputs, and it has weights on those inputs: how important does it think this input is versus that one? Importantly, those weights are learned through a learning process. And then the artificial neuron, loosely inspired by what real neurons do, takes all that input and decides what output it should produce: should it fire, in some sense, or should it produce nothing, and how strongly should it fire? That's really what a neural network is: a whole bunch of these individual artificial neurons, typically arranged in layers. You have the lowest layers, which take in very raw forms of data, be it a small patch of pixels of an image, a little bit of audio data, or a few characters of textual input, and then they build up interesting features.

Let's discuss images, because I think that's a very easy way to think about the kinds of features that get built up through this learning process. The lowest-level features tend to learn very simple things, like: is there a line at this orientation in this part of the image, or that orientation? Is it mostly gray, or mostly red, or a different color? Different neurons get excited when they see different patterns: this one gets really excited because it's bright red, and that one because there's a line like this. As you move up the layers, what's happening is that these neurons are taking input from the lower-level ones and learning more interesting and intricate patterns, based on combinations of the features that cause the lower-layer neurons to get excited. So now it's like: oh, it's red and it's got a line through it like this, that's really exciting. Or it's got
an edge with red mostly on one side and not on the other. And as you move up further, the features become more and more complex. You might have something that looks like a wheel, or something that looks like a nose or an eyebrow, and even higher up you get fully featured things, like a neuron that fires when there's a front-on view of a car.

I think that kind of process happens because of how you typically train the neural network. There are a lot of different ways of training, but one of the simplest is what's called supervised learning, where you have, say, image data, and you have labels associated with those images: okay, that one's a car, that one's a cheetah, that one's a tree. The output of the model at the top level is trying to predict which of these many different categories the image is in. The way the training process works is that you make a pass upwards through the model (the forward pass, it's called), and you see what the model predicts. Maybe it says, okay, that looks like a tower, but it's really a tree. What you can do then is make tiny adjustments to all the weights in the model, so that when it sees this image, or a similar image, it is more likely to give the right answer: it's a tree, actually, not a tower. The training process is just repetition of that: observing real data and what the answer should be, and then producing adjustments to the weights of the model.

How do you make sure you can actually weed out the seeming bias, where the weaker influences get weeded out and the stronger ones get promoted? That sounds like an inherent bias of individual neurons.

What actually tends to happen is that different neurons will latch onto different kinds of patterns.
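The forward-pass-then-adjust loop described above can be sketched with a single artificial neuron (logistic regression) trained by gradient steps on made-up data. The data, learning rate, step count, and sigmoid activation are all illustrative assumptions, not any particular production recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "labeled data": two input features, label 1 when their sum is positive.
X = rng.standard_normal((200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    # Forward pass: weighted sum of inputs -> sigmoid "firing strength".
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Tiny adjustments to every weight, nudging outputs toward the labels.
    w -= lr * X.T @ (p - y) / len(X)
    b -= lr * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == (y == 1)).mean()
print(accuracy)
```

Each iteration is exactly the loop Dean describes: predict, compare to the label, and make small weight adjustments so the right answer becomes more likely; a deep network repeats the same idea across many layers via backpropagation.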
Some of those patterns are irrelevant for any particular image. If it's an image of an outdoor scene, all the neurons that detect vehicle parts are kind of mute: they don't produce large outputs. But all the ones that are about foliage and green and trunks of trees are very excited. Part of training a neural network is that you want this diversity of different kinds of patterns the model can learn, and you also need the model to have enough capacity, enough neurons, that it can absorb and learn from the data you're exposing it to. If you have only five neurons and you give it a million images, it's not going to generalize very well to new examples. That's really one of the things you're trying to do in machine learning: learn from representative data, but not just completely memorize exactly what that data was, because you want the model, when it's confronted with a new image or a new piece of text, to generalize to those examples.

How optimistic are you about being able to address those three concerns, the constraints around modality, density, and single-tasking? Or, how soon do you think they'll get optimally remedied by the exponential growth in TPU capabilities?

I think we're making a lot of progress on them. We're actually pretty far along at generalizing models that were previously mostly text-only, or software code, into models that can understand text, code, audio input, and image inputs. That, I think, is starting to be well understood through research that my colleagues and others in the community have been doing over the last three, four, five years. In terms of multitask capability, one of the things we're seeing with these models that are
trained on large corpora of general text, or images and text, is that this gives them the ability to generalize quite well to new things you ask them to do. You say, okay, draft me a letter to my veterinarian about my dog; the dog is not feeling well. The model has never seen exactly that request, but it is able to understand what you want and produce plausible-sounding text that actually fulfills that person's need. You're starting to see not just generalization from one data example to another in the same overall category. Ten years ago, the generalization you wanted was: take an image and predict which category it's in, having trained on a bunch of images and those categories. Now you're seeing the ability to generalize across tasks, in some sense: asking the model to do something it's never been asked to do, but that is close enough to things it knows how to do that it can generalize.

And then the third one is sparsity. Most machine learning models these days are dense, which means you have all these artificial neurons, and the entire model is activated for every example, every input. There's a form of model we've done a fair amount of work on, called sparse models, where you actually have different pieces of the model, and the model can turn different pieces on and off, and can learn which pieces are most relevant for which kinds of inputs. So you might have some inputs that are about Shakespeare, and maybe there's a part that's really good at Shakespeare stuff, but the part that knows about C++ code or Java programming is probably not active there. And there's another part that's really good at identifying garbage trucks in visual images; that's probably not active either. But you want this model to have a lot of
capacity, so it's got a lot of pieces it can call on, but it doesn't need to call on all of them for everything. That creates a much more efficient model, because now, instead of activating the whole model, you're maybe activating 5% of it, which makes it much more energy efficient, but you still have the capacity to remember a lot of stuff.

It's probably not going to be too far in the future, then, when you'll be able to address these constraints?

Oh yeah. On the multimodality front, we've already seen a bunch of work from Google Research and Google DeepMind on multimodal models of various kinds: models that can take in visual inputs and language and answer questions in text form, or that can generate images or audio from various kinds of other inputs, like taking a text prompt and generating an image. Those models have been improving steadily. You can now take text plus an image and say, okay, generate me a picture of a giant castle with this dog in front of it. It's cool that a model can generate a picture of a castle with some dog in front of it, but often what you really want is your dog in front of the castle.

Some time ago you gave a talk in front of quite a bunch of computer science students and experts, where you discussed five trends in machine learning: general purpose and efficiency, benefit to society, community, and people, benefit to engineering, science, and health, and going broader and deeper. Talk about those.

Sure. The first part was about these trends of improving the multimodal capabilities of these models, and sparsity and so on, and the underlying hardware and systems we use to train them getting more capable. Another part of the talk was about what these AI systems are doing today.
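The sparse idea can be sketched as toy mixture-of-experts routing. The expert count, dimensions, and random weights here are all invented for illustration: the point is that only the top-k experts run for a given input, so only a fraction of the model's weights are touched per example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # learned gating

def sparse_forward(x, k=2):
    """Activate only the k most relevant experts for this input."""
    scores = x @ router
    top = np.argsort(scores)[-k:]                 # indices of the top-k experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # Only k of the n_experts weight matrices are multiplied, so roughly
    # k / n_experts of the dense model's compute is spent per input.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

out = sparse_forward(rng.standard_normal(d))
print(out.shape)
```

With k=2 of 8 experts, each input activates a quarter of the parameters, which is the efficiency-versus-capacity trade Dean describes: lots of pieces to call on, few of them active at once.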
A lot of the time, people are using various AI models without necessarily realizing it. For example, on an Android phone there are a lot of capabilities powered by various kinds of models. It can screen your calls for you: you can say, I don't want to pick up my phone yet, I just want to understand what this person wants, and it can relay a transcript of what they said, like, "Hi, I've got a delivery for you at the front door." Or the phone can use various kinds of computational photography techniques to enhance your images, to remove that annoying, unsightly telephone pole in the background when you took your photo, or a variety of other things. I think often these features feel pretty magical, but they're actually powered by these machine learning models.

Another part was about how AI and machine learning are really accelerating a lot of aspects of scientific discovery, particularly in fields where there's a fair amount of data and you're trying to pick up on complicated patterns that are not well understood: genetics, or healthcare, or various kinds of weather prediction. A lot of these things have the property that there is lots of data about the problem or domain, but we don't necessarily have the deep understanding to make sense of that data. And this is where neural networks, which can learn fairly complex patterns from data, can sometimes be useful, and can create new insights or new capabilities that didn't exist before.

Maybe I can use weather prediction as a good example. Traditional numerical weather forecasting has a set of physics-based
equations about how the weather and the wind and the atmosphere interact, in order to make predictions of what the on-the-ground weather is going to be 12 hours from now, or three days from now. That's great, but those simplified equations probably leave out a lot of things we don't fully understand. When you apply neural networks to weather forecasting, you approach the problem very differently. You have a fair amount of historical data: the weather conditions four days ago were this, and three and a half days ago they were this, or even three years and one day ago they were this, and three years ago they were this. That gives you the ground truth your model should predict: given the weather a thousand days ago, can you predict the weather 999 days ago? That turns out to be a fairly successful approach for weather prediction. You have ample amounts of data to train on, and then you want to generalize to new weather situations you've never seen before, and also to the future.

Machine learning has done so much, so well, so fast with respect to reading text, understanding it to some extent, then audio, then all kinds of visuals. What about smell?

Ah, so we actually have done some research there.

Temperature?

You can do that already. Within Google Research we've done some work in this space, starting about four or five years ago. It turns out there are various kinds of instruments that can sense the olfactory characteristics of the air, and they can give you very raw data about what's hanging in the air, but it's hard to then put high-level labels on that. In the same way that you have the pixels of an image and can train a neural network to say, okay, when I see that kind of thing, that's a leopard, you can do the same thing with these olfactory signals.
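The historical-pairs framing for weather prediction above (predict day t from the preceding days, with the archive itself as ground truth) can be sketched on synthetic data. The seasonal toy series and the linear least-squares "model" here are stand-ins for real weather archives and a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily temperatures: a yearly seasonal cycle plus noise.
days = np.arange(1000)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, size=1000)

# Supervised pairs from history: features = the previous 4 days,
# target = the next day (e.g. given day 999, predict day 1000).
window = 4
X = np.stack([temps[i:i + window] for i in range(len(temps) - window)])
y = temps[window:]

# Fit a linear forecaster by least squares (a neural net would learn
# a nonlinear version of this same input -> output mapping).
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mae = np.abs(A @ coef - y).mean()
print(round(float(mae), 2))  # average forecast error, in degrees
```

The design point is that no physics enters at all: the model only ever sees (recent past, next day) pairs, which is exactly how the historical record supplies both the inputs and the correct answers.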
see that kind of thing that's a leopard you can do the same thing with these olfactory signals to say okay that's a lot like lemon with a hint of pine needles or something and this actually works okay the actual device that gathers the data is still a little big so it's not like a portable thing you can put in a cell phone yet but it turns out this is an important problem for a variety of reasons one is there are actually some industries that want to create particular scents and they want to be able to understand a scent but also do the reverse you know create a scent in the same way you've seen these image models where you can say please give me a dog in front of a castle you'd like to say please give me what I would need to mix together to make a scent of you know goulash and cinnamon or something and so that's one application for perfume industries or consumer packaged goods but another one is potentially in healthcare related things there's some evidence that dogs which have particularly sensitive noses can actually pick up on subtle signs of cancer in some cases and so that indicates that maybe there's signal in this olfactory raw data that could actually be used for health purposes anything else that we should anticipate in terms of what could be cool about what ML could be doing for humanity oh yeah I mean I'm pretty optimistic about a bunch of different application domains I mean I think one is in the area of education right so you know we're not quite there yet but it's close to being able to say can you please tutor me on this wow and you know you take in a chapter of a textbook yeah and you can imagine a system that absorbs that chapter or maybe multiple chapters from different books and then asks you questions assesses the correctness of what you answered identifies areas
where you could use more depth and asks you more questions about that kind of thing fewer questions about stuff that you seem to already know pretty well so imagine being able to do that for anything you want to learn either as a kid in school or you know yeah English is great there are all kinds of interesting language learning applications you know I think you might be able to create really interesting dialogues that help people learn a language and they're going to be more interesting than the kind of pre-crafted fairly rudimentary things like you could say I want to learn English and I want to talk about hiking in the forest or something today or whatever and it could probably help you achieve those two objectives you know a pleasant conversation about hiking and you're learning English at the same time you know I come from a region it's called Southeast Asia and it kind of gets a bit under-narrated you know people here in Silicon Valley tend to talk about other places around the world as opposed to Southeast Asia yeah and I think part of the structural problem with that is we just don't speak English not all of us I mean Singaporeans all of them speak English yeah a good chunk of the Filipinos speak it but the rest of Southeast Asia mhm so if we would have had this conversation five years ago I would have been a lot more pessimistic about a future where we could actually communicate with the international community now with the advent of ML or AI and all that stuff I'm a lot more optimistic about getting a hundred million people in Indonesia to speak English maybe 400 million people in Southeast Asia out of the total population of 700 million people to be able to speak English yeah it's a breakthrough yeah it's life-changing yeah enabling people to communicate with each other I think is a hugely impactful thing and whether that's through teaching people to learn a second or third
language or whether it's enabling people who don't speak the same language to communicate effectively you know some of our products like Google Translate can actually you know you put the phone on the table and you're speaking one language and I'm speaking another and it will actually produce transcribed versions of what we're saying and we also have versions that can translate it into actual audio in people's ears and I think that's a really important capability because the more we're able to communicate as all people yeah the better it is and it's also something where machine learning can really help because we've seen just dramatic improvements in the quality of translations through these larger neural network based models as well as speech recognition and speech production for not just five or ten languages but actually for a hundred languages Google Translate supports more than 100 languages today which I think is really important we actually have an ambitious goal to support a thousand languages in our products and this is sort of a well we have 700 of them in just one country India yeah we call them dialects yeah yeah so I mean a thousand languages is not you know I think there's something like 7,000 spoken languages in the world and covering the top thousand would be amazing even you know the top 100 covers a lot but there are still in the next 900 a lot of speakers who are sort of left out if we don't support those languages and so we definitely want to do that I want to talk to you about sustainability okay and I'll draw a picture in terms of how things are a little bit different in developing countries right I mean I've been saying the narrative of sustainability is elitist because you know it resonates with about 15% of the population of the world whereas the 85% they're a lot
more worried about putting food on the table right right yeah and you know they don't mind stopping using coal today as long as the alternatives are right affordable yeah technologically the alternatives are available but economically they're just not affordable for most people on the planet yeah what do you think Google or you as a scientist could be thinking about what can be done to bridge the gap between the narrative of sustainability and the narrative of development because I think it's important for the planet to be collective about this right in terms of attaining carbon neutrality by 2050 or 2060 yeah there just seems to be no realism when we hear the rhetoric of attaining carbon neutrality by 2050 at the rate that we're seeing a bunch of these people just can't afford the technology and I'm sure you've got a lot of smart people here in this building that can figure out how to make things a lot cheaper economically by way of technological innovation that's been very exponential yeah I mean I think this is clearly a planet-wide issue right and we all need to be working together on this yeah and you know it's definitely the case that the more economically developed countries have produced way more emissions don't have to get there yeah but you know I think there are a few sort of positive signs so one is the cost of renewable energy like solar panels has been on a dramatically improving curve a bit like computation was 15 years ago we're now seeing still high though still high um battery technology is improving a lot and so the combination of solar and battery plus wind is becoming much more affordable in fact in many parts of the world I think if you look to install new power capacity that actually becomes the economically rational choice so that's a good thing because I think one of the issues we've had in the world is just there are things that are not
factored into people's decisions like you know if I install another coal plant it's cheaper for me even if it causes indirect emissions that impact everyone um so we've been looking at what are things that we can do to improve sustainability and reduce emissions with technological solutions right so one of them is a project called Green Light where basically by using traffic patterns that we can observe through Google Maps we can actually identify ways in which cities around the world can make improvements to their traffic infrastructure to signals and so on to actually reduce idle time at intersections that's actually a major source of emissions just cars not going anywhere you know it's also a situation where people don't really like not going anywhere you're in your car to go somewhere and the emissions from the idling engines actually are pretty harmful um and so with Green Light we can actually make suggestions to different cities all around the world and have them adjust the stoplight timing you know most stoplights in the world have fairly simplistic methods it's sort of like is it rush hour or not is kind of the level of sophistication in many of them but now we can actually say okay Tuesdays between 10:37 and 11:30 you should set the signal timing to 42 seconds instead of 35 and you'll get way more throughput on your roads you know a 90% reduction in the number of people who need to wait through a second light cycle for example um and so we've actually got pilots going with 12 cities all around the world I think it's on four or five continents and Jakarta is one of them and so we're actually seeing quite positive results from that okay and we're sort of learning from that early experience partnering with those cities and trying to expand that program but if that were rolled out much more broadly it would have a huge potential
impact on reducing emissions and it also would help people by getting them where they want to go faster which is a nice side benefit um another area we talk about is contrails so you know the sort of long linear clouds you see behind airplanes sometimes yeah it turns out those are actually quite harmful from a warming perspective because they trap in heat at certain times of the day okay and actually the contrails produced by airliners are roughly one-third of the total contribution to warming of the aviation industry no kidding the entire aviation like kind of surprising but beyond the actual burning of fuel one-third of the overall warming footprint of the aviation industry is related to contrails um but contrails are actually avoidable so if you're a plane and you're flying and the conditions at this altitude would actually produce contrails at a time when that's a bad idea because the conditions seem like that contrail would be harmful you can actually change your altitude by going up or down a little bit and you can get to a situation where you won't create a contrail because it's really just the exact temperature and ice crystal formation it's really just ice crystals forming around soot from the exhaust of the plane that causes contrails um and so we've actually partnered with American Airlines to do a controlled study where we took I forget exactly how many about 100 flights and we took 50 of them and we gave them commands about where we thought contrails would be produced and whether they should go up or down on their flight path um and what we saw was a 50% reduction in contrails for the flights where we were controlling that versus the ones where we did not and how are you actually controlling oh so we control it by saying okay you know American Airlines flight operations would tell them to go up to 31,000 feet instead of 30,000
feet or something okay um and then actually it's kind of cool how we closed the loop and figured this out so now you have all these flights and we use real-time satellite imagery of when the flights occurred and the paths they took and then you can use computer vision to detect was there a contrail produced by this flight versus this one you know if you take a look at some of the publications by experts in energy the demand for fossil fuels from automotive is going to continue declining right but demand for fossil fuels from aviation is going up because yeah and so you just can't electrify airplanes that fly long haul yeah you know this approach seems like it might reduce about half of the warming related to contrails which is a third of the overall impact of the industry so that might be like a sixth of the aviation industry's impact what about food security I mean there's a lot that can be done technologically or scientifically right to improve upon a pre-existing convention yeah absolutely uh I mean I think there's a very broad set of ways that you can approach improving that situation so one is just helping farmers understand their crops you know I'm getting these weird patterns on the leaves of this particular crop is that a disease I should worry about or is it fine yeah and computer vision models can actually be helpful with this uh you know we've done some deployments with nonprofits working in I think it was Kenya or Tanzania helping understand cassava leaf images and helping tell cassava farmers you know is this a disease they should worry about and if so how should they treat it um another is just predicting where food insecurity is likely to occur because we know when you wait until a population is already in crisis it's actually kind of late at that point you'd rather give assistance directly to people that are not yet in
crisis so that they can you know plant more crops or do things that help them avert the most dire situations and using machine learning to help make predictions there is something that we're partnering on actually Google Research is partnering with the FAO to sort of help with that prediction let's talk about AI okay convince us that it's going to lead up to a good future yeah I mean I think obviously there's a lot of discussion around this let me just you know put some context to this there is a sense at least from a layman like me that it's not being pushed forward in an adequately multidisciplinary manner mhm it just seems highly technological right without roping in those people that are expert in culture economics environment spirituality philosophy and all that good stuff I just think that those are important right yeah to make sure that this goes to the end of the pipe in a benign or judicious or wise manner right yeah I definitely agree with that I mean I think one of the things you want to do whenever you're thinking about applying technology to some problem is you want to bring in people who have a lot of knowledge about that area right and work with them you know some of the most interesting projects I've worked on are ones where you know I might have some technical expertise but where I learn a lot from colleagues who have other domain expertise they're clinicians and they understand this kind of healthcare problem extremely well and they can say yeah if we could do this that would be really helpful this isn't that helpful this is a big problem this is not a big problem we should look out for that um so whenever we're approaching the use of AI and machine learning in different domains we want to bring in that comprehensive set of people who are thinking about that domain who are thinking about issues of representation
and fairness right if a technology is going to affect people in one community you want people who are from that community or who speak that language or understand the situation in that city or that country more deeply so that they can provide feedback and advice and work together to improve things this is one of the things that Google has done as we saw more use of AI in our products and thinking about where it was going to be applied in other areas we actually put together a set of principles by which we think about how do we make sure that we're responsible in thinking about how AI is applied to different problems you know we want to avoid creating unfair bias we want to avoid creating harm we want to focus on positive use cases and our AI principles which we published in 2018 have a set of seven principles by which we think about these we evaluate downstream uses of machine learning and AI in terms of those principles um and I think it's actually been a helpful thing for us to put those out externally yeah because you know we'd been thinking about AI for a while but other organizations were starting to think about using machine learning or AI in whatever environment whatever problem they're engaged in in their discipline or their domain and I think it was helpful for us to put out those principles so other people or other organizations could reflect on them you know say yeah that makes a lot of sense or you know in our industry this one doesn't necessarily make as much sense but these other ones resonate how do you make sure that you're going to be able to find the right balance between humanity and profitability yeah it's tricky I mean I think um we actually do a fair amount of work where we don't worry too much about is this going
to be profitable because it's just the right thing to do yeah um you know like our contrails or our Green Light work I think we don't really worry about that it's just the right thing to do for the planet yep um or a lot of the healthcare related work we've been doing in developing countries in low and middle income countries we've deployed some retinal image based machine learning systems to help with diagnosing diseases like diabetic retinopathy in partnership with eye hospitals in India or in other locations uh and I think that's a pretty good thing to be doing regardless of whether it's profitable or not um in other areas you know we think there are really important uses of AI and machine learning and they provide economic benefit and we create business models around that so like some of our cloud-based AI products you know people pay money to use them because they're useful and that's fine um so I think getting that balance right is an important thing but you know it doesn't have to be an either or you know there was an announcement today with respect to the leadership of one of the AI players oh yeah uh without mentioning names I want to put that in the context of what I want to hear from you in terms of whether this should be open source or closed source for the benefit of humanity for the time being what should be open source or closed source AI AI in general okay yeah I mean I think it's a complicated question I think um you know we've actually had a long history of open source releases of sort of basic building blocks of AI toolkits so things like TensorFlow or JAX that we've been working on for many years and releasing and actually a huge number of developers around the world have created all kinds of amazing things with TensorFlow I think you know there are 40 million downloads of that system maybe probably more now um that have enabled
things like the cassava detection example I mentioned um at the same time I think with the most capable models you know it's really good to make sure that they are deployed in a safe manner and when you completely release the model to the world um you know it can have all kinds of amazing uses but it can also be used in ways that maybe are less desirable and you don't really have control of that uh that doesn't mean we shouldn't open source models I think it's a balance right like we want amazing models that are open sourced that people can do all kinds of good things with uh but with the most capable models um you know I would be a little more circumspect you can offer API access to people and they can build things on top of that uh but that doesn't necessarily mean that we want them to be completely open and available China is a large population right you know if you hear some of the experts on AI from China they seem to think that they're going to be ahead of the United States in AI because they've got more data points is that the right logical way to think about how AI is going to move forward or are there other variables that need to be taken into account yeah I mean I think uh obviously AI is being worked on all across the world including in the US including in China many people in Europe and you know all over Asia and Southeast Asia I think it's a technology that is very relevant to many many things and so it's natural that there's lots of work on it everywhere uh I mean I think it's not so much about getting ahead it's about everyone working on improving the capabilities of what these systems can do uh making sure that they are deployed in ways that benefit people people's citizens or users of the company's products or improving the lives of patients or clinicians there are a lot of things that can be done with AI and you know I
think having a responsible approach where you're looking at the ways in which this technology is being used and deployed and contemplating how it should be used in the future is a really helpful thing you know if you take a look at some of the reports on the value proposition coming from AI right I mean some pontification might say it's going to be between 50 to 1 trillion dollars worth of economic value in the next 10 to 15 years right right you know I come from a developing country and you've grown up in some developing countries right in Africa and all that it just seems from an intuitive standpoint that most of the value is going to accrue to just the United States and China right I mean this is coming from me right my perspective right yeah sure I want to hear you know what your views are with respect to how people in Southeast Asia could actually feel confident about being able to capture a little bit of that value proposition which could amount to 50 to 1 trillion dollars globally speaking in the next 10 to 15 years yeah I mean I think what do we need to do to make sure that we're relevant that we're participatory in this narrative it's actually a really good question I mean I think the sort of increasing interest in AI and machine learning is something where you want to encourage people in your country and other countries to learn about these technologies to identify ways in which they can be applied by local companies by local universities and developers in your country I think that is a way to make sure that everyone participates in the potential benefits of AI both societally but also economically you were asked at the TED talk and I'm going to ask you again how do you make sure that you're not going to carry forward the pre-existing negativity into the future that we don't carry forward that yeah yeah um in terms of like well I mean you know inherently there's
something that's negatively biased right with the pre-existing technology right what do you do to make sure that that's not carried forward with respect to what's good for humanity with respect to what's good for the community and the person and all that stuff yeah I mean I think this is definitely one of the risks of AI and machine learning the systems learn from observations about the world right and if they observe the way the world works and we are unhappy with the way the world works in certain ways these systems will learn to replicate that behavior yeah and maybe even accelerate it because now you can have an automated decision about as one example who should get a home loan or not you know we know those are not always based entirely on fair decisions they're sometimes biased in various ways by human fallibility and that can be perpetuated if you train a machine learning model on biased home loan decisions you will now have an automated system that makes biased home loan decisions so there's a lot of work on how do you take data that itself is biased and make sure that you can correct a model so that it doesn't have that form of bias but does have the other kinds of properties you want you know a lot of these things are learned
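The biased home-loan example above can be made concrete. Below is a minimal, hypothetical sketch of one common pre-processing mitigation, reweighing (Kamiran and Calders, 2012): each training example gets a weight chosen so that a sensitive attribute becomes statistically independent of the label before any model is fit. The field names (`group`, `approved`) and the toy data are illustrative assumptions, not any real lending dataset or any system Google describes here.

```python
from collections import Counter

def reweigh(examples):
    """Return one weight per example so that, under the weights,
    'group' is statistically independent of the 'approved' label."""
    n = len(examples)
    group_counts = Counter(e["group"] for e in examples)
    label_counts = Counter(e["approved"] for e in examples)
    pair_counts = Counter((e["group"], e["approved"]) for e in examples)
    weights = []
    for e in examples:
        g, y = e["group"], e["approved"]
        # count this (group, label) pair would have if group and label
        # were independent, divided by the count actually observed
        expected = group_counts[g] * label_counts[y] / n
        weights.append(expected / pair_counts[(g, y)])
    return weights

# Hypothetical, deliberately biased history: group A mostly approved, B mostly denied.
data = ([{"group": "A", "approved": 1}] * 40 +
        [{"group": "A", "approved": 0}] * 10 +
        [{"group": "B", "approved": 1}] * 10 +
        [{"group": "B", "approved": 0}] * 40)
weights = reweigh(data)

def weighted_approval_rate(group):
    num = sum(w for e, w in zip(data, weights) if e["group"] == group and e["approved"])
    den = sum(w for e, w in zip(data, weights) if e["group"] == group)
    return num / den
```

In the raw toy data group A is approved 80% of the time and group B only 20%; after reweighing both groups have the same weighted approval rate, so a model trained with these weights no longer sees approval correlated with group membership. This addresses only one narrow form of bias, in keeping with the point above that debiasing is an active research area rather than a solved problem.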

2023-12-15 12:01