Decoding AI | Session 3: AI and People || Harvard Radcliffe Institute


Welcome back. We will now shift into Session 3 of the symposium, titled "AI and People," which will explore a range of topics in how artificial intelligence and robots interact with people in a range of environments, and, conversely, the ways in which people interact with, utilize, and learn to optimize, both functionally and economically, the deployment of artificial intelligence. We will again hear from three speakers in this session, followed by a moderated conversation and audience Q&A. Our speakers in this session are Professor Daniela Rus, Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology; Dr. Rana el Kaliouby, deputy CEO of Smart Eye, co-founder and former CEO of Affectiva, and an executive fellow at the Harvard Business School; and Professor Ajay Agrawal, Geoffrey Taber Chair in Entrepreneurship and Innovation and professor of strategic management at the Rotman School of Management, University of Toronto. Our moderator for this session is Professor Jonathan Zittrain, George Bemis Professor of International Law and vice dean for library and information resources at the Harvard Law School; director and faculty chair of the Berkman Klein Center for Internet & Society at Harvard University; professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences; and professor at the Harvard Kennedy School. It is now my pleasure to pass the virtual floor to Daniela Rus.

Hello, everyone. I am so delighted to be here with you today to discuss one of the central topics that will decide what our collective future looks like. In my role as the director of CSAIL, I'm often asked to talk about the impact of AI: what will rapid advancements in this technology mean for our lives, our jobs, our futures? Well, AI is central to much of our work and our research, and I'm very optimistic about the future. I believe AI will improve our lives in many ways, some of which we've only begun to imagine. But my optimism rests on the belief that we, as educators, business leaders, and policy makers, have the ability to take action to control the challenges that this technology can create, and I'd like to highlight some of these opportunities and challenges.

Let me begin with the good news, because there is so much of it. Remember when Spock shows up as a hologram in Star Trek, in around 2300? Well, the good news is that you don't need to wait until 2300, because we have AI tools that support rapid holographic generation; next time we have a virtual meeting, we will see each other in 3D. And remember when Mickey Mouse summons the broomstick in The Sorcerer's Apprentice? You don't need magic to make that happen; you need robots and AI. In fact, recent research in human-robot interaction, and more generally in human-machine interaction, is enabling machines to adapt to people rather than the other way around. Here's a robot that learns how to be a teammate, helping a worker install a cable; the way the robot knows how to respond is by monitoring the muscle activity of the human.

We are really imagining extraordinary opportunities in human-machine interaction, and ultimately we might ask: can we get to the point where machines can actually read our brain waves and really understand what we want? The answer to this question is very complex, and in a nutshell it's no; there is too much complexity in the brain activity that can be detected by simple sensors like the caps you have seen. But we have made some progress in this respect as well, and in fact there is a signal our brain makes that we can detect using advanced machine learning. This is the signal our brain naturally makes when we notice that something is wrong, and it doesn't matter what language you think in or what language you speak: this signal, called the error-related potential, is generated by everyone; it has a unique profile, and it is localized. In this video you see an example of using this idea to correct a robot's action. The robot in this picture is trained to sort paint cans into bins labeled "paint" and wire spools into bins labeled "wire." A human is watching the robot perform; when the human notices a mistake, the EEG cap the human wears detects "oh, that was wrong," and that signal, which can be detected in about 100 milliseconds, is enough to reach the robot and tell it to do something different. So here's the video showing the robot putting the paint can away correctly; but now the robot tries to take the spool to the wrong bin, and the human says, "no, that's wrong." Here we go again: the paint is dropped correctly, and here's the spool going to the wrong bin, and the human, through their brainwave activity, has managed to correct the robot's mistake. This is the ultimate human-machine interaction. I would say the idea is fairly new and fairly preliminary, and we have a long way to go before we have intuitive human-machine interactions, but we are making great progress. We are also making extraordinary progress in developing these kinds of tools, integrating them into our environments, and enabling us to carry them with us, so that we can have intelligent gloves and intelligent coats and jackets; embedded sensors could monitor everything from your posture to your digestion, helping you make just the right adjustments to avoid back pain.

These technologies, and this great promise, are enabling changes to our lives, and this is enabled by three interconnected fields: robotics, which puts computing in motion and gives machines the ability to move; AI, which gives machines intelligence in making decisions; and machine learning, which aims to learn from data and make predictions from data. In the future, any field that has data stands to benefit. I'd like to give you a very quick example of how humans and machines can form much better teams than humans working alone or machines working alone, in the context of healthcare. In this experiment, doctors and AI systems were shown scans of lymph node cells to diagnose cancer. On its own, the AI system had an error rate of 7.5 percent, which is worse than the 3.5 percent error rate of the human pathologists. But when the AI system and the pathologists worked together, the error rate went down by more than 80 percent, to 0.5 percent. This is really quite extraordinary. Just imagine a future where every practitioner, even those working in small practices or in rural settings, had access to these kinds of systems. An overworked doctor may not have time to review every new study and clinical trial, but working in tandem with these systems, the doctor can offer patients the most cutting-edge diagnosis and treatment options.

Now, there are a lot of possibilities, but there are also a lot of challenges. On the technical side, it is important to remember that today's greatest advances in AI are due to decades-old ideas that are enhanced by vast amounts of data and computation. Without new technical ideas, and funding to back them, more and more people will be plowing the same field, and the results will be increasingly incremental. So we need major breakthroughs if we're going to manage the major technical challenges of this field. We also need to ensure a future where we can deliver data and computation like we deliver water and energy today, with a simple turn of a knob. If we look a little more under the hood, we see that first among the AI challenges is the data itself. AI requires data availability, meaning massive data sets that have to be manually labeled and are not easily obtained in every field. The quality of the data needs to be very high, and it needs to include the critical corner cases for the application at hand; and if the data is biased, then the performance of the system will be equally biased. Two other things I want to highlight: most current machine learning solutions are black boxes, meaning there's no way for users of the system to truly learn anything from the AI system's workings; and we also have robustness challenges.

We also have significant societal challenges. While the spread of AI will make our lives easier, it's important to remember that many of the roles it can play will displace work done by humans today, so we need to anticipate and respond to the economic inequality this could create. The lack of interpretability might also lead to significant issues around trust and privacy, and we need to address these issues. We also need to consider how data can lead to misuses of the system; in particular, as deepfakes get better and more widespread, the problem of misinformation will become more urgent, for national security and for everyday applications. As we gather more and more data, the risks to privacy will grow, and so will the opportunities for authoritarian governments to leverage these tools. So we have a lot of challenges, but we also have a lot of opportunities, and when I think about the challenges, I like to underscore that these problems aren't like the pandemic we're experiencing today: we know these problems are coming, and we can set out now, in advance, to find solutions at the intersection of policy, technology, and business. We can begin by asking five important questions: What can technology do? What can't technology do? What should technology do? What shouldn't technology do? And what must we do? Because I believe that we all have a moral obligation to steer these advancements in technology toward the greater good. With that, I will stop sharing and see if JZ has a question for me.

Thank you so much, Daniela, and talk about sharing: what an incredibly kaleidoscopic overview, a snapshot of where we stand in late 2021, both with incredibly appealing and evocative visuals and with a summary of so many of the issues. My only question, before we turn to our next speaker, has to do with how much you think we're in an equilibrium with humans and AIs working together, say in cancer diagnosis for a given tumor or whatever it might be. Which is to say: isn't that AI accuracy asymptotically improving, and at some point the doctor is just going to be clicking "yes" rather than offering their own distinct insight?

Well, yes, the accuracy of the AI systems is getting better and better, but that doesn't mean we don't need the doctors, because radiology is so much more than making a diagnosis based on a cell scan. However, there is a lot of routine work that the AI systems can take on and really simplify, summarizing potential challenges or issues for doctors. I also think AI can be very good at examining data and noticing patterns that are too subtle for the naked eye to see. So while I don't think we're ready to offload control to machines, we stand to benefit a lot from having machines call our attention to things and provide suggestions to us, much like how we would use an intern today: we might ask an intern to go and pore over vast amounts of data and make suggestions to a decision maker about where to investigate more and how to think about the global picture.

Well, that indeed paints an optimistic one, and when thinking about the displacement of work, or the complementarity to work, it's good to have an optimistic picture. Your first answer calls to mind Isaac Asimov's observation that anything a machine can do is beneath the dignity of a human to be required to do as part of their work; they might choose to do it, but, I can tell...

Well, JZ, if you think about all the pains that many industries are experiencing today with hiring, then the issue of having machines, whether they're embodied like robots or whether they are computational tools like the AI systems we're talking about, begins to look quite appealing, I think. To me, the excitement is the ability to offload some of the routine tasks to machines so that we can focus on more creative thinking and more interesting problems. So it's all about building increasingly capable tools that support people with cognitive and physical work. AI is really a tool; it's not any kind of magic. And by AI I mean AI, machine learning, and robotics; it's important to think of them as a trio. Just like any other tools, these systems are not intrinsically good or bad; they are what we choose to do with them. I believe we can choose to do extraordinary things, while keeping in mind that there could be negative repercussions, and engaging in conversations like the ones we're having today, to make sure that we avoid issues and that the benefits serve the greater good.

Thank you so much; more discussion to come. By way of transition, I think also of Daniela's slide showing the arm trying to sort the two objects into the bins, from a rear view of the human with the EEG scalp attachment, which can evoke a kind of horrified fascination. Within that system is a machine trying to glean the states inside the cranium of a person, and who better to tell us a little about that next than Rana el Kaliouby, who is about reading not just the cognitive states, a yes or a no, but the emotional states of people, EEG not required. So, over to you.

Thank you, JZ, and thank you, Daniela. Very excited to be here with you all today and to build on Daniela's work.
So, I've been on a mission to humanize technology by bringing emotional intelligence to our machines. I first did this in academia: I did my PhD at Cambridge University, then came to the MIT Media Lab, and then within a few years spun out of MIT and co-founded my company, Affectiva, to be on this mission to bridge the gap between humans and machines. Very recently we were acquired by a Swedish company called Smart Eye that shares this vision and mission.

So what is emotional intelligence, and how does it apply to AI? If you take a step back and think about human intelligence, of course our IQ, our cognitive intelligence, matters, but our emotional intelligence and our social intelligence matter just the same. In fact, we know that people with higher EQs tend to be more likable; they're more persuasive; they're able to motivate behavioral change. And I believe this is true for technology that is so deeply ingrained in our everyday lives, and AI certainly is: it's becoming mainstream, it's taking on roles that were traditionally done by humans (we saw some examples from Daniela's work), and this will continue to happen. It's important that these technologies and these AI systems have IQ, but it's just as important that we take a very human-centric approach and build empathy and emotional intelligence into these machines.

So I've been on this quest to build emotion AI, and to do that I had to really dissect how humans communicate their mental and emotional states. It turns out that only ten percent of how we communicate is in the actual choice of words we use; 90 percent is nonverbal, and it splits roughly equally between your facial expressions (are you smiling? are you frowning?), your body posture and gestures, and your vocal intonation: how much energy is in your voice, how fast are you speaking. If you combine all of these nonverbal signals, that gives you a really good picture of the emotional and cognitive state of a person, and you are then able to leverage this information to make all sorts of decisions. Humans do that all the time to build empathy and to build trust, which is so much needed in our human-machine interfaces.

The way you actually build technology, or algorithms, to detect that is essentially a combination of machine learning techniques such as deep learning. I personally did my PhD, over 15 years ago now, on Bayesian networks, so I spent a lot of time thinking about that. But all of this is driven by tons and tons of data: you basically need hundreds of thousands of examples of people smiling and frowning and looking tired or excited, and then you feed them into, say, a deep learning algorithm, and it learns over time to detect these human states. It can be quite challenging, because humans are complex and a lot of these expressions can be very subtle and nuanced, but we're making a lot of progress, and at the moment we're actually able to detect about 40 different emotional and cognitive states in real time.

So what are the applications, you may be asking? We partner with 28 of the Fortune Global 500 companies, in 90 countries around the world, to help them understand the emotional engagement they have with their customers. You can imagine a video ad, or an online lecture where you want to understand where the students were excited. Think about this symposium right now: wouldn't it be awesome if, with everybody's consent of course, I were able to see a real-time graph of the level of engagement of all of you, since we can't be together in the same room? So there are a lot of applications, everything from market research to online learning, and I wanted to give you a couple of examples. I'm going to share my screen and show a few videos of some of the applications we are focused on at the moment.

One application we are very focused on is in the automotive industry. We're seeing a lot of legislation and safety regulations requiring cars to be able to detect whether a driver is tired. We have four levels of drowsiness, everything from attentive all the way to literally asleep, and we've been collecting a ton of data over the last four or five years; it's actually quite scary, the real-time behaviors we've been seeing in market. We've had to retrain all of our algorithms to detect people wearing masks, obviously, with the pandemic. We're able to detect your gaze direction, to detect whether you are texting while driving or distracted; body posture recognition (are you holding a phone?), combined with object detection. And I want to point out that this is implemented in a car, but a car is only a robot on wheels, right? You can imagine that some of the robots Daniela showed could definitely have emotional and social intelligence, and we're starting to see more of these social robots deployed in homes, in retail, et cetera. We can detect multiple people; it doesn't just have to be the driver. We can combine face detection with body key-point detection and estimate things like gender, ethnicity, and age, which can be very helpful information as well. And then, of course, we tie all of that together with an understanding of the emotional state of the individuals in the vehicle.

One of the key concerns I have, and Daniela touched on this, is the importance of the diversity of the data, so that we avoid data and algorithmic bias. If we're training on a very non-diverse set of people, the algorithm is not going to recognize people who look like me, for instance. So that's really key: as we map out this machine perception pipeline, it's so important that we train and test with very diverse populations. This is one example of how we are currently deploying this technology; it is already in about half a million vehicles around the world, and it's ramping up very quickly over the next few years.

Another application I'm very passionate about is mental health. When you walk into a doctor's office today, the doctor doesn't ask you, "JZ, what is your blood pressure? What is your temperature?" They just measure it; we have sensors for all these vital signs. But in mental health, the gold standard is still "on a scale of one to ten, how depressed are you? How stressed are you? How much pain are you in?" It's very subjective. And we know there are facial and vocal biomarkers for all of these mental health conditions. Given that we spend so much time in front of our computers and our phones, that's an opportunity, again with people's consent, because that's really important (this is obviously very personal data), to build a baseline for every individual; then, when that individual deviates from the baseline, we can flag it to the person, nudge them to change their behavior or get help, or share it with a loved one, a clinician, a doctor, et cetera. So that's an area where I'm very excited. Just over the last year, with telehealth really accelerating, I'm seeing a number of startups in this area, and I'm excited to see where they go.

And then one of the very first applications of this technology, which we explored at MIT even before we started Affectiva, was autism. As you may know, individuals on the autism spectrum really struggle with nonverbal communication, so much so that they will often avoid looking at the face altogether; it's just too much information. For kids, this really affects their ability to learn and to make friends, and for adults it affects their ability to get and keep jobs, and also to be in relationships. So it's a really key problem, and we have partnered with a company called Brain Power.

[Video clip] Eight-year-old Matthew Krieger has been diagnosed with autism. "A lot of the trouble he gets into with other kids is he thinks he's funny, and doesn't read at all that he's not, or that they're annoyed or angry." Matthew's mother, Laura, signed him up for a clinical trial being conducted by Ned Sahin. "I want to know what's going on inside the brain of someone with autism, and it turns out parents want to know that too." "You get points for looking for a while, and then even for looking away and then looking back." Sahin's company, Brain Power, uses Affectiva software in programs Matthew sees through Google Glass. These games are trying to help him understand how facial expressions correspond to emotions, and to learn social cues. "One of the key life skills is understanding the emotions of others, and another is looking in their direction when they're speaking." "Looking at your mom, and while it's green, you're getting points; when it starts to get orange and red, you'll slow down with the points." "Am I looking at you?" "You are looking at it." Just a few minutes later, the difference in Matthew's gaze overwhelmed his mother. "I'm gonna cry." "Why?" "When you look at me, it makes me think we haven't really before, because you're looking at me differently."

Brain Power currently has about 450 of these systems deployed with different families around the US, and we're already finding that with the Google Glass, in combination with emotion AI, the kids are actually improving in their ability to interact with others and to read and understand these nonverbal cues. The key question is whether this is a learning tool or an assistive tool: if you take the glasses away, does the learning generalize and persist? That's what they're actively exploring, but it's really, really powerful work.

I often get the question... you know, I am optimistic about this technology, but I'm not naive. I know that this technology can be used to discriminate against people; it could be used to profile people. We often get approached by various governments, and also organizations, that want to do lie detection or deception detection or surveillance and security, and we have taken a very hard stance that we are not going to engage in these use cases and in these industries, even though as a company we could probably drive millions and millions of dollars of funding and revenue. I believe this technology has incredible opportunity not only to transform human-machine interfaces but also to reimagine human-to-human connections, and we've chosen to focus our efforts and our mind share on that. Even beyond our company, I'm a huge advocate for the ethical development and deployment of this AI, and I believe one key component of that is ensuring a diversity of voices and perspectives in how we think about emotion AI, its use cases, its applications, and, again, where we should use it and where we should not. And I will end with that.

Thank you, Rana, thank you so much. Talk about a presentation that takes us on a journey. I'm pretty confident in predicting that if your software, with consent, were tuned to the emotional states of those tuned in right now, or watching later, they would be fascinated and touched, and perhaps a little timorous. That's what you were getting at in your concluding remarks about your awareness of the ethical implications of some of this, and I just wanted to ask a question to get at that a little. Your car example for tired driving is compelling; as somebody who teaches torts, I'm well aware of how amazing it would be to ultimately, through the tort system, have that be standard equipment on a car. It would literally save lives from the day it was installed, and of course, when you're tired, you don't process that you're tired; you've got the Dunning-Kruger effect working against you, and the car is helping you out. But when you said 90 percent of our communication is nonverbal, my guess is that more than half the people tuned in, and I would be in that half, suddenly straightened up: what am I broadcasting? It doesn't have to be a poker game to want to be able to project an emotion that may not exactly line up with whatever is roiling inside of us; I think of the Duchenne smile, maybe a grin, but a kind of forced one, that we might all be familiar with from our parents, or whoever, asking us to pose for a photo. So, I understand you're trying to keep a tight anchor on who would use the insights and the amazing technology you're developing, but how would it be, ten years or sooner from now, if we have things like Facebook's Ray-Ban glasses, or, in your video, Google Glass, that you just wouldn't have people, if not using your technology then that of a competitor, able to walk down the street or sit in a conference room, and as they look at each person, know how that person is reacting to them and to others? They'll get a lot of telemetry that otherwise would be theirs to figure out, and to doubt.

Yeah, there are a lot of privacy and ethical questions with regard to where you can deploy this technology, even outside of security and surveillance. We get a lot of requests to deploy this in retail stores, from a customer-experience analytics perspective. And actually, given that a lot of meetings are now happening online, right, on Zoom... (A Zoom plug-in or something?) Exactly. You can now, for the first time ever, really quantify who gets airtime, who's speaking, who's cutting whom off. On the one hand, I think it can be very powerful data, and if it's used in the right way it can be quite productive; it can be used to really enhance meetings, meeting dynamics, and team behavior. But if it's used in a way to spy on people, then I'm against this.
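As an aside on the meeting-analytics idea Rana describes, quantifying "who gets airtime and who's cutting whom off" from diarized speech segments is simple to sketch. The following is a minimal illustration only; the segment format and the overlap-based interruption heuristic are assumptions for the example, not Smart Eye's or anyone's actual pipeline:

```python
# Minimal sketch of meeting-dynamics metrics from diarized speech segments.
# Each segment is (speaker, start_seconds, end_seconds). Treating "a new
# speaker starting before the current one finishes" as an interruption is a
# simplifying assumption, not any vendor's real algorithm.
from collections import defaultdict


def airtime(segments):
    """Total speaking time per speaker, in seconds."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    return dict(totals)


def interruptions(segments):
    """Count (interrupter, interrupted) pairs: B starts while A still talks."""
    counts = defaultdict(int)
    ordered = sorted(segments, key=lambda s: s[1])  # order by start time
    for (a, _a_start, a_end), (b, b_start, _) in zip(ordered, ordered[1:]):
        if b != a and b_start < a_end:
            counts[(b, a)] += 1
    return dict(counts)


segments = [
    ("alice", 0.0, 10.0),
    ("bob", 9.0, 15.0),   # bob starts while alice is still speaking
    ("alice", 15.0, 20.0),
]
print(airtime(segments))        # {'alice': 15.0, 'bob': 6.0}
print(interruptions(segments))  # {('bob', 'alice'): 1}
```

Whether metrics like these enhance a meeting or surveil it depends entirely, as the panel discusses, on who sees the numbers and why.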
and i'll give you an example in china um i've not seen this myself but but i've seen reports of it in classrooms apparently they're using this technology to detect if kids are fidgeting or not paying attention and if they are they get penalized and and you know they got reported to to to the head of school and their parents and i'm like i'm thinking this is an amazing opportunity to engage a student who is disengaged right as opposed to penalize the students so same technology very different ways of applying it and so i guess i'm i guess daniella said that too technology is neutral it's how we choose to design and deploy the technology and the use case that's going to make the difference whether it's really helping individuals or being used to hurt individuals and of course just in a yes and sense it regresses in part to the eternal question of who's the we making the decision and if the wii is a system rather than just people getting together to make a decision what are the incentives of the system overall which might be you're saying millions of dollars could come your way if you weren't to have the scruples those millions of dollars are a demand looking for a supply that might come from elsewhere i can't believe i'm backing myself into making an argument for strong patent protection so that uh and powerful regulation too and thoughtful regulation yeah yeah wonderful thank you so much for such a provocative um and moving talk it's not often that you feel like you get a glimpse of uh the future and how different it might be uh even in a quite short term but uh every time you present that's what you are presenting thank you thank you uh so uh our third and final um contributor for this panel uh aj agrawal um thanks so much for joining us and uh maybe for rounding out our tour as we think about uh the economics of a.i and

the ways in which ais can make predictions that we might not be able to make but that could prove accurate and how to process that as people so over to you thanks so much for contributing today wonderful thank you very much uh nice to be here and jonathan thanks for the introduction um so i'm an economist and i will take a slightly different view on this uh or pardon me different approach and the way um and i should actually just begin by thanking the organizers for including me here this has been wonderful to listen uh to daniella uh and reyna both uh learned something from each of them and um and so the way economists uh think about new technologies is um reasonably simplistic actually uh which is every new technology that comes along um you know most people think about a technology and are interested in how it works they want to see under the hood they're interested in how it's going to affect their life um whereas for an economist uh we look at new technologies and strip away all the complications and and frankly a lot of the fun and reduce every technology down to a single question and that question is what does this reduce the cost of so for example in the case of um the internet uh economists study that for a while and eventually okay we get it this reduces the cost of search uh reduces the cost of digitally distributing information and reduces the cost of communication in the case of the semiconductor industry economists studied that for a while okay we understand this technology reduces the cost of arithmetic it makes arithmetic cheap and so the same is true with ai uh if you were to ask an economist you know what is ai if you ask a technologist they would probably have an image like this in the back of their mind and they would uh talk to you about advances in uh neural networks and they would tell you about things like um backwards propagation and gradient descent and things like that if you were to ask an economist what's going on in ai they would not 
have an image like this in the back of their mind they'd have an image like this and the reason that an economist would say this is such a profound technology is because the vertical axis represents something that is extremely foundational it's something that is in embedded in so many things we do and in the case of ai that thing is prediction so the way an economist thinks about ai it's about a drop in the cost of prediction that ai is making prediction cheap and so what does it mean when something becomes cheap back to economics 101 downward sloping demand curves when something gets cheaper we use more of that thing and so that's the way we think about how the uh how ai will interact with people is is as a prediction tool that becomes more and more ubiquitous because it is cheaper and cheaper to deploy so what is prediction you can think of this very simply as predictions taking information you have to generate information you don't have so that includes all the things that you and i historically thought of as prediction like for example in business the the data i have are sales over the last 20 years and the data i don't have are sales in q3 next year and that's the thing i want to predict but so is the data i have are all the pixels in the medical image and the data i don't have is the label on the tumor as malignant or benign that's also a prediction prediction the label i'm predicting the label on that on an image and so uh what we began to see uh as we looked out is that um many of the applications in ai's really they fall into two categories from an economics perspective there are um everywhere that we're already using predictions we just start using more of them because they're better faster and cheaper uh so for example uh in banks everywhere they were already doing predictive analytics like in fraud detection anti-money laundering sanction screening now they're just using a new tool they're taking out some of their older statistical tools and dropping in 
new machine learning tools. That's category one. The other category is taking problems that we didn't used to think of as prediction problems and transforming them into prediction problems, so that we can bring this new technology to bear. In semiconductors, one canonical example everyone's familiar with is photography. We used to solve photography with chemistry; photography was a chemistry problem. As semiconductors proliferated and became cheaper and cheaper, the cost of arithmetic went down, and we eventually transformed photography into an arithmetic problem. Now we're doing the same thing with AI: as prediction becomes cheaper and cheaper, we take problems that we didn't use to think of as prediction problems and transform them into prediction problems. A canonical example everyone on this call is familiar with is, of course, driving. We are effectively turning driving into a prediction problem. The car doesn't have our eyes and ears, so it gets its own sensory inputs: cameras, radar, lidar, and other things on the outside of the car. The way you can think about this, very simplistically, is that we train the car by putting humans in it; they drive for millions of miles, and the AI effectively begins to be able to predict, when it receives some... I don't know if it's the AI blocking you, but I at least have lost your audio. Oh, okay, can you hear me now? You are back. Okay, all right. So I was effectively just describing driving as a prediction problem, and there are all kinds of problems that we didn't used to think of as prediction problems that we have transformed into prediction in order to use AI. JZ, can you hear my audio still? Yes, I can. Okay, wonderful. So these are all problems we didn't use to think of as prediction that we are now addressing as prediction problems. That's my point number one: just think about the cost of prediction
dropping significantly. The second point here is judgment. We can take any task and break it down into these elements. Imagine we were in person in Cambridge, on campus, and as I was giving this presentation and walking around, I bumped my knee on the lectern, and a few days later my knee is in a lot of pain. So I go and see the doc. The doc probably asks me some questions; maybe she sends me for an X-ray. That's input she's collecting. Then she makes a prediction based on the data she has. Maybe she says, okay, 90 percent chance you've just bruised your knee, 10 percent chance it's a hairline fracture. That's the prediction. Then she applies her judgment. One way to think of judgment is as the cost of a mistake: what's the cost if I treat this as a bruise but it's really a hairline fracture, versus if I treat it as a hairline fracture but it's really a bruise? Think of that as her judgment. Then she uses her prediction and her judgment to reach a decision. Both of the prior speakers used the word "decision" in their presentations; I think that's very central to our thinking about humans and AIs. Let's say her decision is, okay, I'm going to treat this as a bruise: put some ice on your knee, and if it's still hurting in a few days, come back and see me. A few days later, I'm either all better and good to go, or I'm in more pain and I've got to go see the doc again. Either way there's an outcome, I'm better or I'm not, and we learn from that outcome; it becomes training data. From an economics perspective, here is what we're interested in: people are talking about AIs coming and taking people's jobs, but AIs really are a substitute for one thing in this diagram, prediction. That's it. AIs substitute for human prediction. So in economic terms, as the cost of machine prediction goes down, it will push down the value of human prediction, because that's the substitute. But all
those other things in this diagram are complements, and the value of complements goes up. When the cost of prediction falls, the value of the complements goes up. For example, the value of human judgment goes up, which is the value of deciding what to do with the prediction. I teach in a business school. Thirty years ago, when people came to business school and took, say, accounting, they would get homework assignments of tearing out a page of the phone book; for the undergraduate students, the phone book was a thing that used to have everybody's phone numbers, and you would go to it to look up someone's number. They would go home and add up all the numbers on the page to practice their adding. Now, of course, we don't expect anyone to practice adding and subtracting when they come to business school; what they do do is apply their judgment to the output of the calculations produced by their spreadsheets. The judgment about what to do with the predictions and how to deploy them: the value of that will increase.

My last bit here is what happens when judgment can start being predicted. I'm just going to show you some experiments we're working on. I'm the co-founder of a company called Sanctuary. We make these humanoid robots, and the control systems are humans. The main thing to keep in mind with our robots is that they are interacting with the world; as the human pilots the robot, the robot starts to learn how to interact with its environment, and we can start predicting an increasing fraction of the duty cycle of whatever task it's doing. What we're interested in is what fraction of the duty cycle can ultimately be taken over by the AI via prediction. For example, this one is berry picking. Right now this is a combination of human pilot control and an AI, and as we pick more and more
berries, the AI is able to take over larger fractions of the duty cycle, and whenever it gets to a point where it does not have high confidence in its prediction of what to do next, it goes back to the human pilot and says, I'm stuck, can you take over control? The last example I'll illustrate here is playing chess. "Let's play some chess." "Let's do it." In this chess example, I'm just showing you what the human pilot is seeing, on that display in the top right corner. The human pilot is not a chess expert, but we're using a chess-expert AI to guide the human, who is then moving the chess pieces while playing against their opponent. As we repeat the exercise, the AI is able to take over increasing fractions of the duty cycle in terms of the motion planning of moving the physical chess pieces. I'm going to wrap it up there. We've written a book called Prediction Machines: The Simple Economics of AI, and the remarks I gave today are from the book.

Wonderful, thank you so much. If you want to put a link to the book in the chat, I think as a panelist you're entitled to do that for everyone, or maybe one of our organizers can, so people can click their way to order it or learn more about it. First, a clarifying question: when you had your slide up of the duty or task cycle, were you saying that prediction has been the realm of the... that's the one part for the AI, but now it can inhabit others of the circles, or are you saying it's still confined to that one circle among the rest? No, the AI is entirely confined to the prediction circle. The point I was making there is that as we divide up the tasks into what's prediction and what's judgment, when we get enough examples of judgment, you can think of it as shifting over into the
prediction column; that's now a thing we can predict. If we get enough examples of a person applying their judgment, of what the human would do in a particular case... we were given an example earlier; I think Daniela used the word suggestion or recommendation. In other words, she said, think of the AI as giving the human a suggestion or a recommendation, and then the human, whom she compared to working with an intern, makes the decision. Once the AI gives enough suggestions, and the human makes the decisions, and we get feedback on what the human did with each recommendation or suggestion, then we can start predicting that too. So we can keep moving the boundary, but it's always only predicting; so far we have no machines with agency. It might only be predicting, but, and this is the thing I was clarifying, and you're right, the connections with Daniela's presentation are just terrific, that prediction bubble is growing and growing. It's a dynamic bubble; the sphere of things that are candidates for accurate prediction is getting bigger and bigger, and you need less and less of the surrounding ecosystem of bubbles to have the machine succeed in the world. Yes, that's right. In other words, when I was watching Rana's presentation, my interpretation was that she started off with an AI that was learning to detect and predict when an eyebrow was furrowed. At one stage she would feed that to a human, who would take that label of a furrowed brow and characterize it as some emotion: angry, frustrated, or surprised. Then, when it gets enough examples where it combines the eyebrow furrow with other things, the machine itself can go right to "surprised"; it doesn't have to stick with those
intermediate predictions. Now, from a clarification question to a question question: if we end up seeing a bumper crop in the ability of AI to make meaningful and accurate predictions, might that be unevenly distributed? In other words, we're not just talking, by metaphor, about wealth but about income inequality. Could it be that those in a position, because of the data sets they're starting with or the kind of training they do, create a virtuous cycle, so that their machines are out there being purchased and learning more as they go? Would it mean that accurate predicting could be quite unevenly distributed, that many entities would be no better off than they were before, unless they're in a licensing or other agreement with the entities that manage to make great predictions? Absolutely. In other words, I would say that's the default; it's not a chance, it absolutely will be unevenly distributed. A great way to assess that distribution is the slide Daniela had, titled something like "AI challenges." Every one of the six challenges on her slide is a friction in trying to generate robust predictions for an application. No matter what your application is, you go to her slide of six things and ask how close we are to overcoming each of them. There are some easy ones we've already done: predictions for fraud detection in banks, for example. They spent a lot of money on it, and it was already well regulated in terms of how they would do it; on each of her six categories they had the data, they had all the stuff, so that's a pretty far-advanced application of machine intelligence. Then you go to autonomous driving and all three of the challenges on the bottom of her slide; she had three on the top and
three on the bottom, and on the three bottom ones the car industry is still, I would say, in reasonably early stages of trying to overcome those things. So I think it will be very uneven, but we can anticipate the unevenness by basically thinking through those challenges for each of the applications. Got it. Now, even without the help of Affectiva, I think I have identified in Daniela's window a hand up and a desire to say something. Daniela, over to you.

Yes, I'm eager to contribute. I would like to say that the challenges we are grappling with today are due to our existing approaches, to our current methodologies, which, I want to re-emphasize, are decades-old technologies enhanced by computation and data. They're brute-force methods that didn't work ten years ago but all of a sudden work now because we have faster computers and more data. There are new ideas the community is developing. Not every data item is equally important: there is the right data, and then there is the extra data that does not bring anything new to the training of an engine. There are machine learning methods that are unsupervised versus ones that are supervised. My point is that we are making technological advancements that will make some of those questions disappear foundationally, and I will give you a couple of examples. I want to give you an example of reduction in the size of a network. Most neural networks today are huge: they have hundreds of thousands of neurons at the small end, and we go up into the millions and billions. Each neuron executes a very simple function; it's a function approximation, a thresholding function, but by putting together a lot of neurons and interconnecting them we somehow get this magical behavior. Well, if you have a safety-critical application, it's important to understand how the
network reasons to a decision. We've been looking at driving as an important example of a safety-critical application, and we have shown that we can learn to drive by watching how humans drive, using a deep neural network with about a hundred thousand nodes and about half a million parameters. But then we developed a new model, which we call neural circuit policies. These models are inspired by the natural world, by the neural structure of simple organisms and by how biologists have completely mapped what that looks like. We can achieve the same task of learning from humans how to drive with 19 of our new neurons. Our new neurons are more powerful; they can compute liquid time-constant differential equations. And now, if you have only 19 neurons, you can get an explanation for the system, and you can have a more compact representation of the system. This has implications for how you train on the data, for the carbon footprint of the model, and for a lot of the different technical challenges. The point is that, yes, we have some challenges that are making some industries get ahead of other industries, but there is a new wave of technologies coming, and I believe that in five to ten years' time we will have different tools that will help us, in some sense, equalize.

In some ways that sounds reassuring, and I think it's intended as reassuring, because it's saying things can't get too much out of sync among different industries or among particular competitors. Of course, it also suggests that within a few years we really are going to see an explosion of new technologies into applications, and I guess maybe that puts it over to Rana, as somebody who is an industrialist in these areas as well as a critical thinker about them. I'd love to know, Rana: when it comes time, which maybe is overdue, it's like now, to be
thinking about what boundaries should be placed on the power of these systems, on the differential uses: you just change it by five degrees, and the incredibly salutary, heartwarming case you showed for autism could be applied to the example, I think Daniela's, or maybe it was yours, from China, of using the very same ingredients to gamify kids being more aligned with authority, all that kind of stuff. Here's the 64-bitcoin question to me: the prevailing Silicon Valley and general market-based ethos says that if it isn't prohibited, it's permitted; basically, huge field, go out, innovate, transform, experiment, learn, and if there are problems we'll catch up to them later. How much is this posing a challenge to that framework, and how much would you welcome the corresponding intervention by government, by some outside set of judgments, not just what you owe to your shareholders and to your conscience, in how to shape these technologies while they're still in their infancy? It's hard for me to say this as an internet person, because, gosh, the development of the internet was so critical to have with that permissionless innovation, but I'm curious what your temperature is on that question.

Okay, well, first of all, I don't think we should stifle innovation, but this mantra of, you know, ship it and then beg for forgiveness: I don't think that's going to fly anymore. I actually feel strongly that as leaders, as innovators, as people who are at the forefront of this innovation, we need to set the bar higher. I'll give you an example. We decided a number of years ago to join the Partnership on AI consortium, which was started by all the tech giants, Google, Microsoft, Amazon, Apple, Facebook, etc., but they also invited a number of startups, and we partnered with a number of civil-liberties organizations like the ACLU and Amnesty International. One of
the projects or initiatives we undertook was to outline all of the different use cases of emotion AI and proactively think about the unintended consequences: play it out, simulate it, imagine where things could go wrong, and then try to guard against it. That's a very different approach from literally, you know, "ship it." And is that exercise publicly available? Yes, we published a white paper; I'll have to dig it up and share it. We literally put a table together where every row was a different use case, and then we asked, okay, what can go wrong, and how can we guard against it? We sat around the table with members of the ACLU, who really challenged us, and I thought that was really interesting, because as technologists and entrepreneurs we don't often engage with the ACLU, right? We kind of try to just proceed and not really think about that. So I'm a huge advocate for thoughtful legislation and regulation, but I don't think as leaders we need to wait for it. We should set a very high bar and be proactive about these unintended consequences, and that could go so far as into product design.

I should say I'm working to synthesize a number of the questions our audience participants have been asking; thank you for all those questions. A couple, perhaps. The observation that technology is a neutral tool, that it depends how you use it, is total catnip to the science-and-technology-studies folks, who are all, "no, no, technology is completely political, design embeds values." But to the extent that that is true, does that mean you would welcome, you know, a shadow over your designer's desk while you're mapping out how the product is going to work, trying to make sure that particular values get embedded in particular ways? I actually really believe in defining a number of core values, and using them not just as marketing fluff on your website but using
them to inform your business strategy. My co-founder is Professor Rosalind Picard of the MIT Media Lab. When she and I started Affectiva, we literally sat around her kitchen table at her house, thirty minutes outside of MIT, and said, okay, there are going to be a lot of applications here; how are we going to draw the line? We defined three criteria, core values, that have informed our business strategy: commitment to privacy; acknowledging the power asymmetry, because we wanted users to get benefit out of sharing this data; and transparency, holding ourselves to a really high bar of not making our technology a black box but actually opening it up and explaining the nuts and bolts of it to the extent we can. These core values have informed our business strategy; they're encoded in the design of our algorithms and our solutions. And I think it's important, whether you're an AI startup or a Fortune Global 500 organization implementing AI and digital transformation, that you be clear on these core values.

I think that might put it back to our resident economist, to ask how we avoid a race to the bottom. If you've got people wanting to be responsible and really opening up, in process and substance, but there persists huge demand for all of the fruits of this, and, as Daniela points out, they can be done cheaper, maybe with less data, so it will be democratized, don't worry, or it could be democratized: looking at past history and other technological explosions, what are the prospects for being able to contain this and set a playing field for it, rather than just having whoever's willing to not worry about this stuff be the one who gets the biggest market share? Well, I think Rana had two elements to her response. One was the leadership of the builders, and the other was regulation, and it's hard to imagine the race to the bottom not having a
very big effect on the first of the two she characterized. In other words, there will be people like her who care a lot, just as in food there was Ben & Jerry's and people who cared a lot about how the ingredients were sourced and what they did, and then there were people who didn't care at all about that. Furthermore, this is a field that, probably because of the time it came into its glory, has attracted a lot of attention to the social-justice aspirations of machine intelligence, and meanwhile there are a lot of, I don't want to say bad actors, but countries with other objectives. Sometimes people say, well, we need to control AI the way we control nuclear weapons, but we can't do that, because the control and enforcement mechanisms we use for nuclear rely on an ability to inspect and enforce, and we cannot inspect or enforce when people can write code in their dorm rooms and build models to apply against all kinds of problems. That seems like a huge dilemma; I hope you're about to say, "but here's how we can overcome it." Well, when you say race to the bottom, it implies there is only one direction. I view this differently. First of all, we call it AI, and it feels like there's something magic about it. I think another way to do this is to just call it computational statistics, which is what it is, and say, okay, we're getting these advances in computational statistics, and we've got to make sure the positive uses protect us as best as possible from all the negative ones. I imagine this to be a constant arms race, like the one we saw unfold in the Cambridge Analytica example. If it happens
once, then the key for us is to make sure we don't let it happen over and over again. Now we know that's a thing, and we are building all kinds of tools to protect against it. This is an area in your own expertise, so you'll have views yourself, but my view is you can't stop the bad actors here, and so it's just going to be a constant back and forth between our ability to protect against it and the people looking for routes around it. Well, I must say, to the extent that there end up being excesses and problems, the fact that they arise from mere computational statistics is small solace, understanding that the AI spectre carries its own... The reason I wanted to say that, JZ, is that when people use the phrase AI, for many people it invokes a sense of agency, as if the machine itself has imbued in it some nefarious objective. The reason I like to call it computational statistics is that people then realize the machine is not doing anything except what it's instructed to do by the people who built it. Yes, and that's to say that, to the extent we worry about these things, it might be a bit of a red herring to be fretting about artificial general intelligence and what happens if we have, you know, a Terminator or something, and better, as you say, to fret about extremely powerful, potentially unprecedented tools of assessment and prediction, ones that can scale up. I guess this is maybe back to Daniela pretty quickly, especially once you have crossed the blood-brain barrier between bits flowing and processors flopping on the one hand, and stuff happening in the real world on the other. The videos you've all offered are so evocative because they have a tangible, real-world quality to them that includes, as AJ was saying, the ability to learn by interacting physically with the world, and I don't know if that provides a potential
gatekeeping point once it's escaped the disk platter, or something. I don't know, Daniela, if you have thoughts on that, or any of you, really. Well, I think it's really important to keep perspective. It's important to understand that today's technologies are not going to solve all of our problems, and they're not going to take down the world. If you're worried about robots and you think your job is going away, I just want to ask you: where are the robots? Look on the street; how many robots do you see? Yeah, but that chicken keeps saying, "every day the farmer feeds me; they're my best friend." Well, okay. I want to support what AJ said. We get a lot of advantages from computation and from statistics. We already see how computers can do more than we can: machines can compute with higher precision than we can, and in game playing they can see more moves in advance than we can, and now we're just adding additional layers of sophistication to that process. So I actually don't see the machine that will take over human intelligence coming anytime soon. I really like the way my friend Andrew Ng talks about this. Andrew says that worrying about machines taking over is like worrying about overcrowding on Mars: someday in our distant future this may happen, but it is so far in the future that it's really not worth worrying about. I hear you on machines taking over; I don't know how much that carries over to the dynamic application, in a broad sense, of applied statistics. And Rana, you're going to get the last word; we have to wrap in a second. I also want to ask, as you make your last word, if you can quickly address the question of how you feel about potential defensive measures that people might develop to prevent the use of emotional AI, or effective, sorry, affective AI. You said masks were no big deal, you could sort of retrain
on them, but I imagine there could be other attempts to make things more inscrutable. Okay, I'll answer that first. When we started doing this work, both in automotive and in other industries, the first question we got was, well, if I know that a camera is watching me, am I even going to emote

2021-11-03 21:16
