Inclusive AI - Pittsburgh ML Summit ‘19

Well, hello. My name is Eve Andersson, and I'm a director in Google AI. I work on the two areas that I'm about to talk about. I've been at Google for over twelve years, which is a lifetime in Google years. Before that I mostly had entrepreneurial roles: I had a company based in Cambridge, Massachusetts, and I also taught at a university and helped develop the curriculum for a new computer science university. My plan is to go through this material fairly quickly and then leave lots of time for Q&A, because I think both of the topics I'm going to talk about are really interesting but also a little bit controversial, and Q&A is my favorite part.

Okay, so we're going to start with accessibility. Accessibility is about making products that help people with disabilities. 15% of the world's population has a disability; that's over 1 billion people across the planet. These include impairments in vision, in hearing, in movement, and in cognition. Some of the things we've been building at Google to help people with disabilities are very AI focused and AI dependent, so I'd like to tell you about a few of them.

One is called Lookout, and we launched it in March of this year. Lookout is an app that aims to help people who are blind or who have low vision better interpret the world around them. It uses computer vision to identify objects and text in the environment, and then it provides a continuous audio stream to the user. You can imagine they might be walking around, maybe with one headphone in their ear, and it's just talking to them, telling them what's around them.

How it works: the camera on the phone continuously captures live imagery of a scene. Computer vision is then used to identify objects, people, and text in the scene. Then we score the items that are returned, deciding which are of the highest importance and which have the highest confidence in recognition, and we speak only the relevant items to the user to avoid overwhelming them. I think you can probably imagine, if you're walking and trying to listen to the world at the same time, and it's just spewing out irrelevant stuff, or too much stuff, that would kind of suck.

This is a really hard problem to solve, and there are a lot of challenges beyond traditional computer vision challenges. One is that if a person can't see what they want to frame for recognition, the images might be blurry, they might be poorly lit, and the user might be moving as they're using it. So we have to use techniques like frame selection to avoid choosing images for recognition that are blurry or where there's a lot of movement (a minimal sketch of this idea follows below).

Another big problem is misrecognition. In this example it's identifying a door as a refrigerator. If you're at home, it might make sense that something like this would be recognized as a refrigerator; if you're at work, it's a little more likely that it's a door.
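Lookout's actual code isn't public, but as a minimal sketch of the two ideas just described, rejecting blurry frames before recognition and speaking only items whose importance-times-confidence score clears a bar, here's an illustrative Python version. The thresholds, importance weights, and detection format are all invented for the example.

```python
# Illustrative sketch only: Lookout's real pipeline is not public.
# Frame selection via a simple blur score, then importance-weighted filtering.
import cv2  # OpenCV

BLUR_THRESHOLD = 100.0   # made-up cutoff; variance of the Laplacian
SPEAK_THRESHOLD = 0.5    # made-up cutoff for importance * confidence

# Hypothetical importance weights per object class.
IMPORTANCE = {"person": 1.0, "door": 0.8, "text": 0.9, "chair": 0.3}

def is_sharp_enough(frame_bgr):
    """Reject blurry frames: low Laplacian variance means few edges."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= BLUR_THRESHOLD

def items_to_speak(detections):
    """detections: list of (label, confidence) pairs from some recognizer."""
    scored = [(label, IMPORTANCE.get(label, 0.1) * conf)
              for label, conf in detections]
    # Speak only high-scoring items, most important first.
    return [label for label, score in sorted(scored, key=lambda x: -x[1])
            if score >= SPEAK_THRESHOLD]
```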

So we have to use the user's context to understand what they're doing at a given time, and then only use the recognizers, the computer vision recognizers, that make sense in that context.

The third issue is latency. If too much time passes between the time the camera detects the objects and when they're read to the user, the information could be completely obsolete. So what we've done is develop a whole bunch of on-device recognizers. This also helps users who aren't connected to the internet, or who want privacy in what's being recognized; for example, if you have it reading your envelopes at home, you might not want all of that going straight to the cloud.

Cognitive burden is one of the biggest problems. It could be overwhelming if there's too much information, or wrong information, so we have to be really judicious in how we score things and in what we decide to present. We have to use techniques like collapsing information. For example, we have an object recognizer and we have a text recognizer. You don't want it to say "can of Coke" from the object recognizer and then say "Coke" from the text recognizer; that's redundant and annoying. So we have to understand when things are part of the same thing and dedupe results.

Lastly, another challenge is extended use. We expect people to be using this for long periods of time throughout their day, and we don't want them to have to pull out their phone and change the controls all the time; that would be really annoying. One of the main ways to use it is by wearing it on a lanyard, like all of you have around your necks right now. So we built in controls so that when the user covers the camera with their hand, it just stops talking, and if they want to resume, we use the accelerometer to detect if the user knocks twice on it, and then it resumes (there's a rough sketch of that kind of detection below). That way they never have to pull out their phone, and it can just run and run. Interestingly, we've been collecting analytics on usage, and we found that 37% of users are using it for more than 200 minutes a day.

This is something that I think is really changing how people interact with the world. One of my favorite quotes from one of our real users, and I'm going to paraphrase because I don't have it in front of me, was something like: "I was walking around in a hardware store and I felt like I could see again. I was examining all of the different products, and I didn't have to ask anybody for help." That made me really, really happy to read. It's one of the reasons I love working in accessibility: you just viscerally see how you change people's lives.
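The real gesture detection is certainly more robust, but as a rough sketch of the knock-to-resume idea, under invented thresholds, here's one way to spot two accelerometer spikes close together in time.

```python
# Illustrative sketch only: not Lookout's actual gesture code.
# Detect a "double knock" as two acceleration spikes close together in time.
import math

KNOCK_THRESHOLD = 15.0   # made-up spike magnitude in m/s^2 (gravity is ~9.8)
MIN_GAP_S = 0.08         # debounce: ignore spikes closer together than this
MAX_GAP_S = 0.60         # two knocks must land within this window

def detect_double_knock(samples):
    """samples: list of (timestamp_s, ax, ay, az) accelerometer readings."""
    spike_times = []
    for t, ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude >= KNOCK_THRESHOLD:
            if not spike_times or t - spike_times[-1] >= MIN_GAP_S:
                spike_times.append(t)
    # Any two consecutive spikes within the window count as a double knock.
    return any(b - a <= MAX_GAP_S
               for a, b in zip(spike_times, spike_times[1:]))
```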
Another area of AI that's really useful for accessibility, in many, many different ways, is speech recognition. It's becoming so commonplace that people almost don't think of it as AI anymore; once things aren't magical, we don't call them AI anymore, but it is, I promise. Very similarly to computer vision, it's taking in these signals, running them through a convolutional neural network, and then providing real-time transcriptions of the speech being said (there's a toy sketch of this shape of pipeline below). This has so many applications for accessibility, and here are just a few.

One of them is a product called Live Transcribe, which we launched in February of this year. This is a product that helps people who are deaf or hard of hearing. If you're deaf or hard of hearing, you put your phone out on the table, and when other people around you speak, you see transcriptions in real time of what they're saying, and you can save them for later. It can recognize many, many different languages. You know, some people who are deaf or hard of hearing can speak, but others can't, or they prefer not to speak, so if you want, as a user, you can actually type what you want to say to the group, and it can read it out to everybody.

I've seen this make a huge difference in people's lives. For example, one of the research scientists at Google, his name is Dmitry, is a really, really smart person, deaf since the age of three, and a mathematician by training. He was actually the impetus for us building Live Transcribe, because he was having trouble communicating with his teammates. Because of Live Transcribe, not only is he better able to communicate with his teammates, but he said that for the first time he could have conversations with his six-year-old twin grandchildren.
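Live Transcribe's production models are far more sophisticated and stream continuously, but as a toy illustration of the pipeline shape mentioned above (audio in, convolutional network over spectrogram features, per-frame character predictions out), here's a minimal sketch. The network size, vocabulary, and library choices are mine, not the product's; a real system would be trained with something like a CTC loss on transcribed audio.

```python
# Toy sketch of an audio -> characters pipeline; not Live Transcribe's model.
import librosa
import numpy as np
import torch
import torch.nn as nn

VOCAB = list(" abcdefghijklmnopqrstuvwxyz'") + ["<blank>"]  # made-up vocabulary

# Tiny 1-D conv net over log-mel frames -> per-frame character logits.
model = nn.Sequential(
    nn.Conv1d(80, 128, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(128, len(VOCAB), kernel_size=1),
)

def transcribe_frames(audio, sr=16000):
    """Return a greedy per-frame character guess (untrained, so gibberish)."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80)
    logmel = np.log(mel + 1e-6)                        # shape (80, T)
    x = torch.from_numpy(logmel).float().unsqueeze(0)  # shape (1, 80, T)
    logits = model(x)                                  # (1, len(VOCAB), T)
    frame_ids = logits.argmax(dim=1).squeeze(0)        # (T,)
    return "".join(VOCAB[int(i)] for i in frame_ids)

# A real system would train with a CTC loss and decode with a beam search
# rather than this per-frame argmax.
```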

Speech recognition can also help with different types of disabilities. For example, something we released a few years ago now, but continue to release new versions of, is called Voice Access. Voice Access lets you use an Android phone without having to use your fingers at all; you're using your voice as a finger, in a way. It's a little bit different from the Google Assistant, where you can do certain things we refer to as high-level commands. Rather, you're doing low-level commands like "scroll down" or "click next" or whatever; anything you can do with your finger, you can do with your voice. So it gives you complete coverage of all apps, including ones that Google didn't write (a small sketch of this command idea follows at the end of this section).

I've actually found this very useful myself. When I started working in accessibility, I pretty much believed that I didn't have any disabilities, but I type way too much, I get sore, and I've had really bad RSI for the last few months. Voice Access has been a lifesaver for me; especially when it first flared up, I was able to just not use my hands at all, and it made a huge difference for my own healing. So thank you, team. If I had known how important it would be to me, I probably would have resourced that project more.

Another thing I want to talk about is the Google Assistant, which was not built as an accessibility application but has had really big implications for accessibility, for a couple of reasons. One is that it removes the need to interact so directly with technology, in the way that technology expects you to, which is a problem for people who, for example, have some kind of cognitive disability. Or even if you're blind, or you have a motor impairment and it's hard to type, just being able to speak something is so, so nice. I have a teammate named Kendra, and her mom has, I think, multiple sclerosis; her mom is in a wheelchair. One day her mom fell out of her chair and she couldn't get back up, and she was able to ask the Assistant to call her husband, and her husband came home and helped her.

It's interesting, because a lot of things that weren't built for accessibility have accessibility implications, and a lot of things that are built for accessibility actually become mainstream. For example, autocomplete for words: that was built for accessibility, and now, I don't know about you, but I wouldn't want to live without it.
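I can't show Voice Access's internals, but as a tiny invented illustration of the "voice as a finger" idea mentioned above, low-level utterances mapped onto generic UI actions rather than high-level intents, here's a sketch; the command set and the `ui` object are hypothetical.

```python
# Toy sketch of mapping low-level voice commands to generic UI actions.
# The command names and the ui object are invented for this example.

def handle_command(transcript, ui):
    """transcript: recognized text; ui: any object exposing generic actions."""
    text = transcript.strip().lower()
    if text == "scroll down":
        ui.scroll(direction="down")
    elif text == "scroll up":
        ui.scroll(direction="up")
    elif text.startswith("click "):
        # Low-level: tap whatever on-screen element carries this label,
        # which is what gives coverage of apps the vendor didn't write.
        ui.tap(label=text[len("click "):])
    elif text.startswith("type "):
        ui.type_text(text[len("type "):])
    else:
        ui.announce(f"Unknown command: {transcript}")
```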

I want to show you a video of a new project that we're currently working on. What I just told you about speech recognition works really well for people with clear speech, without any kind of speech impairment, but a lot of people with disabilities have a different way of speaking than some of the rest of us do. So let me show you a little bit about this project.

[Video plays] "People whose speech is hard for others to understand aren't represented in the data used to train speech recognition models. The game is to record things and then have it recognize things that you say that aren't in the training set. Dimitri has recorded 15,000 phrases. It wasn't obvious that this was going to work. [inaudible] You can see that it's possible to make speech recognition start to work for Dimitri; it should be possible to make it work for many people, even people who can't speak because they've lost the ability to speak. The work that Shanqing has done shows that from vocal sounds alone, you can communicate. But there may be other ways of communicating. Most people with ALS end up using an on-screen keyboard and having to type each individual letter with their eyes. For me, communicating is [inaudible]. Steve might crack a joke, and it's related to something that happened, you know, a few minutes ago. The idea is to create a tool so that Steve can train machine learning models himself to understand his facial expressions: to be able to laugh, to be able to cheer, to be able to boo. Things that seem maybe superfluous, but actually are so core to being human. I still think this is only the tip of the iceberg; we're not even scratching the surface yet of what is possible. If we can get speech recognizers to work for small numbers of people, we can then combine them to build something that really works for everyone." [Video ends]

So, as I mentioned, that's an active area of research right now. We're actually seeing some pretty amazing results with some of the people who've signed up to be testers for that, so I'm just so excited at the potential for helping more people interact verbally.

Okay, I'd like to switch gears a little bit and talk about ML fairness, which is another aspect of inclusive ML. A little over a year ago, I think it was in June of last year, Google released our AI principles. These are things that we live by as we create ML systems, and they're all available online for everybody to read. I want to zoom in on one of them. The second one is "avoid creating or reinforcing unfair bias." What we mean by that is that we want to make sure that the algorithms we create are not making our products worse to use for people based on their gender, or skin tone, or religion, or any number of factors. The ones up here are just examples, and in different countries different examples are relevant for people who have been traditionally marginalized, or whom technology is not serving as well as it could.

Sorry about the formatting here; this slide says that humans have a history of making product design decisions that are not in line with the needs of everyone. I'll give you one example of that. Until 2011, there were no female-body-type crash-test dummies for testing vehicles. The weights, the heights, the shapes of the crash-test dummies tended to be male.
Because of this, not because of any ill will but because the testing was not comprehensive, female drivers were much more likely to be severely injured in a crash. This next slide is also hard to read, but it says that these choices may not have been deliberate, but they still reinforce the importance of being thoughtful about technology design and the impact it might have on humans.

Now, if we think about a traditional ML pipeline: you're collecting data; maybe you're having people label it, or maybe the labels are based on what the end users are doing. You, as the developer, are choosing your objective function. You're choosing which training data you're actually going to use; you're filtering it, maybe ranking it, aggregating it from different sources. Users see an impact; they might click on it, they might not click on it, they might do something with the data; and your system is continuously learning. Well, at every point in this cycle, there is the possibility for bias to be introduced (the sketch below annotates a toy version of this loop). So here's one example of how to combat that.
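As a purely schematic illustration of that loop, here's a toy version in which every function is a no-op placeholder; the comments flag where each kind of bias can enter.

```python
# Schematic only: the stages of a typical ML loop and the bias risk at each.
# All functions are no-op placeholders; the comments are the content.

def collect_data():        return ["example"]  # Bias: who isn't in the data?
def get_labels(data):      return ["label"]    # Bias: labeler judgment, or skewed user behavior
def filter_and_rank(data): return data         # Bias: filtering/aggregation can drop groups
def train(data, labels):   return "model"      # Bias: the developer's chosen objective
def deploy(model):         return ["clicks"]   # Bias: users react to what they're shown

data = collect_data()
labels = get_labels(data)
model = train(filter_and_rank(data), labels)
feedback = deploy(model)
data += feedback   # The loop closes: the system keeps learning from feedback,
                   # so bias introduced anywhere above can compound over time.
```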

This is an example of combating bias by changing product design. Turkish is a gender-neutral language: the word for "he" and "she" is the same word, "o." We found, because of the training data, that "o bir doktor" was being translated as "he is a doctor." But we all know that that is only one possible translation, and it really reinforces many years of bias, which in the real world is actually kind of evening out; but because all the historical data is as it is, the translations were biased. So what the team decided was to make a design decision to surface multiple translations in cases like this.

Another example is fairness by data, that is, getting an unbiased data set. We found, using a commercially available classifier, that the first three images here were being correctly classified as a wedding, whereas the fourth one was not, but it is a wedding. So what we did was launch an Inclusive Images competition to get more images from around the world, and we've released a data set called Open Images Extended.

A third example is fairness by measurement and modeling. This shows what's called the Perspective API. It's a system that determines how toxic a comment on the web is, to help moderators moderate the more toxic ones. It was finding that a sentence like "some people are straight" had a very low toxicity score, whereas "some people are gay" had a very high toxicity score, even though they should both be neutral. The first step is to measure the results among the different groups, the different identities (a sketch of this kind of slicing follows below). Then you can use different methodologies; in this case we used a methodology called MinDiff, and you can look it up, we've published papers on it, to decrease the differences in results for people with different identities (a toy version of that idea follows after the measurement sketch).

So here are just some of the lessons we've learned working on ML fairness across the company. As with the Translate example, identify places where there can be more than one correct answer. Think about the data you're using for training and evaluation: how it's created, and whether it's representative. But don't just measure the representativeness of the data; also measure the model, because even good data can result in a bad model. Give users the opportunity to provide feedback: no matter how smart you are or how inclusive you think you're being, you're going to miss things. We do all the time. So make sure that users have a way to provide feedback, including users who come from a variety of diverse backgrounds.

We've also created some tools to help people in their work. These are all free tools that anybody can use. One is called Facets, which allows you to look at the data. The other is called the What-If Tool, which allows you to look at model performance without writing code.
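As a minimal sketch of the "measure first" step just described, here's one way to slice a metric by identity group; the data, groups, and choice of false-positive rate as the metric are all invented for the example.

```python
# Sketch of "measure first": slice a metric by identity group.
# Data and groups are invented; any metric could stand in for this one.
from collections import defaultdict

def false_positive_rate(rows):
    """rows: (predicted_toxic, actually_toxic) pairs."""
    benign = [pred for pred, label in rows if not label]
    return sum(benign) / max(len(benign), 1)

def rates_by_group(examples):
    """examples: (identity_group, predicted_toxic, actually_toxic) triples."""
    by_group = defaultdict(list)
    for group, pred, label in examples:
        by_group[group].append((pred, label))
    return {g: false_positive_rate(rows) for g, rows in by_group.items()}

# A gap like this one is the signal that mitigation is needed:
print(rates_by_group([
    ("straight", False, False), ("straight", False, False),
    ("gay", True, False), ("gay", False, False),
]))  # -> {'straight': 0.0, 'gay': 0.5}
```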

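The published MinDiff work matches score distributions much more carefully than this, but as a simplified sketch of the core idea, add a training penalty when the model scores two identity groups' equally benign examples differently, here's a toy loss; the weight and the mean-gap penalty are simplifications for illustration.

```python
# Toy version of the MinDiff idea; the real method (see the published work)
# matches score distributions more carefully than this mean-gap penalty.
import torch

LAMBDA = 1.0  # made-up weight for the fairness penalty

def toy_min_diff_loss(logits, labels, group_a_mask, group_b_mask):
    """Task loss plus a penalty when two groups' benign (label == 0)
    examples receive systematically different scores.
    labels: float tensor of 0.0/1.0; masks: boolean tensors."""
    task_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels)
    scores = torch.sigmoid(logits)
    a = scores[group_a_mask & (labels == 0)]
    b = scores[group_b_mask & (labels == 0)]
    gap = (a.mean() - b.mean()) ** 2 if len(a) and len(b) \
        else torch.tensor(0.0)
    return task_loss + LAMBDA * gap
```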
You might have heard already about our Machine Learning Crash Course. We put an intro-to-fairness module in there; this is the same course we use to train Googlers, and over 20,000 Google employees have already taken it. And we're promoting transparency through the concept of data cards and model cards, which show the distributions and performance of both the datasets and the models themselves.

So I think that's it, and now we have a few minutes for questions.
