Day 1-7 Roundtable Discussion

Day one ended with an open discussion on future technologies, including both those discussed earlier and other technological advances we can expect over the next decades. Speaker bios are at futureofinterface.org/bios.

We're now into an opportunity for a serious roundtable discussion about what we've heard today and the questions we're going to try to answer. Among them: What have we learned from today's discussion about the potential and possibilities for the next 20 years? What is it that we can't do that we would like to do, and what's preventing us from doing it? Do we have any solutions to break through those barriers? And what did we miss in today's conversation about projecting two decades into the future? Greg, you might want to add to that brief summary of your aspirations for this next hour of discussion.

First of all, I want to mention that all the panelists who are here, and I see a number of them, are free to turn on their cameras and microphones, because this is open to all of you, and we'd like you all to join in on the topics that interest you. Hi, Kathy. Consider questions like localizability. We heard a lot about AI today, and as soon as you say that AI would be a lot better if it were personalized, it gets scary: all my personal information is going to get fed into this big AI pool, and who all has access to it? So one of the questions, and I don't know the answer, is how long it will be before something like this can be localizable, where you can have all this great AI except that I can pull the model down and run it on my phone, and the information I feed it for the contextualization we were talking about would not go back up to the cloud. And if I can personalize it, as came up earlier, I can shape it and help it overcome some of this bias. I don't want the AI to always assume I'm the average person. I'm not sure what the average person is, but only a small number of people are going to be the average person on every dimension. So I'd like to start off talking about that, plus a number of the other topics, and I know that Andreas has harvested a bunch of ideas to bring forward. Can I first get people's thoughts about the importance of personalization, and of localization of personalization?

Can I give one example of that? Google has an application that runs on Pixel mobiles called Live Transcribe. It will take up to 100 different languages and produce a transcription of the selected language, assuming that's what's being spoken. The limitation is that it has to get a fairly good quality audio signal in order to do the transcription. Historically we pushed the digitized sound to the cloud, processed it there, sent it back to the mobile, and then displayed the transcription, but we believe we're on course to get all of this to happen locally on the mobile, in response to Greg's concerns about personalization and private information never leaving the device. So I see that as a feasible example of localization, although getting high-powered artificial intelligence into something in your hand feels like a stretch. But then we would have said the smartphone was a stretch if we were talking about it in 1965.
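As a rough illustration of the fully local pipeline just described, here is a minimal sketch that runs speech recognition entirely on the device, using the open-source Whisper model as a stand-in for an on-device recognizer; the model size and audio file name are placeholders, not Live Transcribe's actual internals.

```python
# Sketch: fully local transcription, assuming the openai-whisper package and
# a downloaded "base" model; nothing is sent to a cloud service.
import whisper

model = whisper.load_model("base")        # runs on the local CPU/GPU
result = model.transcribe("meeting.wav")  # illustrative audio file
print(result["text"])                     # the transcript stays on-device
```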
I see a hand up. Is it Rain or Rhine? It is Rain, like the weather. Okay.

To your point, we already have AI with local models out in products in use today, Live Transcribe being one example. But also, anyone who makes use of natural-language voices in text-to-speech is using some kind of AI model to produce that natural-language voice, and some of those are served locally, some online. So I don't think it's actually too far out, or that impossible, for some of the more practical or pragmatic versions of these tools, at a smaller scale than the lofty description we use when we think generally about AI. The other thing I want to mention, and I apologize if you're hearing my kids, is that it's not necessarily required that AI solve these personalization problems. Personalization itself, yes, is super important, but do we need AI and machine learning to do this work all the time, or is there a certain amount of personalization that we can design into interfaces through intelligent algorithms that we write, not relying on some fancy futuristic system but using what we already have available to us?

I was referring more to personalizing the AI: you want to use AI for an AI function, but you're afraid to contribute your data anywhere near the cloud, because it gets absorbed and then no one knows where it's going to show up again. I saw a hand go up: Jutta, and then Ted.

I think there were three hands before me, but I'd love to comment on this. It's less, as you said, the use of AI to personalize, and more about overcoming the average training that is there in all of the functions you want, if you differ or are unexpected or an outlier. In order to torque the model in your direction, is there a way we can prioritize the local over the average, the majority, the normative part of the model? Some way of giving privilege to the local as opposed to the general, without the need for all of that training and those huge data sets?

Jutta, you're right, I just saw that I called on people out of order, because Larry's hand was raised against a tree and I couldn't see it. So Larry, then Andreas and Ted, is the order in which people raised their hands.

Hey, everybody. There is a localized AI for speech-to-text, as you mentioned. Right here in Boston, Xander glasses now do not connect to the cloud when they're creating captions in the glasses, so that's low-hanging fruit in some ways. But in my time in high tech I learned quite rapidly that people do not use settings that well. Look at all the options you have in captioning: color, size, font. Rarely touched. So this notion of personalization and customization that you keep private to yourself: here we are at an FOI conference, and I think of FOA, fear of algorithms. We really have grown to not want machines to decide what we should be seeing. But what happened to the good old-fashioned wizards? How about, when you sign on to any of these apps, it runs you through a series of questions that you yourself can answer for the perhaps marginal kind of interfaces you want that are different, without it having to go to the cloud? Keep it all local, but help someone get the customization they want themselves. That seems a much simpler way to get someone over some of the barriers we're looking at.
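A minimal sketch of the kind of local setup wizard being suggested: a few questions at first run, with the answers written to a local preferences file and never sent to the cloud. The questions, keys, and file path are illustrative, not any real product's.

```python
# Sketch: a first-run wizard that stores accessibility preferences locally.
# Questions, keys, and the file location are illustrative only.
import json
from pathlib import Path

PREFS_PATH = Path.home() / ".my_app_prefs.json"   # stays on this device

QUESTIONS = [
    ("captions", "Do you want captions on by default? (y/n) "),
    ("large_text", "Do you prefer larger text? (y/n) "),
    ("high_contrast", "Do you prefer high-contrast colors? (y/n) "),
]

def run_wizard() -> dict:
    """Ask the onboarding questions once and save the answers locally."""
    prefs = {key: input(prompt).strip().lower().startswith("y")
             for key, prompt in QUESTIONS}
    PREFS_PATH.write_text(json.dumps(prefs, indent=2))
    return prefs

def load_prefs() -> dict:
    """Reuse saved preferences; only ask on the very first launch."""
    if PREFS_PATH.exists():
        return json.loads(PREFS_PATH.read_text())
    return run_wizard()

if __name__ == "__main__":
    print(load_prefs())
```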
Excellent. I think a combination of that, or having something that launches and makes suggestions, because a lot of people, if they see a big long list of settings, just blow by it because they don't understand it. On the other hand, you don't want a Clippy that keeps popping up all the time, tapping on your screen and irritating you. So there's going to be an interesting balance there. Andreas?

Awesome. I just want to build on what Larry was talking about. Most of us on this panel have probably been around and working since Web 1.0, so there's perspective there, and not just perspective on tech but on how long it takes for meaningful tech to actually proliferate into the world. Take Larry's point about captioning: it's been around for a while, and now we're finally saying, okay, with Web 3.0 maybe we'll get it right this time, let's build some of these capabilities in and make them prolific, so it's not a compliance issue, it's just the way things should be done. So from the perspective of the future of interaction, or the future of the interface, there are going to be things that are mature enough to build standards around, standards that can be applied and really proliferated, so that they move beyond assistive tech buried in someone's settings and simply become the way the operating system runs. We should think about how we plant those seeds now, so they become fundamental to the kernel of what we're building. Then there's a mix of other things that are emerging and really nascent, blooming like a garden: things like BCI and the other topics we're talking about. There's been work in BCI for decades, but we're still at the sunrise, if you will, of neurotechnologies and other emerging areas in the field. It's the Wild West; there are no standards. So one of my observations is that there's an opportunity for standards and interoperability around some of these emerging technologies, and I would emphasize interoperability standards consortiums before regulation, because you don't want to lock down regulations on things that haven't yet proven to be useful and usable. I wanted to quote Tom Gruber, the CTO and co-founder of Siri, who often talks about humanizing technology. I don't know how brothers and mothers got a bad rap, but when we talk about Big Brother, I think brothers can actually be good. The common idea is that our societal relationship around data and privacy needs to shift the conversation from Big Brother to Big Mother.
That is, the philosophical spirit, the character and the DNA of how AI should behave, and what it should and shouldn't do, should be more like a good mother than a territorial brother, and thinking about it that way is important. One of the other things I heard today is that the role of being human is changing. It used to be humans controlling dumb machines; now we're trying to figure out what it looks like when we're controlling smart machines, so there's a bit more of a dance, and we need to think about where that control lever moves. When we think about tactile I/O controls or brain-control interfaces: the last 20 years of BCI have been about making the sensor do all the work, because you were trying to control a machine that didn't have intelligence. Now we're going to see neurotechnologies controlling things that can think on their own, so the middle ground of where control sits on the spectrum of who's doing what can shift, and maybe the human doesn't have to do as much from a control perspective. Someone earlier said that we're moving from controlling to delegating, and I really liked that: thinking about delegation, and working with machines rather than dictating what the machine does. I'll pass it on to the next person.

Okay, the next hand up is Ted.

I've been spending a little of today thinking about what wasn't said today, but I'll start with the AI issue we have on the table and the personalization question. The early idea was personalizing by doing it yourself, and quite frankly, without feedback we don't always know what we want anyway. So the adaptive approach has some value in giving us a mirror, something to understand what we've actually asked for. I can give simple examples: we had an adaptive TrackPoint, as opposed to a knowledge-based TrackPoint, that much outperformed the others, but we weren't able to deploy it because it, too, got to be brittle. What we're really seeing with this AI now is persuasive technologies interacting with people in surprising ways, with ChatGPT and generative AI, and we see that as unreliable. Twenty years from now it's not going to be unreliable, because of this beautiful "mother" idea: it's going to be there to mother us, not to put us down. Look at what's happening when we try to use this kind of AI to keep nasty stuff off the internet: gigantic amounts of effort at some of the biggest companies in the world to quench dangerous and false communications. It's quite nefarious at the moment, very upsetting and scary, but hopefully it will calm down in 20 years and become more motherly.
Finally, this persuasive, generative, adaptive AI, which will not make us imagine what we would like in our profile but will create it for us, is also going to bring out the expertise within people and within their interactions. That dream excites me, because it speaks to experts and expertise being central to policy: policies such as policing the internet, and maybe even making a society that's more humane and less saber-rattling. That's enough about the AI side of things for now.

Okay, excellent. Jutta?

I wanted to insert one more question. I put it in the chat earlier, but I'm not sure it was fully addressed, and it follows the trends, since we're talking about AI. One of the issues with optimization systems, the currently deployed AI rather than the generative LLM systems that are emerging, is statistical reasoning and people with disabilities: from a data perspective, people with disabilities are the outliers and the excluded minorities, not represented. It's beyond a data gap; it's beyond human bias entering algorithms; it's a statistical problem. The question I have is: what do you think the potential is of the more generative models, the ones that use transfer and transformation learning, to get over that statistical bias against someone who is different from the norm and from the optimal pattern of the past, which is what's being optimized now in decision systems? Is there hope that this particular bias, which people with disabilities feel more than anyone else because they tend to be at the edge of all these other excluded groups, will be handled better by the emerging models, or that those models will be less biased and stereotyped?

I would just add that even when we talk about people who are blind, or who have some other disability, we have the same problem: people tend to talk about the average blind person, or at least the average blind person they've encountered. If you gather data about blind people and dump it in, it just gets washed out by the mass of other data; and even if you focused on it, you're going to get the average blind person, whatever that is. But I thought the LLMs were also all statistically based?

They are statistical, yes, in the large data sets, but the statistical reasoning isn't used for optimization there. The transfer and transformation learning has the potential, I'm hoping, and that's my question: because of the way it can be localized, re-contextualized, transferred to a new context, does that give us some possibility for retraining without pushing against that huge set of data?

Let me throw that out to the group. Is there anybody on the panel who wants to speak to that particular question? Go ahead, Andreas.

Yes, absolutely. That's actually what we're working on: trying to figure out how to localize models that can be personalized either on the edge or on the device, so that the model becomes highly personalized with each and every additional piece of data. The challenge is how you avoid relying on reinforcement learning too much, because you get hygiene issues and regression issues in a model you might want to keep refining, so that you don't go too deep down the wrong hole and it becomes unusable or completely irrelevant. It also depends on how you want to use the model. Our perspective is that we're using models to try to predict situationally relevant phrases, moving beyond next-word syntax into entire utterances that might be semantically appropriate for a situation, so that we reduce the caloric output, the amount of effort it takes for someone to say what's on their mind in a certain situation and generate that speech via an interface. That's a really specific kind of personalization issue; someone else might have a different one. If you tried to adapt the model to a motorized wheelchair, for example, that model might be totally different, but it could still be based on language.
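A rough sketch of the on-device idea described here: score a handful of candidate utterances against the current situation with a small language model that can run locally, and offer the most likely one. The model, context, and candidate phrases are assumptions for illustration, not the panelist's actual system.

```python
# Sketch: rank candidate utterances for the current situation with a small
# language model that can run locally. Model, context, and candidates are
# illustrative; a real AAC system would personalize these per user.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
lm = AutoModelForCausalLM.from_pretrained("distilgpt2")
lm.eval()

def score(context: str, utterance: str) -> float:
    """Average log-likelihood of the utterance tokens given the context."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + " " + utterance, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.shape[0]), targets]
    utt_len = full_ids.shape[1] - ctx_ids.shape[1]
    return token_lp[-utt_len:].mean().item()

context = "At the coffee shop, the barista asks what I would like."
candidates = ["A medium latte, please.",
              "My wheelchair needs charging.",
              "See you at the gym tomorrow."]
print(max(candidates, key=lambda u: score(context, u)))
```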
Thank you. Anybody else want to speak to this question?

I was going to; this is basically what I was going to say earlier, but she beat me to it, asked the question, and set me up perfectly. There has been a lot of conversation about adaptive interfaces that come to meet your needs and personalize based on you. The simplest example most people have seen is that if you use mapping software enough times, it infers where your home is and surfaces it higher in the list when you pick a destination. In the transportation space there has been a lot of success in inferring preferences from observation. For example, way back, and I don't even want to think about when this was, we had a project here on personalized in-vehicle navigation that could learn your preferences for unguarded left-hand turns, bridges, tunnels, all these different things. If you asked participants to set their preferences for those, they were always wrong compared with what they actually displayed when they drove, and the inferred models were always better. Likewise you can start from a population: you could say this set of people is all older, so we can come up with a set of initial preferences based on your peer group and demographic, and then we train and learn from that starting point, as opposed to a zero starting point or a whole-population starting point. So you can change the point at which you begin to personalize, so that you have a head start based on people who are like you. For example, if you wanted to train a speech system to understand deaf-accented speech, which has wide range and variability but also a general space in which it tends to operate, you could start from the mean or the mode of that space and personalize from there, and you'd have a much faster learning process than if you started from the general population. This works really well when the inputs are easy to measure and document and you can get a clean signal on them. Transportation is a good one: we know the features of the roads, and we know where you're going. It gets a lot harder when you get into wide-open, unstructured learning, which is why you're seeing this already in products that have a much more grounded, structured experience, like the mapping software I mentioned earlier.

And the counter-trained examples are of course an issue: how to unlearn, or remove, the learning that runs counter to what you're trying to train or adapt to.

Right. The factors, essentially the value for each factor, unguarded left-hand turns or whatever, start somewhere, but as the system watches you it lowers that cost or raises it. It goes in either direction, not just up. It runs like the eye doctor, where they sit you down, bring the machine close, and do the little flip-flop to fine-tune and see what you really prefer.
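A toy sketch of the pattern described above: route-feature costs seeded from a peer-group prior rather than from zero, then nudged up or down as the system observes which routes the driver actually chooses. The feature names, prior values, and learning rate are invented for illustration.

```python
# Sketch: learned route-feature costs, seeded from a peer-group prior and
# adjusted in whichever direction the driver's observed choices point.
# All feature names, prior values, and the learning rate are illustrative.

PEER_GROUP_PRIOR = {          # e.g., averages for a similar demographic
    "unguarded_left_turn": 3.0,
    "bridge": 1.5,
    "tunnel": 2.0,
    "highway_mile": 0.2,
}
LEARNING_RATE = 0.1

def route_cost(route: dict, weights: dict) -> float:
    """Total penalty of a route, given per-feature counts."""
    return sum(weights[f] * route.get(f, 0) for f in weights)

def update(weights: dict, chosen: dict, rejected: dict) -> dict:
    """Nudge per-feature costs toward the driver's revealed preference.
    If the chosen route had more of a feature than the rejected one, the
    driver evidently tolerates it, so its cost goes down; otherwise up."""
    new = dict(weights)
    for f in weights:
        diff = chosen.get(f, 0) - rejected.get(f, 0)
        new[f] = max(0.0, new[f] - LEARNING_RATE * diff)
    return new

# Usage: start from the peer-group prior, then personalize from observation.
weights = dict(PEER_GROUP_PRIOR)
chosen = {"unguarded_left_turn": 2, "highway_mile": 5}
rejected = {"bridge": 1, "highway_mile": 7}
weights = update(weights, chosen, rejected)
print(route_cost(chosen, weights), route_cost(rejected, weights))
```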
Okay, Clayton, you're next. You're muted.

Yes. Taking off from the really interesting Big Mother versus Big Brother suggestion, I think we need to revisit Phil Agre's capture critique in his great paper "Surveillance and Capture: Two Models of Privacy." In that paper Agre identified a virtually universal problem with conventional automation, which is that those systems imposed what he called a grammar of action on what people can actually do, so that people's activities could be brought within scope for the model the conventional automation was able to deal with. As we move to this new class of system, the grammar of action is going to become implicit and mushy, and as a community we really need to understand the implications of that. There could be some good ones: such systems may be less confining and less distorting of human action. But that may not be how it works out, so I wanted to put it somewhere on the agenda. The second thing is to highlight something I think is common to many of the technical developments people have talked about, and to surface it explicitly: what I understand to be the still-unmet challenge of incremental training and catastrophic forgetting. Many of the things we want these systems to do involve, in one way or another, adding new information to what they "know," and as I understand it this is an unsolved problem, so it has to go on the research agenda. Finally, at a lower level, a question for the people who talked about the brain work: we heard a lot about what we could call the electrical aspect of what's happening in the brain, but as I understand it, especially with regard to things like mood, there's a huge chemistry component. What can we say about the technology available for intervening in that, or monitoring it, and dealing with its time scale and related issues?

A number of years ago I was reading research making the point that we keep thinking it's all neurons. Most neurons use chemicals to operate, but beyond that there may be a lot of other types of signals and chemicals, as you said, flowing around that are used to convey, tune, and operate. So there may be a whole other layer, dark matter I would say, a dark information flow in the brain, that we're not even looking at because it's not electrical. Excellent. Vint, you're next in line.

I was thinking a little about personalization, and it suddenly dawned on me, especially for the situation where a person with disabilities is trying to get the system to adapt to their needs, that if you generalize that to a descriptor which says "these are the settings I need in order for the application to be accessible to me," there might be a privacy problem. If we have to propagate the information about what you need to a variety of destinations, some people might say, "I didn't want the world to know that I need amplification, or bigger fonts, or something else." I don't know what to do about that; it just dawned on me that for some people that information might be considered personal, and yet we still have to provide it in order for the system to respond, unless you can do it all locally, with standardized interfaces to the service on the other side, and then do transformations as needed, local to the user.
This is interesting. We solved that problem for Morphic for that small set of settings, but it never occurred to me that there might be a need for some people to have a public, trusted, cloud-based repository so they can move between their devices and move their preferences in a way that's guaranteed to be private, versus having them moved by a company, where whether they data-mine it or not, you don't trust that they aren't. And of course, as the saying goes, if you're not paying for the product, then you are the product. So maybe that's something we ought to be thinking about: some kind of data trust. I know you've done some work around trusts, Jutta, so maybe you'll have comments on that, and I also wanted to have you talk about the story of adding disability data and it not working out the way we thought it would. Christian, you're next. Somebody's voicing you... they're muted; I'm not hearing anybody voicing. Can somebody voice Christian? Emily, can you voice Christian?

I can't, I don't have him pinned. Can you give me one moment?

Sure, thank you.

Okay, I ended up typing something into the chat box for the panel; hopefully somebody can read that for me, and then I will sign.

Okay, I will read it, and then you can follow along. Christian wrote: "In terms of localization, I prefer to sign, and then I want to have the interpreter read directly from the chat, but then I can provide this in my natural language." Emily, can you see the chat? Go ahead and read it, then.

Okay, great. In terms of personalizing, the localization and adaptation: the idea of having the interface be able to learn what you need is tempting, and aside from what Andreas said about needing to be careful not to go down the wrong path, you also have to be careful because of the issue that abilities can vary from day to day. There are examples related to mobility and cognitive function, but the specific case in point is my own: my voice will change quite a bit depending on whether I'm wearing my cochlear implant processors or not. So it's like having a split personality in how I present to AI using speech recognition, and the adaptation has to be able to deal with sudden, large shifts, every day, even hour to hour, depending on whether I have my cochlears on or not. In terms of my reception it's also a different experience; it's hard to split the two. If I have my cochlears on or not, those are two different situations, so with AI that can perhaps recognize the speech, it would also be different for me to recognize it, and I'm trying to figure out how those controls can change, and of course change rapidly, as need be.
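A minimal sketch of one way to handle the kind of shift described here (and the automatic profile switching suggested just below): keep several preference profiles and switch between them when a detectable condition changes, such as whether the cochlear processors are connected. The profiles and the detection call are stand-ins; a real system would read the device or OS state.

```python
# Sketch: automatic switching between stored preference profiles when a
# detectable condition changes. Profiles and the detection function are
# illustrative stand-ins, not a real device API.

PROFILES = {
    "processors_on":  {"speech_model": "personal_voice_A", "volume": 0.6},
    "processors_off": {"speech_model": "personal_voice_B", "volume": 0.9},
}

def processors_connected() -> bool:
    """Stand-in for querying whether the cochlear processors are active;
    in practice this would come from a Bluetooth or OS query."""
    return True

def current_profile() -> dict:
    """Pick the profile that matches the current, detected condition."""
    key = "processors_on" if processors_connected() else "processors_off"
    return PROFILES[key]

print(current_profile())
```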
That's interesting. We've talked about having different preference profiles that you could switch between, but it would be nice if the system could recognize the change and automatically switch between them. There are many people whose abilities, of all sorts, change from the morning to the evening; sometimes it's not abrupt but gradual over the course of the day, yet from morning to evening it's dramatic: voice fatigue, motor control, all of these things. Also, people with cerebral palsy may have perfect control until something stresses them, such as pressure to do something, and suddenly they lose control, and whatever interface they're using would have to adapt pretty dramatically, and instantly, to a completely different motor situation. Thank you, that's really important. And Vint, it occurs to me that this goes beyond the disability field. Many other people may find, for example, that they behave differently when children are around and when they're not, or at work versus at home, where they're very different people, and they want the interpretation of how they talk and what they mean to differ accordingly, because at work "a little bit more" may carry a lot of emphasis, while at home you may feel much more relaxed and show a larger dynamic range. So this is more than just disability-related. Andreas?

Thanks. And Christian, it's great to hear your voice and your thoughts; I concur with what everyone is saying. Maybe to put a bow on it: we've invested literally billions of dollars in creating lists of toggles and sliders in settings menus on devices, and still very few people get the benefit of them. So maybe we take a cue from biometric authentication for security on mobile devices, Touch ID or Face ID or similar methods, which require very little calibration up front and then do a certain amount of personalization on the back end. If you take delivery of a new device, what are the fewest inputs that could be asked during an onboarding period that would eliminate the need for anyone to go into the settings at all? You could have formulas that get things as close to comfortable as possible, so that you're just tweaking, rather than having to discover all these affordances you don't even know about, or requiring a paid professional to come in and set it up for you. That's sort of silly; you should have the autonomy to set up your own settings and personalizations, whether that's fonts or voice or display, and I think that's where things ought to go.
The glaring thing for me, and I'm one of only a few voices on this panel who have invested a substantial part of their career working with people with disabilities, is that people who actually have disabilities are grossly underrepresented on this panel. And underneath that, I think we ought to have conversations with the companies and individuals responsible for the tools our designers and engineers use, to make sure those tools are accessible, because ultimately the best case is that we have people with disabilities in product management, in design, and in engineering, using the tools to design things that actually work, so that we can reduce our bias. A little soapbox moment for me, but I think it's important not to think only about the endpoints but also about what it takes to get to the endpoint, and to make sure we have the right representation in the process, so that people with disabilities aren't simply relegated to de minimis input for feedback but are actually co-designing, or taking the wheel, so to speak, and making products.

One thing that's particularly interesting, in the spirit of what we didn't talk about, among the things I'm particularly nervous about, is deep fakes. Deep fakes are a really big deal, and we're underestimating how powerful they're going to be in the next year. Think about where things are now, how convincing we can make deep-faked video, audio, social-graph data, entire profiles. I'm genuinely scared of being personally deep-faked. If you think about the compounding effects of being in the metaverse, whatever that looks like, there's going to need to be a user experience, and it needs to be a topic at an industry level, around how we address deep fakes and how we authenticate humans: pure humans versus augmented humans versus outright bots. If you enter a room and it's like The Matrix, with 20 Mr. Smiths, you know which one is the real one, because you're the real one, but how do you convince the world in the metaverse that you're the real one, if they're so convincing?

Wow. We need to think about that.

Some of the other things I heard today: everything we're talking about is human-scale UX. Human scale meaning things we can see and feel and touch, as small as a handheld mobile device, as large as a building or a skyscraper or a transit system in Manhattan, but still human scale. We need interoperability between human-scale things, but we also need nano- or micro-scale interoperability. I'm getting a bit out there, because that's the spirit of day one, but nanobots and similar things are real: you can inject things that are hyper-targeted, with precision medicine, to deal with localized cancer treatments and the like in your gut or your bloodstream.
What if these things go wrong, or you want to abort the process and excrete whatever has been injected into your body? Can you override it, or do you just have to wait for time to tell? There's a user interface of some sort that either a patient or a doctor has to be presented with, to understand what's happening, where things are going in your body, whether they're going in the right direction, and whether you need to hit the plunger. These are 20-to-100-year-out kinds of problems, but they're really happening right now, in the lab. And then, going from nanoscale out by multiple exponents into space: there has been very little innovation from a UX design perspective for zero or minimal gravity. Accelerometers and gyroscopes: think about how AR glasses will perform in space, how we will interact with the spaceship, or with other people or creatures or mammals. So there are some 100-year-out things we should also think about, in augmenting our abilities to operate as human beings in circumstances whose physics are very different from today's.

On your comment about having more people with disabilities involved at all levels of everything: I actually have disability groups complaining, and I suppose it's a good sign, that they can't get good technical people into their organizations anymore, because those people are all being hired by the companies, and as disability organizations they can't compete with the companies. So it's good that those people are inside instead of outside, but it is an interesting side effect. Jutta: in computer vision, the idea of using computer-generated situations or people to train cars was raised, and I see you're next in line, so I wanted to pose that back to you. It sounds like teaching to the test: "this is what I imagine the world looks like, and I'm going to train my cars to deal with what I imagine the world to look like," at least until we get enough experience in the world to run into every single odd situation. But you had some good examples of that, too, that I thought would be good to get on the record.

Yes. A story I've told quite often in talking about AI and disability: in 2014 I tested seven automated-vehicle engines and presented them with the scenario of a friend of mine who pushes her wheelchair backwards. They all, of course, chose to proceed and run her over, assuming she was not going to enter the intersection. They all said, "These are immature models; come back when they've been trained with more data about people in wheelchairs and intersections." When I came back and we retried it, they all ran her over with greater confidence, which shows that it isn't a data-gap issue. If you are highly unusual or unexpected, the computer doesn't know what it doesn't know, and more data isn't going to address the issue. I think you were also suggesting that I talk a little about privacy, and privacy, I think, is a bit of a red herring if you have a disability, because a lot of people with disabilities have already bartered their privacy for essential services. If you're highly unusual, you're going to be re-identified in any aggregate data; if you're the only person in a neighborhood who has ordered a colostomy bag, you're going to be re-identified. So de-identification at source doesn't work, and differential privacy removes the very data that is needed to ensure the service works for you.
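A tiny illustration of this point about outliers: apply a simple k-anonymity-style rule (suppress any attribute combination shared by fewer than k people) to a toy data set, and the one highly unusual record is exactly the record that disappears, so the protection erases the very data that person's service would need. The records, fields, and k are invented.

```python
# Sketch: why aggregate privacy protections fail the outlier. With a
# k-anonymity-style rule, any attribute combination shared by fewer than k
# people is suppressed, and the single unusual record is the one removed.
from collections import Counter

records = [
    {"neighborhood": "Elm Park", "supply_order": "standard kit"},
    {"neighborhood": "Elm Park", "supply_order": "standard kit"},
    {"neighborhood": "Elm Park", "supply_order": "standard kit"},
    {"neighborhood": "Elm Park", "supply_order": "colostomy bag"},  # the outlier
]
K = 2

counts = Counter((r["neighborhood"], r["supply_order"]) for r in records)
released = [r for r in records
            if counts[(r["neighborhood"], r["supply_order"])] >= K]

print(len(records) - len(released), "record suppressed")  # the outlier's record
```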
So I think we need to go beyond privacy and look at how we prevent abuse and misuse, because if you have a disability you are the most vulnerable to data abuse and misuse. There seems to be an understanding that we have privacy by design, that we have these privacy protections, but they're not really working if you are highly unusual. The same goes for AI ethics: the AI-ethics cluster analysis that compares performance between the majority and a specific protected group really doesn't work for you if you have a disability, because you're not part of any bounded cluster, and it's highly unlikely to detect the very specific outlying bias you are experiencing. So I think we need a more nuanced, better approach to these things. We have been working on cooperative data trusts: a group of people who are very unusual working together, whether around a rare illness or something else, with bottom-up ownership of your own data, releasing it with specific permissions so that it doesn't leave you locally unless you agree, and so that you also benefit from the value of the data. That's some of what we're looking at with individuals who have these privacy vulnerabilities.

Jutta, I just want to underline that, because it really strikes me: you can't expect the world to accommodate your differences if you aren't going to tell the world that you're different. Combine that with the fact that it's simply futile to try to hide; unless you go completely off-grid, you're going to be identified. There are all sorts of stories of people who suddenly start getting pregnancy ads and can't figure out why, until two or four weeks later they determine that they're pregnant; the internet knew before they did. So one thing we should note here is your comment that we need to focus on abuse of the data, rather than trying to lock it all up, because, first, you're not going to be able to lock it all up, and second, if you make it so that no one can tell there's anything different about you, there's no way there can be any accommodation of that difference. That's very interesting. Ted, Aaron, Andreas, and then...

I meant to move on to non-AI topics, but I just want to comment briefly on the point that relationships are local. We have a relationship of a certain sort with a certain person or a certain community, and that will have to be absorbed into the way we build our interfaces for them to function reasonably at all. Treating one relationship with privacy and another with openness, sharing certain things with the people who need to know them, is, I think, going to be central to competence going forward. I also wanted to mention, on the business of the deep fakes: right now OptiVision makes modules that put provenance watermarks in their cameras, and we are selling on the order of eight billion cameras a year at the moment. And when I've talked to the people who are looking for deep fakes, the AI guys are losing, absolutely; they're running at 50 percent correct on deep fakes, and a year ago they were doing better.
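A bare-bones sketch of the provenance idea: sign a hash of the captured image with a per-device key at capture time, so anyone holding the matching public key can later check that the file is unchanged. This shows only the cryptographic core, not the in-camera watermarking modules mentioned here or the C2PA-style standards built around them.

```python
# Sketch: per-device signing of captured media for provenance checking.
# Key handling and file contents are illustrative; real camera watermarking
# embeds and standardizes this very differently.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # would live in secure hardware
public_key = device_key.public_key()        # shared with verifiers

def sign_capture(image_bytes: bytes) -> bytes:
    """Sign the hash of the image at capture time."""
    return device_key.sign(hashlib.sha256(image_bytes).digest())

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the image matches what the device originally signed."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."
sig = sign_capture(original)
print(verify_capture(original, sig))            # True
print(verify_capture(original + b"edit", sig))  # False: content was altered
```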
What I find really fascinating about the eight billion cameras, which is more than there are people, is what they will be used for and how. The toaster will know when the toast is brown, and cameras will be in all sorts of things like that. A couple of other things that I think may not have been touched on today, which I'll go over quickly because it's actually 4:30 in the morning where I am: a lot of the VR and AR work has enjoyed talking about, say, an icy virtual world for somebody who's in a lot of pain. But with Scott Greenwald, one of my PhD students at MIT, we pushed him to really test where you can use VR to outperform the competence and ability of not using it, and teaching physics was an example where he demonstrated that. I think evaluation is central to any improvement in user experience. Another example is the large amount of work on force feedback: people putting on suits to go walking, people trying to retrain for various reasons. I think in the next 20 years we're going to be able to use force feedback in more competent ways, to teach us to dance, to ski, to do all of the things we've talked about for decades, because with high-quality ML we're in a position to understand whether we're succeeding and to make these training and learning experiences actually be what we've proposed they would be. And maybe I'll bring it back to context: the relationship is part of the context, but everything is context. You're dancing, you're not kite-surfing; these are different things, and we have to be situationally appropriate for anything to work.

Thank you. Aaron?

I'm going to circle back to privacy. The same research center that funded the work on navigation personalization also funded large-scale surveys of people with and without disabilities, and older adults, asking about their willingness to trade: who should get your private data, and in return what kind of functionality might you get back? These were questions about, say, cars that monitor how well you're driving in order to give you feedback, or robots in your home to help you out: how much privacy would you give up for these capabilities? By and large, and this was a very large survey, the more severe the disability, the more willingness to trade privacy for capability from the system, which makes a lot of sense if you've been in the disability world for a long time.
So I do think there's an awareness that the trade-off can be made if the capability and functionality you get back is high enough, and it's on us as developers, researchers, and so forth to provide as much value back for the amount of privacy we're asking for. I also want to re-emphasize what Christian wrote in Discord about the difference between privacy and confidentiality. For example, with the relay services there are very strict confidentiality rules for relay operators, so there is a model for creating confidentiality requirements when the level of privacy involved starts to get a little too close to invasive. And I want to point out that there's a big difference in who the information is being disclosed to. I mentioned the survey; one of the questions was about your driving data, and that line was very different from all the others, like bathroom data and health data. When asked about your data being shared with yourself, family members, researchers, doctors, government, and so forth, every other category, like bathroom data or food data, followed roughly the same pattern, but for driving there was a huge notch: people did not want their doctors to see their driving data. The reason is that doctors can pull your license. So you have to think about who has access to the data and what the second-order ramifications of their having access are, not just the first-order ones.

The other thing is that if the motor-vehicle company has it and they're also tied to an insurer, the problem is that some of these things are so interconnected behind the scenes that you don't know where all the dendrites are going. You give the information to one place and suddenly it shows up in your doctor's office, or in your insurance company's office, and it may show up in hiring. The thing that scares me is that with more and more of this deep learning, it may show up in your employment through an engine that doesn't pick you, and the engine doesn't know why it didn't pick you; it just knows that, given all the inputs coming in, something made it decide that you are not to be hired. It doesn't give a reason; it doesn't say it's your disability; its gestalt is simply to discriminate. And that's the part that scares me: we don't understand how these things are making their decisions. Andreas, and then Vint.

Sure. I wanted to point out something fairly obvious about how some of these innovations actually get out into the world. Things quite often start in the military: military applications often begin as DARPA-level experiments, and then, depending on where that goes, they migrate into clinical use, or into disability as assistive technology or other embodiments, and eventually there's some bridge into the mass market if there's a useful application. Some of the things I'm thinking about when we consider the future of interaction: most interaction design, now and in the past, has been one-to-one and asynchronous. In most technology-mediated interactions, I'm interacting with a web page or an app; it's me and the app, and the app is an intermediary to do something else, and usually the thing that gets done as an outcome happens asynchronously.
But if we look at what's happening in the military now, we're seeing real-time systems: real-time user interfaces that require closed-loop, very low-latency systems. And we're starting to see an expansion away from one-to-one interactions into one-to-many interactions with real-time controls, and also many-to-one. For the one-to-many case, you might have user experiences where a soldier has to be the man in the middle, interfacing with an AI in order to control drone swarms or other complex systems. You can imagine a world where that kind of one-to-many control interface translates into someone who is paralyzed being able to control multiple things simultaneously, like a wheelchair where you can control the drivetrain and talk at the same time: chewing bubble gum and skipping, if you like. The ability to do multiple things at once is quite profound. The other part, the many-to-one interface: you need smarter decision-making for a warfighter in a situation where poor decisions on a battlefield can have dramatic consequences, and they're usually taking information from lots of data inputs to make decisions in nanoseconds. The same thing applies to assistive technology: think about your care circle, or Be My Eyes, which is a good example of crowdsourced vision for people who are blind or have low vision. That ability to have an intermediary that can support an individual with a disability through either a small network or a large network: these are interfaces, one-to-many and many-to-one, that I think are going to be much more prolific over the next 20 years.

One thing that surprises me is that most people don't realize how much the military drives the technology. Now we're going to see the resolution of our virtual- and augmented-reality problems driven again by the military, who just pushed back on a big contract they had for augmented-reality goggles because the goggles were making the soldiers sick. There's a lot of money to be made if you can figure out how to solve the problems of space programs and the military; sometimes they'll do some of our research for us, sometimes scarily. Vint?

Thank you very much, everybody; I've been taking notes like crazy. Two things occurred to me. One is completely out of the flow of the discussion, but a friend of mine has an artificial leg, and he has been through a number of these, and one of the things he said was that he had to "make dumb" the smart leg that somebody offered him. The reason is that it did things he didn't predict, and you can imagine that if your leg decides to do something you didn't predict, you can fall over, which is exactly what happened to him. So predictability turns out to be really important, and that's a crystal-clear example of it. I think we noticed in today's talks where prediction and predictability turned out to be important components of machine-learning systems, so just a note that we should think carefully about why predictability is so important.
The last point I wanted to make derives from some of the conversation we've just been having about personalization and privacy. Probably most of you know this, but data brokers are out there not only sharing information about you but also getting paid for it, and they're doing so without much regulation. It occurs to me, in the course of this discussion of using data to make things more personalized, that we should be attending to the regulation of data brokers more than we seem to have done in the past.

Yes, and I think that echoes back to some of the things Jutta was talking about: instead of trying to prevent anybody from having any of your data, because they need it, we have to flip it around and look at how it's being used, or abused I should say. You're muted, Rain.

Thank you for that. I'm thinking about this conversation in the context of Andreas's recent comment about systems being one-to-many or even many-to-one, and also thinking back to an earlier discussion on one of the panels this morning, where the question was raised: is tech making us more siloed? As tech becomes smarter and we become more integrated with technology, are we becoming increasingly siloed? Related to that, there are the conversations happening right here about privacy and how we think about it. What I'm realizing is that we might be taking a middle-class, working-age perspective on these questions, because for people who are not necessarily middle class, who may have less access to money, there may actually be more of a many-to-many engagement with technology, where multiple people share even the same smart device, and those individuals are likely to have a very different perspective on privacy. One example is a recently published ethnographic study of a high school that didn't have a lot of money, looking at how the students were engaging with their phones: actually sharing their phones and giving each other the passwords to them. Some of the uses of that technology were not one-to-many, one-to-one, or many-to-one interactions; the students were not siloed in technology but in an in-between space of interacting with the technology and interacting with each other, where you might have a group of teenagers all standing together having a conversation, handing their phones to one another, looking at their TikToks, and then looking at the comments people have left on those TikToks and talking through them. That's a completely different paradigm from what we've been talking about in how people might actually engage with technology, and it breaks down our concepts of privacy; it pushes us toward thinking about many-to-many instead of one-to-many, and it pushes us to think about the lack of a clear border between physical or personal interactions and technology interactions.

That's interesting, and it raises a problem I had also seen, where there was a woman in a particular
