The Future of Technology Runs Through Accessibility. No, really! - Joe Devon
JOE DEVON: Now... I would like for you all to close your eyes, except those that need the captions. Keep your eyes open.
But if you don't need the captions, close your eyes, and visualize a loved one. Now open your eyes and raise your hand if you really have a strong visual of your loved one. Like, the stronger the visualization, the higher. All right! You look like you have a real strong visualization. Do we have a mic? Please say your name and how you...
What you saw. >> I'm Ivan, and I thought about my girlfriend. Because I haven't seen her for a few days.
And I saw her full facial features. The details, the curve of the eyes, the specks on the skin, curly hair, kind of really... Like a portrait of her. That was my image. JOE DEVON: And is it really visual to you? IVAN: Yeah. JOE DEVON: You're seeing it? IVAN: Yeah.
JOE DEVON: With your eyes open or closed, can you visualize? IVAN: Yeah. Yeah. Pretty much so. Yeah. JOE DEVON: And even like... If there was a name tag, or the pattern on the clothing...
IVAN: I think I was mostly focusing on the face. On a face in the air. That was kind of my... And everything was kind of blurry. When you blur your background on Zoom meetings and stuff like that. It was mostly face.
JOE DEVON: Thank you. So... People like you...
And I don't know if that's the majority of the population or not... But that is called hyperphantasia. When you have the condition of extremely vivid imagery. Now, who over here -- when I asked to visualize a loved one -- got a very faint mental image? Very weak mental image? All right. Who wants to share? How about Ben over there? Describe what you saw. BEN: I was visualizing my grandmother.
She passed a number of years ago. So I have kind of the kindness of her features. And gray hair. But the tone of gray is lost. The specificity of features is very soft.
JOE DEVON: Is like black and white? Color? BEN: Uh... No, I do have color. JOE DEVON: And is it really visual or conceptual? BEN: Somewhere in the middle. JOE DEVON: Okay.
BEN: Yeah. JOE DEVON: Does anybody else have like... Even less of a mental imagery? That they want to share? No? Okay. I can share that I don't even get that.
For me -- and this is called aphantasia -- I do not get a visual image. It's just very, very faint for me.
And when you think about it, this is another kind of blindness. It's blindness of your mind's eye.
And we don't even for the most part -- we're not even aware of it. Is that a disability? I don't know. I would love to be able to see my parents visually. I'm really missing out on something. I just never knew I did.
And I don't know if that has any impact in general. Then I'll add one more thing... Are you aware...
And maybe there's someone in the room... That has no inner monologue. That you don't speak to yourself. When you're thinking, you don't hear yourself speaking. Does anybody have that? Everybody here has an inner monologue? But there are a small number of people that have no inner monologue. And I don't understand it.
How do they think? What is the impact on their day-to-day life? We don't know. And do you call that a disability? I don't know. And the purpose of me bringing this up...
Of this whole exercise... Is to understand that our concept of disability is defined very narrowly. And there is a whole inner world that we don't know, that each of us has. So hello. My name is Joe Devon.
Co-founder of Global Accessibility Awareness Day. Chair of the GAAD Foundation. And for work, I am an AI Futurist and head of accessibility at Formula Monks. Which is a technical consultancy. And because of the AI work that I do, I have really been rethinking the entire meaning of disability. And hopefully by the end of this, you will too.
If you haven't already. So when I was growing up, everybody said the five senses. But now they'll typically say 11 senses. The first two I've already mentioned. If you don't have that visual imagery, or you can't...
You don't have an inner monologue, that is an aspect of aphantasia. Some people cannot taste in their mind's eye. Can anybody here taste in their mind's eye? Nobody? Okay.
I can't either. Smell? That's a half raise. You can smell a little bit.
Touch. Can you feel a touch? Okay. We're getting a bunch of people nodding. Thermoception. That's the sensation of heat or temperature. We're getting some yeses.
I've got zero on all of this. Nociception. That means pain receptors. Anybody? Can you imagine pain? And that's the last one that typically you can sense. So we have a few yeses. But there's a wide variety even in this room.
And then you have equilibrioception. Which means your sense of balance. Proprioception is your sense of where your body is in space. Interoception...
That is internal sensations. So are you hungry? Are you thirsty? Do you have to go to the bathroom? Feeling your heartbeat. And then temporal perception is having a sense that time has passed. So these are the senses officially. Though I think even within these senses, we see that there are some subsenses inside of that.
Now, there's been a big debate about the term disability, as opposed to differently abled. Who here feels that they prefer the term "disabled" or "disability"? Raise your hand. Okay. And how many feel "differently abled" is a better term? We have a few people that are not raising their hand either way. I don't know why.
Anybody want to share why you're not raising your hand? >> English is my second language. JOE DEVON: Oh. That's a good answer.
Fair enough. Anybody else want to share why they didn't raise their hand? Yes? >> Hi. I was just saying that some experiences are different, but they're not limiting. And I guess that's why I feel like it's not an either/or. JOE DEVON: Got it.
Okay. Well, so, from the people that raised their hands, it's about 50/50. I personally would go with disability. Because most of my friends with disabilities -- mostly blind, some deaf -- have told me they prefer the term disability. But is disability really a valid metric? I started to question it. And so I'll share a little bit more here.
Color perception. So most human beings, they see color in the RGB spectrum. If there are any designers or frontend developers, you know what RGB is. Right? The reason that we use RGB is that... Most humans have three color receptors. And the way that that breaks down is that women, who have two X chromosomes, they have two color receptors.
Then when you have a Y chromosome, that's one. Setting aside the gender thing. But just in terms of the chromosomes, the X has two color receptors. The Y has one color receptor. And that's why you'll see a lot of men have colorblindness, because one of their receptors doesn't work. But something that was recently discovered is that...
A very small percentage of women actually have four receptors. And RGB is not good enough for them. And they can see 100 million colors. And everybody who has three color receptors can only see a million.
So it's a hundred times more vibrant. Right? And there was one description that I read from a woman who, as a kid, she asked that her room be painted green. And then when they painted her room green as a child, she said: Why didn't you paint the color I want? And she literally could not describe the color, because there was no reference point for the colors that she sees. And so why do I bring this up? Because most of us are color disabled. Right? We're missing a lot.
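Those numbers can be made concrete with a toy calculation -- not a vision model, just combinatorics, and the 100-levels-per-channel figure is an assumption chosen to reproduce the talk's one-million and hundred-million estimates:

```python
def distinguishable_colors(levels_per_axis: int, receptor_axes: int) -> int:
    """Naive combinatorial estimate: distinguishable colors grow as
    (levels per independent color channel) ** (number of receptor types)."""
    return levels_per_axis ** receptor_axes

# Assuming roughly 100 distinguishable levels per receptor channel:
trichromat = distinguishable_colors(100, 3)    # about a million
tetrachromat = distinguishable_colors(100, 4)  # about a hundred million
print(trichromat, tetrachromat, tetrachromat // trichromat)
```

The ratio is the "hundred times more vibrant" in the next sentence: a fourth independent channel multiplies, rather than adds to, the space of distinguishable colors.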
But we don't think of it. We don't even know it. And so really, the definition of disability -- you say that somebody has a disability, because let's say the average person can see, and if you can't see, then you'll consider it a disability.
The same if you're deaf. The average person is not deaf. But this is really a poor way... I think it's a primitive way to describe the situation. And now when you think about memory, there's something called hyperthymesia, which means total recall. There are people that remember everything. And some of them are very unhappy about it.
But it's only about 200 people in the entire world. So for the rest of us, we can't remember squat. Like... I can't remember the name of a character in a movie while the movie is running! And so when you think about it, all of us are memory impaired. Except for those few.
But we don't call that a disability. Does anybody remember The Dress? All right. Well...
Over here... Who sees... So this is The Dress. For those that can't see. This dress was on the internet.
And a lot of people saw this as black and blue. A lot of people saw this as gold and white. I see it... Well, I won't bias anybody. Raise your hand. Who sees gold and white? Oh, only three people.
And who sees black and blue? Wow. The majority. Anybody see other colors? What do you see? >> It looks like there's a white streak. I can't tell if I'm seeing white or if that's just a faded color. JOE DEVON: Interesting.
Well... I see gold and white. The actual color is...
Black and blue. And the reason that we see it differently is... You might have seen optical illusions, where it depends on the lighting of the background.
And our brains are trying to figure out the lighting around this dress. And depending how you translate that... That is the color you're going to see.
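That guess-the-lighting step is essentially a white-balance computation. A minimal sketch, with made-up RGB values for the fabric and the two assumed illuminants:

```python
def white_balance(pixel, illuminant):
    """Divide out an assumed illuminant color (von Kries-style scaling)
    and rescale to the 0-255 range. Both arguments are (R, G, B) tuples."""
    return tuple(min(255, round(255 * c / max(i, 1)))
                 for c, i in zip(pixel, illuminant))

ambiguous = (120, 110, 140)  # a bluish-gray pixel, like the dress fabric

# If the brain assumes cool, bluish shadow light, blue gets discounted
# and the pixel reads as warm gold/white:
print(white_balance(ambiguous, (160, 170, 255)))  # -> (191, 165, 140)

# If the brain assumes warm gold light, red and green get discounted
# and the very same pixel reads as blue:
print(white_balance(ambiguous, (255, 230, 170)))  # -> (120, 122, 210)
```

Same input pixel, two opposite percepts -- which is exactly the split the room just voted on.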
Uh-oh. Lost the... There we go.
If anybody knows the famous XKCD cartoon, he really illustrated this, by taking that strip of dress. And on the left side, he put a blue background. On the right side, he put a yellow background. And then you can clearly see the colors are different, and that causes an optical illusion. Did you just raise your hand? >> It's often a coin toss for me.
Each time I look at this, my brain does a different thing. So sometimes I see gold and white. Sometimes I see blue and black. JOE DEVON: That's funny. I always see white and gold.
I don't know why. Anybody remember the Yanni versus Laurel? Okay. We're gonna try to play this.
I don't know if it'll work. Let me see. >> (audio voice) JOE DEVON: All right. Who heard Yanni? All right. Who saw Laurel -- or heard Laurel? Only three.
Because I'm one of them. Apparently it has something to do with how you perceive the sound -- whether you key on the low pitches or the high pitches. But we all think differently. We all have different abilities.
Our brains are completely unique. And when you look at each of these dimensions separately, you will notice that we each have things that we're good at and things that we're not as good at. Now, what's interesting is that...
Artificial intelligence and accessibility -- people don't realize how they are super related to each other. They're symbiotic. And what I mean by that is: Artificial intelligence -- what is it really trying to do? Other than trying to take sensory input, understand it like a human being, and then translate it? And so if you're doing research in artificial intelligence, and you want to get good, you're gonna work with people with disabilities, in order to understand if you're doing a good job of understanding and presenting the information. But at the same time, as you get good at this, you're also gonna be able to provide better assistive technology, and so the two feed into each other. And this is not something that the big firms doing AI are unaware of.
Most of them, the good ones, are really well aware of it. And OpenAI called Be My Eyes, an assistive technology app, and asked: Would you be a launch partner for our GPT-4 launch? They were doing multimodal -- which means different types of input: sound, pictures, text -- and they launched together.
And they're doing testing with people with disabilities. One aspect of this is automated speech recognition. Again, this is trying to understand the sensory input of spoken speech.
We're trying to get to the point -- or let's say where we're gonna see a real tipping point is when there's parity with humans, in terms of understanding. And there was a big plateau, when it came to automated speech recognition, because in the beginning, they were just trying to take the sounds and map them to words. And that just did not do the trick. Because as our human captioner can attest, when we say filler words like um and uh, it doesn't help to translate that and put that into the captions, and then sometimes you're mumbling, your word doesn't come out right, or maybe you mispronounced something. And it's important to have an understanding of what is being said.
So... They came up with a new concept, which is natural language understanding. And the deeper and deeper that artificial intelligence goes to understand even multimodal -- because you have some audiovisual, like videos that have sound as well -- it's gonna try to understand the images and the video that's being seen. And then it's gonna do a better job of doing the translation. And as this happens, it's gonna be great for assistive technology.
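As a toy illustration of why raw sound-to-word mapping plateaued, here is the kind of cleanup an understanding layer performs. This is a simplification -- real systems use language models, not word lists -- and the filler set is invented for the example:

```python
import re

def clean_transcript(raw: str) -> str:
    """Drop filler tokens and collapse immediate word repetitions --
    a tiny stand-in for what natural language understanding adds on
    top of raw sound-to-word mapping."""
    out = []
    for tok in raw.split():
        word = re.sub(r"\W+", "", tok).lower()
        if word in {"um", "uh", "er"}:
            continue  # fillers don't belong in captions
        if out and word == re.sub(r"\W+", "", out[-1]).lower():
            continue  # stutter: same word twice in a row
        out.append(tok)
    return " ".join(out)

print(clean_transcript("so um the the caption uh should read read cleanly"))
# -> "so the caption should read cleanly"
```

The hard cases the talk mentions -- mumbled or mispronounced words -- need actual context modeling, which is why the field moved from phoneme lookup to learned language understanding.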
But you'll also be able to take the entire database of audio and video or audio-video media, and then provide transcripts, translate that into all languages, needless to say... It's a fantastic assistive technology that's possible. But also, it's going to be providing features that will be helpful to everybody. So Google, they found that there were people that were non-standard speakers, and they really had a lot of trouble with their automated speech recognition. And they created Project Relate. And what this did is...
They found that there are 250 million people who have speech impairments. And so they created an app that allowed people to speak, and that really improved their technology. So, again, it's another example of testing with people with disabilities. All right. This one is a little bit more complicated to explain.
This is Segment Anything from Meta. Is anybody familiar with this? No? Okay. So... Essentially, what you see here are bounding outlines, where what they're trying to do is understand the objects in the picture. And this is also something that they use for video.
And what happens is: As it understands this, it allows you to do things like future technology will be querying an image and saying: What do you see in this image? So let's say this is a kitchen, and you see kitchen appliances and utensils. And you can say to it: Describe this image to me. So typically, you have humans that do alt text. But with this, you'll have a new assistive technology where you can say: Hey, here's a painting. Who painted this painting? This piece of furniture that's attached to the painting -- is that an antique? Is there a connection behind it? This can be very useful if you're blind and you can't see an image.
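Once an image has been segmented into labeled objects, the querying described above reduces to questions over structured data. A hedged sketch -- the `Segment` shape and the kitchen labels are invented for illustration; Segment Anything itself outputs masks, with object names supplied by a separate model:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    label: str   # object name; in practice produced by a classifier, not SAM
    bbox: tuple  # (x, y, width, height) in pixels
    area: int    # mask area in pixels

def describe(segments):
    """Turn per-object segments into a plain-language scene summary --
    the raw material an alt-text or ask-the-image feature builds on."""
    names = [s.label for s in sorted(segments, key=lambda s: -s.area)]
    return "Objects detected, largest first: " + ", ".join(names)

kitchen = [
    Segment("refrigerator", (10, 5, 80, 200), 16000),
    Segment("kettle", (120, 140, 30, 25), 750),
    Segment("cutting board", (100, 170, 60, 20), 1200),
]
print(describe(kitchen))
# -> "Objects detected, largest first: refrigerator, cutting board, kettle"
```

Follow-up questions ("is that an antique?") would go to a multimodal model, but the grounding -- which pixels are which object -- comes from this segmentation layer.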
But at the same time, it's also going to create new technology that's good for anybody that might want to have a deeper understanding of the images they see. And what's really cool with video is that there's gonna be new features that allow you on video to query, let's say, a character on a TV show. So imagine you have something like...
Game of Thrones, where you have 250 characters in the show, and somebody comes up, and you're like... Who are you? So imagine you can stop the screen... So audio description is great for people that are blind. But just for anybody, if you can stop the screen and say: Which character are you? What family are you from? What episode did you start? And can you replay the very first scene that you came in? And all of a sudden everybody can remember -- because you might remember that most of us have a really bad memory, compared to those people that have total recall.
And then you have cognitive translations. So the transformers are the type of technology that made the LLMs very successful with OpenAI. And they're really good at translating, let's say, from English to French or another language. But they're also good at translating from one programming language to another, and they're good at "explain like I'm five." This picture, for those who can't see it, is Michael Scott from The Office, saying: Why don't you explain this to me like I'm five? Which is also a Reddit thing. ELI5. So you can go into a school, take a curriculum, and translate it for different school levels. This is obviously still a work in progress.
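The curriculum idea is mostly plumbing around a chat-style model. A sketch under stated assumptions: the prompt wording and the pluggable `ask` callable are illustrative, not any vendor's API.

```python
def eli5_prompt(content: str, grade_level: int = 1) -> str:
    """Build a reading-level translation prompt for a language model."""
    return (f"Rewrite the following so a grade-{grade_level} student can follow it. "
            f"Keep every fact, drop the jargon:\n\n{content}")

def translate_curriculum(lessons, ask, grade_level):
    """`ask` is any callable that sends a prompt to a model and returns text."""
    return [ask(eli5_prompt(text, grade_level)) for text in lessons]

# With a stub model, the plumbing is visible end to end:
stub = lambda prompt: "[simplified] " + prompt.splitlines()[-1]
print(translate_curriculum(["Mitochondria perform oxidative phosphorylation."], stub, 5))
```

Swapping the stub for a real model call is the only change needed per grade level, which is what makes one curriculum, many reading levels cheap to attempt.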
But you can imagine that this is the kind of thing that might be really helpful for assistive technology, as well as so many other applications. And then finally, AI and robotics -- that's a field where they haven't gone as far. This is a picture of a Groot robot that a friend of mine, who does all kinds of cool robotics, built, and it did show up in Disneyland.
Probably will again. And he tried to recreate the gait. I can't do the gait of Groot. But Groot has a very particular gait. And they put a tremendous amount of work to get that gait right as it's walking.
When they displayed it, they did it with test audiences in Disneyland. It went extremely well. AI and machine learning is going to improve, in terms of allowing you to automatically train a robot to do things like walking. And needless to say, this is gonna be great for assistive technology. AI is going to personalize your experience.
And reality is going to be generated in realtime, in the right format, for you. So if you're blind, you live your life quite verbally. And what's great with the transformer is that it's going to transform your experiences into sound. It'll all be verbal. And if you're deaf, the sensory input will turn it into visuals. And if you're deafblind, it's going to turn into haptics.
Which is really cool. Another complicated thing, which I'll explain soon... This is part of trying to show you where the technology is going. Now, the brick phone was fantastic in its day. I'm old enough that I remember when phones went portable.
And that was awesome. But this is huge. And now the smartphone is great, and probably most people are viewing it as... This is the perfect size. And weight.
But it's not. We're gonna look back on this the same way we look back on the brick phone. And spatial computing is amazing. If anybody's tried the Quest, or Apple is coming out with a new product -- it's gonna be really cool. It is cool. I have two of the Quests.
Not the latest one. But this form factor is just way too bulky and way too big. This is not where it's gonna go. So what does the new technology stack look like? Brain-computer interfaces. That's gonna be the new input device.
And this is a picture of Cognixion. Andreas Forsland is the CEO of Cognixion. And his mom went into the hospital and had locked-in syndrome. So she could not communicate the problems that she was having. How she was uncomfortable.
And it really prompted him to build this start-up that just takes a headset and connects it up to the back of your head, and can allow your brain to give commands. And that's gonna be the new input device. But do you think that anybody else working on this is going to really succeed if they don't test with people with disabilities? For sure, that technology -- that's gonna come from companies like Cognixion, which is doing it where it really counts. Because with locked-in syndrome, if you get this wrong, there are consequences.
And now, I have to go into a bit of an explanation before the next one. So I'm gonna ask you another one of these silly questions. When you see or hear words, letters, numbers, does anybody here have a color associated with it? It's usually about 2% of the population. 2% to 4%. All right.
You've probably heard of it at this point. But it's called synesthesia. I see you've heard of it. Yes.
So I do know somebody that has synesthesia. And it's a cross-sensory perception. So on the left, for those that can't see, there is a square of numbers, all printed in black. And most of them are 5s. Some of them are 2s. And then on the right, it's the same picture.
But the 5s are in green. And the 2s are in red. And people that are synesthetes -- they see different colors...
Sorry. Different letters and different numbers in a particular color. From what I've seen... There's so many different kinds of synesthesia. But my friend, he knew that it was in black. But he still saw, somehow, superimposed color on top of it.
And what's going on here is that you have different senses that are operating. So the color cones were not operating when you see that black ink. And yet, it affected the perception -- came from the color side of the equation.
And so you have that sensory cross. And the reason I'm mentioning all of this is... There's some new technology. And I asked this question. Can you imagine tasting vision? Sort of rhetorically.
And I say that, because this device here is a BrainPort. And what the BrainPort does is: there's a video camera that takes video, and then it translates that into electrical signals on sort of a lollipop that you put on your tongue. Your tongue has lots of nerves.
And then your brain sees the video through your tongue. Now, it sounds super crazy. Right? But it does work.
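The camera-to-tongue pipeline boils down to downsampling each frame to a small grid of stimulation intensities. A heavily simplified sketch -- the grid size and the brightness-to-pulse encoding are invented for illustration, not the device's actual scheme:

```python
def to_electrode_grid(frame, grid=4):
    """Downsample a grayscale frame (a list of equal-length rows with
    values 0-255) to a grid x grid array of stimulation intensities."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))  # mean brightness -> pulse strength
        out.append(row)
    return out

# A frame that is bright on the left and dark on the right maps to
# strong pulses on the left electrodes only:
frame = [[255] * 4 + [0] * 4 for _ in range(8)]
print(to_electrode_grid(frame, grid=2))  # -> [[255, 0], [255, 0]]
```

The brain does the rest: given a stable spatial mapping, it learns to read the pulse grid the way it reads retinal input.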
Because when you think about it, when you are looking, and you typically have heard growing up that your eyes are taking in the color and the light, and then that's getting sent to your brain, but if you think a little deeper, there isn't another human being inside of your brain that has a projector that sees that light coming in. That's not how it works. Your brain is 100% dark.
It's electrical signals. And so this is translating it into electrical signals. By the same token, touching sound. Has anybody seen the TED Talk by David Eagleman? Where he has a haptic vest? This is another super cool thing. So what he did was he took a haptic vest, attached it to the audio of an iPhone, and every sound that was heard created haptic touches on the vest.
He didn't give any instructions, and after, like, four days, an hour a day, this person who was deaf went up to a whiteboard, and they would say a word behind him, and he would write it on the whiteboard. Because he was able to hear the words and understand them through haptic touches on the vest. And this neuroscientist, David Eagleman, has a start-up called Neosensory, trying to make that footprint smaller. So it's just like a little...
What do you call it? Bracelet. And it's just incredible. And it also can cure tinnitus. Which I have. Does anybody here have tinnitus? Okay.
It turns out that tinnitus is... A bit of a hearing loss, and your brain is trying to fill in the blanks. And so what this does is it's gonna take some of those sounds, turn them into touches, and then your brain can understand what's going on.
And that's a cure for tinnitus. Which is amazing. So this is where the technology is going. And this is called sensory substitution. So just like with the synesthesia, you have neuroplasticity in your brain. And so you can switch from using your brain for one thing to using it for another.
As long as you get those senses coming in through one kind of nerve or another. And so... Sensory substitution is what is going to power the new output devices. And then there's another TED Talk by some former Apple engineers, where they have wearables that are projecting UIs on the hand and presumably other things. This is just a sneak preview.
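The vest's core computation -- split incoming sound into frequency bands and drive one vibration motor per band -- can be sketched with a naive DFT. A real device would use an FFT and perceptually spaced bands; the four-motor layout here is an assumption for the example:

```python
import cmath
import math

def band_energies(samples, bands=4):
    """Naive DFT magnitudes over the positive frequencies, grouped
    into `bands` coarse frequency bands."""
    n = len(samples)
    mags = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples)))
            for k in range(1, n // 2)]
    size = len(mags) // bands
    return [sum(mags[b * size:(b + 1) * size]) for b in range(bands)]

def motor_pattern(samples, bands=4):
    """Scale band energies to 0-255 vibration intensities, one per motor."""
    energies = band_energies(samples, bands)
    peak = max(energies) or 1
    return [round(255 * e / peak) for e in energies]

# A pure low-frequency tone should drive mostly the first motor:
tone = [math.sin(2 * math.pi * 2 * i / 64) for i in range(64)]
print(motor_pattern(tone))  # -> [255, 0, 0, 0]
```

That stable sound-to-skin mapping is what neuroplasticity needs: a consistent signal it can learn to decode, whichever nerve it arrives on.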
And this is where the future is going to go. It's gonna be wearables. The computers are gonna turn invisible. Everything is going to be personalized. And even if you aren't deaf or you aren't blind, you're still going to be able to use haptics to supplement and augment your senses. So the winners definitely are going to have accessibility as part of their secret sauce.
Hence the title of this presentation. But one thing I caution is: We cannot wait for other people to build this. What I learned from Global Accessibility Awareness Day, for those who don't know, it's a day that came out of a blog post I wrote on a database blog. I'm an engineer. I'm just a geek.
I wrote a blog post. Nobody ever listens to technical people. And maybe 10 people ever saw the entire blog.
And now it goes viral every year, to the tune of 200 million plus on social media. And Stevie Wonder gave a concert. I could go on and on about that.
I'm only sharing it because I want to inspire you. You're sitting here. You have vision.
You care about accessibility. Or you wouldn't be here. Don't assume that somebody else is gonna build it. I'm out there, every day, trying to make this technology work for people with disabilities. Augment our senses.
Whatever it is that you're working on, you can also make an impact, if you just take your vision. But it's not gonna happen by itself. We all need to work together. So I really want to inspire you to see that this technology needs us. Needs our expertise.
And we need to help and work together on it. And I would like to see accessibility mean something more than just disability. Because as you saw from the different senses I was describing, there is a variety, and every single person senses things differently.
So why don't we just augment our abilities and make the most out of it, using all of this technology? And that, my friends, is accessibility for everyone. Thank you. >> Hey, Joe.
Great presentation. I do have one question. Where do you see the next curb cut effect? JOE DEVON: Oh my gosh.
I've been speaking to some studios. You might have seen in this discussion about the characters, that you have different characters, and you query them, as part of an assistive technology? So kind of bringing in through that accessibility door the possibility of creating a whole brand-new experience, where you are speaking to the characters. And so...
Initially, it's like... Because you're trying to help people with the new audio description, but the reality is it's going to make it an interactive, completely new experience. So trying to work on that. I can't go into too many details. But that's what I think will be an amazing curb cut effect. But there's hundreds like that.
IVAN: So this was very inspiring. And I did not... I'm in the web development business. And I know how hard it is to actually get your clients, once you build something, to actually implement the accessibility options that currently exist.
So this was really eye opening for me. And I'm inspired. I wanted to ask: Is there any place online, like some kind of an aggregation hub or web, where you kind of could look into the companies that are currently developing and pushing the products that are in the scope of what you just explained? So that maybe I could keep track of it, or other people, and then like... Oh, I would like to use this product or implement it, or I would like to get in touch? To help develop the technology? Is there something that's not on social media or something like that? JOE DEVON: Not that I know of.
I've had these thoughts in my head since I got this new job. That was AI-related. Accessibility-related.
So I'm in the luxurious position of being paid to think about both of them. It might be a good idea to put some of this online. But I started to use the #AI4a11y hashtag on social media. So if you want to follow me on social media or just follow that hashtag, that might help a little bit.
IVAN: Alright. Awesome. Thank you. BEN: In response to that, I'll also just plug -- a few months ago, Sandy Lacey, at the Perkins School for the Blind, started a mapping of the assistive technology start-up space.
So they have... A good mapping -- more broadly, in assistive technology. But she would, I expect, be looking at kind of the disability AI start-ups as well. So something else to keep an eye on. CAMERON CUNDIFF: Yeah.
There's a couple questions on this topic. Coming through the stream as well. I guess another way of framing it is: Which companies are especially active in this space? You mentioned OpenAI and Google. What companies are especially open to collaborating with the accessibility community, in your experience? JOE DEVON: Gosh.
I'm trying to think of what I can say and what I can't say. I will say that that's something that we're working on at Formula Monks, for sure. A lot of the big tech companies are working on it.
But I can't really go into detail there. CAMERON CUNDIFF: Sure. JOE DEVON: And I've seen a few folks online. But for the most part, as I've been speaking to folks that I thought might know about it, most of them did not. And I'm only now starting to see a few folks here and there...
I do believe in the big tech companies there are people that are quiet about it. Because some of them, when you're speaking to them, they go into some pretty good detail, which obviously means they've been thinking about it for years. So... There's pockets.
But not enough. That's why I'm hoping -- we had a nice audience here, and online. And I'm hoping that people are gonna hear what I'm saying, and say: All right. I'm gonna work on it. A lot of opportunity.
CAMERON CUNDIFF: Cheers. >> Thank you so much for your presentation today. I'm curious to get your thoughts, because a lot of times, when we talk about AI and accessibility, people automatically think of, like, overlay technologies.
And, you know, AI has a kind of negative connotation. Because it sometimes gives the perception to content creators or developers that they don't necessarily have to worry about it. That some artificial machine will then kind of take whatever they make, and then access-ify it. You know? So I'm curious how you look at that balance between -- obviously some good implementations around accessibility.
JOE DEVON: We can't say that every single thing that they've done has no benefit to any aspect of it. But I think it's the wrong approach anyway. It's like... Even if they got it to work perfectly, at the end of the day, it's an assistive technology. But the money is to be made on the business side.
So these overlay companies are selling to the businesses this solution. And now if you need to use it, what winds up happening is you have to learn 10, 15, 20 different UIs, because everybody implements it a little bit differently. That's not really solving the problem with caring about accessibility from your heart. That's just like a get rich quick scheme.
And yes. I mean, we have to differentiate. But we can't throw it all out, just because there is one aspect of it that doesn't work.
We do have to call it out. >> You know the phrase... The future is here, but it's unevenly distributed.
Right? So... You look at the usability spectrum today, and you can't help noticing that we might as well be 20 or 30 years in the past... So many websites and products are still in the dark ages, just trying to do the basics. Right? So... Everything that you're showing is very optimistic and forward-looking.
And cutting edge. And... Do you have a sense that we're approaching some kind of Singularity? Where some huge players and general developments just kind of lift all of us out of this muck onto some kind of, like, higher plane? Or are we gonna be doing that in some areas, but then still struggling with these kind of stone age issues for years to come? JOE DEVON: That is a wonderful question. I am old enough to have lived through the promise of the web when it first got really popular and came out in the '90s.
And the hope of something new was just so great. And the disappointment of what we have today is so vast that... I'm an optimistic person in general. But I don't come here with blinders. It's probably gonna be just a few companies benefiting.
The reality is somehow they'll find a way to ruin it. But... One of the reasons that I'm trying to be positive and look at the positive side of it is, again, you don't know what you can achieve, if you're sitting out there, trying to make a difference. So I try to reach out to everybody, and share a bit of my story, to say: We can make a difference.
Who would have thought that people cared about accessibility to the tune that they did? Who would have thought that Satya Nadella would say that accessibility was a priority for the entire company -- release the Xbox Adaptive Controller -- just because he cared about it? But there are so many people affected by disability that we just don't realize how huge the numbers are. And that's why this took off. It was just a tiny little blog post nobody should have seen. So yeah.
I just want to inspire everybody to make the difference. But I'm not trying to say... Hey, this is all gonna turn out in the most positive way possible. Because there are people that are just gonna try to make money off of it.
And not make the best of it. So... It's a great question. And hopefully it'll work in the most positive way. But... I can't let myself be let down again after the web.
That was too painful. CAMERON CUNDIFF: Yeah. I feel kind of bad asking this next question, because the question is: Are there risks around the uptake of artificial intelligence and machine learning for accessibility and disability inclusion? JOE DEVON: Sure. There's just risks, period. I mean, AI is risky. The cat is out of the bag.
We can sit here and try and pretend that it's not. And we can try and close it down. In my opinion, anyway, what's gonna happen is if we close it down, all it will mean is the independents are not gonna be able to work on it. And then you're going to see that it's gonna be a small group of people that benefit from it. So I'm personally saying: Let's go full steam ahead. Because it's gonna happen anyway.
But let's make our priorities number one. >> Any other thoughts from the room? Yes. >> Hi. I would like to ask... From all your years of working with people who care or may not care about accessibility, what has been one of the biggest challenges in terms of getting people to act on it? Because I'm pretty sure...
This is something that people talk about. What's the call to action? And what has caused the companies you've worked with to pivot from... Okay, this is something that exists, to... this is something that I want to work on? JOE DEVON: The biggest challenge is, if you don't have a financial incentive, a lot of folks don't care.
So if they get sued -- a lot of people, a lot of companies will look at it from the perspective of... All right. Let's say I'm breaking the law.
And let's say I get sued. What's the cost of developing accessibly, versus the cost of getting sued? And then even when they do get sued, and you're working with them, that's no fun either. I never enjoyed that kind of work, because then you're chasing income and revenue with people that don't want to do it. And so I think the story needs to be changed... I have a whole other piece of this presentation, not written yet, but I've alluded to it: changing how we view accessibility.
Because we're mostly thinking of it as people with disabilities, as opposed to this -- everybody has different abilities. And then if you just take a look at people that are blind, it's x number. But then you say...
Visually impaired? It's 2.2 billion people. In America, the population over 50 is a third of the population.
And 11% of the average person's life is going to be spent with a disability. And so depending on how you phrase that and show it, you actually can make a tremendous case for building accessibly. Right? That's where I'm trying to go any time I talk about it. When I talk to customers about it, the goal is to make them understand. And you just need to give the right examples.
So, for example, for designers, you think that it's expensive to teach somebody how to design accessibly. But take something like Slack, or some kind of app that has an online/offline indicator. If you use red and green to say online/offline, it looks like gray and gray to someone who's colorblind. Right? And it costs a lot of money to fix that after the fact.
And it may be the CEO of your biggest client who is colorblind, and you left him out of the equation. You didn't spend all that much money to train your designer to know that they just need some other indicator besides color to convey information. And now you saved a lot of money, because you made accessibility part of your product. Your product is better. It's more usable.
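A minimal sketch of the idea Joe describes here: pair the color with a distinct shape and a text label, so the online/offline status still reads under colorblindness and works with screen readers. The `presenceIndicator` helper and its field names are illustrative assumptions, not from the talk or from Slack's actual implementation.

```typescript
// Illustrative only: an indicator that never relies on color alone.
type Presence = "online" | "offline";

interface Indicator {
  color: string;  // color for sighted users who perceive it
  symbol: string; // filled vs. hollow shape survives colorblindness
  label: string;  // text for screen readers (e.g. an aria-label)
}

function presenceIndicator(status: Presence): Indicator {
  return status === "online"
    ? { color: "green", symbol: "●", label: "Online" }
    : { color: "red", symbol: "○", label: "Offline" };
}
```

The design point is simply that color is redundant here: remove it, and the shape and label still convey the state.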
And the numbers are huge. So that's the direction I try to go. But...
You don't always have a podium where you can explain it to people, and often folks don't have the patience for all of that. So that's the challenging part. But I hope that we'll be able to get that message across soon. I think we need on the marketing side to market accessibility differently.
>> Yeah. My question is... How can AI change our daily lives? Ordinary people's lives. The internet, we know, around 2000, began to change our daily lives.
But how can AI change our daily lives? Like clean out my kitchen, do laundry for me, while I just stay home. So how can AI change our daily lives? JOE DEVON: It's already changed my daily life.
Because I'll use, for example, ChatGPT to -- that would be the first place that I go for a search, instead of Google. But I'm very careful. Because it hallucinates. It lies to you sometimes. So if it's something important, you always have to double check it and do a search.
But I do that anyway. Even if I'm doing a Google search. They have answers in there -- I don't assume that's right. I always go to the original source when I care about something, which I think most people don't. That has changed my daily life.
When I'm doing presentations, some of the images I created were made with Midjourney, which is a generative AI tool where you just type in text describing what you want the image to look like, and it will spit out the image for you. That's just a couple of tiny little things. But it's going to be so much more. And it will use the technology that was in this presentation. KIM: Hi.
Thank you so much for your presentation. It's really awesome. I actually am a person with a disability. I have severe vision loss.
I have very little hearing left. I'm wearing $7,000 worth of hearing aids. Carrying around a $2,000 Braille display. I have the latest technology.
Everyone who designs for accessibility -- for me, accessibility is everything. Also, accessibility for me is invisible. Right? So when anyone looks at me, they would have no idea that I am almost blind and almost deaf. And that is true of many, many conditions. So, for example, the number one leading cause of vision loss in the United States is macular degeneration.
People with macular degeneration lose the center of their vision. Which means they can see their periphery and walk around, but they can't read. You will rarely ever see a person with macular degeneration who needs a white cane.
I, on the other hand, have no peripheral vision. I could read, until recently. Now I cannot read. So vision loss, hearing loss, all these disabilities -- they are progressive. And many of them are so invisible, we don't realize how many people around us are actually living and working and functioning with disabilities.
It is absolutely incredible to watch where all of this is going. And the fact that we keep improving things enough that I am still able to live, work, and play independently. And so...
To me, as I said, accessibility is everything. Thank you. JOE DEVON: Thank you so much for sharing that. I will say that for me, I wrote that blog post because my dad was losing his vision and his hearing, as he reached his 80s. And it was really painful to see. And he could not do his online banking.
And when he couldn't do his online banking, that got me upset. And I thought... I didn't know all that much about accessibility at the time.
I was working at American Idol the year that we had the first ever blind finalist there. And I met him.
I met his family. Such wonderful people. And then I saw a screen reader demo, and I thought...
Am I failing Scott MacIntyre, because I didn't know about accessibility? That's when I started to learn about it. And then I wrote this blog post. And it went viral. And you see everybody has a story.
I'll also add an anecdote. I met someone who would always look at you while you were speaking to him. And he said that he was blind. And I think a lot of people didn't believe him, because of the way that he looked at you -- it really felt like he could see everything you were saying.
So he had this anecdote that a friend of his said: I know that you're blind. But I think sometimes you peek. Pretty funny. KIM: My blind spot's bigger than yours.
JOE DEVON: There you go.