Monitoring Drivers With Eye-Tracking Tech w/ Adam Gross | Reimagine Mobility

Hi and welcome, everyone, to the latest edition of the Reimagine Mobility podcast series. I'm here with Adam Gross from HarmonEyes. Adam, thank you for joining our show.

It's an exciting time in the automotive space. And at the same time, we need a lot of help in a lot of different areas to make the new technology safer and better as we reimagine mobility going into the future. So please explain a little bit to our listeners and viewers where you're coming from, what your background is, and then explain to us: who is HarmonEyes, and what in the world are you guys doing to help us reimagine mobility?

Yeah. First of all, nice to see you, Stephan, and thanks for having me. So personally, my story is that I'm kind of an entrepreneur at heart, a number of startups; I'm kind of a veteran at this point. My last company I sold in 2012, and as I was coming off of that integration, I met my current partner, Dr. Melissa Hunfalvay.

We met on the tennis court. She was a former professional tennis player from Australia. We had a good match and wound up talking afterwards, and she explained how she was looking for a business partner to help her with her work in eye tracking technology. Melissa was an eye tracking scientist at that point; now, 12 years later, she's one of the leading eye tracking scientists in the world. And it was an interesting concept, right? Basically, we founded our parent company, RightEye.

It was all based on the idea that there are decades of research correlating someone's eye movement behavior to their vision, their health, their performance, and their attention, and the tool, the sensor, used to measure that eye movement behavior is an eye tracking sensor. Now, back in 2012, eye tracking sensors were big, bulky, and expensive, usually camera based. And, you know, thanks to Moore's Law, we're now down to micro cameras and a chipset that can fit into really any device.

And so eye tracking sensors have proliferated. I'd say they're not ubiquitous yet, but they're becoming ubiquitous, and you're starting to find them in the cabins of vehicles and the cockpits of aircraft. You're finding them in augmented and virtual reality headsets. You're now starting to see, coming out of Asia, glasses-free 3D monitors that are powered by eye tracking technology, and also tablets and phones. Back then, 12 years ago when we started RightEye, we didn't have this environment where the sensors were everywhere, so we were forced to build our own product. And we wound up developing a medical device that did objective vision testing.

We sold a lot of it to eye care professionals, neurologists, neuro rehab, a lot of elite performance customers like professional sports, MLB, the NBA, and also elite military, Special Operations Command, etc. And so over the last decade, what we've been able to do is procure a really large data set, dare I say the largest data set of eye tracking data in the world. At the same time, in order to power our medical device business, we had to develop a really robust eye tracking data analytics platform that took in the data from the sensor and, at the end of the eighth step, kicked out machine learning and AI models that delivered outputs that were meaningful for a medical device. The meaningful outputs were objective information to do things like diagnose, and also measure someone's vision skills if it's a performance-based use case, for example. About a couple of years ago, as the proliferation of these sensors started to grow, we started getting phone calls from some of the big tier one tech firms who were developing the headsets, and from driver monitoring companies who were developing in-cabin systems to monitor drivers and also sense different aspects. So we took a step back and said, okay, what are the challenges here? And then we're going to solve those challenges.

We didn't want to build another solution searching for a problem. We wanted to identify the problems and build to that. And what we realized is that while the sensors were growing, and some basic utilities were already being deployed, such as attention and focus and things like that, where we shine is on the eye tracking AI and modeling side. Because of the data sets we had collected, we were able to train models on that data and develop predictive models around other user statuses: things like fatigue, cognitive load, stress, anxiety.

And also targeting: where are you looking? Situational awareness, a lot of performance-based outcomes. So we started developing these AI models, and we created this company called HarmonEyes to deliver them.

We're rolling it out right now. We deliver it in a really easy-to-deploy SDK, so you can deploy it at the edge, which is locally in the cabin of a vehicle or on a device. It doesn't require cloud technology.

It's all under the hood, if you will, no pun intended. So that was the first problem we solved. The second problem we quickly realized when a large tier one came to us and said, well, you know what? We have a five-year R&D pipeline, and in every use case and environment we're using a slightly different eye tracking sensor, right?

The sampling rate could be different. The precision, the accuracy, they all can be different. So even in their R&D world, if they want to deploy eye tracking functionality, every time they use a different sensor, or even if an existing sensor has a firmware update, for example, the specifications of that eye tracker change, slightly or significantly. And so the eye tracking models have to constantly be revalidated and re-verified. That is just a blocker for the entire industry, whether it's the driver monitoring system companies, the other application developers, or the hardware manufacturers themselves.

So about a year ago, we invented a solution called the ACE, an automated conversion engine. Basically, what it does is normalize the signal data from any eye tracking sensor to a single standard and format, so that all anyone has to do is build to that one standard. It doesn't matter when there are updates or when there's new hardware; it can just kick in and operate. So it's really creating interoperability for eye tracking for the first time ever, and we really think that's going to help the industry. These are the two solutions, or two areas rather, that we're targeting.
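To make the idea concrete, here is a minimal sketch of what a sensor-normalization layer in the spirit of the ACE might look like: unify units, then resample every vendor's stream onto one fixed clock. The class, the vendor record layout, and the 120 Hz target are all illustrative assumptions; HarmonEyes has not published the actual ACE interface.

```python
# Hypothetical sketch of an "automated conversion engine" style normalizer.
# The vendor record layout, field names, and 120 Hz target are illustrative
# assumptions, not the real ACE interface.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # seconds since stream start
    x: float  # horizontal gaze position, normalized 0..1
    y: float  # vertical gaze position, normalized 0..1

TARGET_HZ = 120  # assumed common rate that downstream models are built against

def normalize_stream(vendor_samples, width_px, height_px):
    """Convert one vendor's pixel-based samples into the standard format,
    then resample onto a fixed TARGET_HZ clock by linear interpolation."""
    # Step 1: unify units (microsecond timestamps and pixels -> seconds and 0..1).
    std = [GazeSample(s["timestamp_us"] * 1e-6,
                      s["gaze_x_px"] / width_px,
                      s["gaze_y_px"] / height_px)
           for s in vendor_samples]
    # Step 2: resample so downstream models never see the vendor's native rate.
    out, step, i = [], 1.0 / TARGET_HZ, 0
    t = std[0].t
    while t <= std[-1].t:
        while std[i + 1].t < t:  # advance to the pair of samples bracketing t
            i += 1
        a, b = std[i], std[i + 1]
        w = (t - a.t) / (b.t - a.t) if b.t > a.t else 0.0
        out.append(GazeSample(t, a.x + w * (b.x - a.x), a.y + w * (b.y - a.y)))
        t += step
    return out
```

The point of the design is that a firmware update changing a sensor's rate or units only touches the conversion step, while everything built against the standard format keeps working.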

Very interesting. You really created not only a model to identify the different scenarios of how somebody's eyes look and what information you can get out of it, but also made it really agnostic to whatever hardware or sensor it's working with. So, one thing before I jump back into mobility, because basketball is dear to my heart: what would an NBA player need to know about his eye tracking? In baseball, again, the ball is coming, you've got to hit it. But in basketball, what are we talking about?

Are we talking fatigue or some of these other things you talked about? What are they trying to improve?

Yeah. So for sports, it's like the military.

You're performing different tasks. If I'm a point guard on the basketball court, I've got a different skill set that I need as opposed to someone who's shooting a free throw. If you're shooting a free throw, you're going to follow that ball.

You need to fixate, right? And the science shows that you fixate on the front of the rim. So if we can identify that you fixate on the front of the rim, your percentage is going to go up. If I'm a point guard and I'm driving down the court, my scanning ability and recognition need to be really top notch, right? I need to be scanning the court, looking for players who are open, or who are going to be open as they're moving, where there are no defenders.

And this translates even into the military, right, where our customers are starting to assess vision skills and steer certain warfighters in different directions. If I have strong fixation stability skills for long periods of time, I may have come in wanting to drive the Humvee, but I'm going to be better suited to be a sniper, right? And vice versa: if I've got really good scanning and recognition ability and situational awareness, they may put me driving the Humvee, searching for IEDs on the side of the road. So you can see how this translates to performance driving.

And also everyday driving. These are skills that are absolutely needed, not just to mitigate risk but to improve performance. And I want to mention one other thing. Start to think about maybe not the performance part we just talked about, but some of the really critical aspects, like overload and fatigue.

Right. It's not enough just to identify that you're fatigued; of course you are, you're already experiencing it, right? You're probably already suffering from the symptoms of fatigue, which is creating risk for you and for others on the road. So one of the really amazing things our models do is predict it ahead of time. There's a constant stream on you as a driver: low, medium, and high fatigue.

And what we allow you to do, or the platform behind it, the driver monitoring system, is we actually give you a time prediction: if you continue on this track, then in, say, 65 seconds, you're going to go from medium to high fatigue. What that allows the platform to do is preprogram certain interventions or corrective actions so that you don't get to that point. We've all kind of experienced, if you have a newer model vehicle and you've got the sensor turned on, that if you're veering into the next lane, you get the haptic feedback.

You know, I turn mine off because it happens at the wrong time. Or it says, get a cup of coffee, whatever it is. That's a responsive sort of feedback: by then I'm already in the middle of that lane, so the damage may already have been done. The beautiful thing about what we do is that we can actually predict it ahead of time, so we can prevent that adverse effect from happening.
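As a sketch of how a driver monitoring platform might consume such a predictive feed, assume the SDK emits the current state plus a time-to-transition estimate, and the platform stages a corrective action before the transition occurs. The event fields and callback below are hypothetical, not the actual HarmonEyes SDK.

```python
# Hypothetical consumer of a predictive fatigue feed; the field names and
# callback interface are assumptions, not the actual HarmonEyes SDK.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class FatigueEvent:
    state: str                              # "low" | "medium" | "high"
    predicted_state: Optional[str]          # state the model expects next
    seconds_to_transition: Optional[float]  # e.g. 65.0 means "high in ~65 s"

def on_fatigue_event(evt: FatigueEvent, intervene: Callable[[str], None]) -> None:
    """Act on the prediction rather than the symptom: stage the corrective
    action while the driver is still in the lower state."""
    if evt.state == "high":
        intervene("pull_over_prompt")  # reactive fallback; ideally never reached
    elif (evt.predicted_state == "high"
          and evt.seconds_to_transition is not None
          and evt.seconds_to_transition <= 60):
        intervene("haptic_plus_rest_suggestion")  # preemptive intervention

# A prediction of "high fatigue in ~45 s" fires the preemptive action:
on_fatigue_event(FatigueEvent("medium", "high", 45.0), print)
```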

You talk about predicting it ahead of time, which to me could solve a lot of challenges we see today in Level 2+ and some Level 3 type applications. I drive a vehicle with Level 2+, and sometimes this eye tracking is great, at least for what it does, right? Keep your eyes on the road or the system turns off. And sometimes it tells me, hey, pay attention to the road, and I'm like, I am. I'm laser focused. I'm staring at the camera right now; I don't know what you're talking about. And then I wonder: is it my sunglasses?

Is the sun shining in my face, or is it something different? I don't know. But back to the prediction ability. Adam, tell me: are we talking, I can tell you that in two minutes you're going to be too tired? Or is there a certain level where maybe an OEM wants to create an application that says, hey, we're concerned? What's the time horizon you can predict? Is it an hour ahead of time? Is it ten minutes? Shed a little bit of light on this very fascinating technology you have.

Yeah. So let's just take overload as an example. Our current overload models are predicting it anywhere from about 30 to 60 seconds out. And like I said, the way the SDK output works is that there's a constant feed, a constant output: Stephan is driving the vehicle, and it's low, medium, or high. So you could be in low mode forever.

Nothing's going on, you're great, right? You may even be at medium, but that's okay too. And there may be an opportunity for you to create a custom setting, or for the manufacturer to create a custom setting; it depends on where things go.

So you could have some control over it. But let's just say, for example, you're really concerned when you go from medium load to overload, okay? Because that's when bad things happen. We can predict 30 to 60 seconds out that you're going to experience overload if you continue on this track. And what that enables is the preprogramming of certain corrective actions, whether it's haptic feedback or a physical reaction from the vehicle.
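One way such a custom setting could be exposed, purely as an illustration: a small configurable table mapping predicted state transitions to the corrective action the vehicle should stage. The names and structure below are assumptions, not an actual OEM or HarmonEyes interface.

```python
# Illustrative only: a configurable map from predicted state transitions to
# corrective actions, the kind of custom setting an OEM might expose.
INTERVENTIONS = {
    ("low", "medium"):  None,               # tolerated: no action
    ("medium", "high"): "haptic_feedback",  # the transition we care most about
    ("high", "high"):   "audible_alert",    # sustained overload: escalate
}

def corrective_action(current: str, predicted: str):
    """Look up the preprogrammed response for a predicted transition."""
    return INTERVENTIONS.get((current, predicted))

assert corrective_action("medium", "high") == "haptic_feedback"
```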

An audible noise, something like that?

Yeah, exactly. And you know, I know we're talking about vehicles now, but I look at mobility in other environments as well, like aviation, right? Whether I'm flying a Boeing 757 or an F-35, I also want to know before I'm going to be overloaded, because now my risk profile is different.

Yeah, it's interesting. The industry has been challenged, for the 20-plus years that I've at least been somewhat involved, with driver monitoring. Back then we didn't necessarily call it that, but oftentimes it was drug impairment, alcohol impairment, you know, how can you not allow somebody to drive? Well, a breathalyzer test, very elaborate, trying to integrate it into a car, etc.

Are we talking, with your technology, that you can tell as soon as somebody sits in that he or she has consumed too much alcohol, has potentially a drug overdose, I don't know, is potentially close to having a heart attack even, or passing out? Are we literally talking about that level of capability?

From our perspective, the data does all the talking. That's the ground truth, right? I mentioned earlier that we have collected this massive repository of eye movement data. Now, we've got a large data set of folks who have cognitive load and fatigue.

Right. We understand situational awareness and all that because of the performance data that we've got. We've got a number of data sets around health- and vision-related issues. We haven't really gone down the path of using that to create a clinical diagnostic tool, but we are talking to potential partners who would do that and partner with us. We create the models, and they'd go out and create those diagnostic products.

That being said, screening, right, and having those biomarkers, is really important.

And with some of the data sets that we've got, we know the oculomotor function, which is the micro eye movements, the signature of someone who is having a stroke. Or a migraine. We actually have a large data set around movement disorders, things like Parkinson's disease. We've partnered with MIT, where we've collected data all around Lyme disease, and with the University of Miami Medical School for mild cognitive impairment.

So we do have a whole other clinical set of data where we can deliver predictive models for folks who start to look like someone who is having an active stroke. So hopefully that addresses your question. Now, are we ready to deliver it? We're ready to train the models and develop them. And really, at this point, because of our infrastructure, we can do it pretty much faster than anyone, because we've got the data and we've got the data analytics platform, the AI modeling platform. So we could deliver a lot of those models in just months.

We just need the right partnerships to prioritize the interest.

Then let me quickly go back before I look forward again. I mentioned before, in my example, that certainly when I have sunglasses on, it's a little more challenging, I would assume, but I want to hear it from you. Does it matter if somebody has sunglasses or not, at least to your data and your algorithm? It probably depends on how good the camera is at getting to the eye.

Right, to be able to see. Would that be a fair statement?

That is completely accurate. What we do is take in the signal data, right? The sensor has to work. We're assuming that in a vehicle those sensors have been tested for years, up and down, in various environments, with the driver looking left and right. We've seen environments where there's a sensor right in front on the dashboard, front facing, on the left side, on the right side.

We're also seeing multimodal sensors, right, where they're also looking at facial patterns, head movement, things like that. We focus on eye movement. So as long as we get good quality signal data, we can run our models, and it does not matter. Even just speaking from experience with our medical device, which is a remote tablet device, we're able to track, and most eye tracking sensors are able to track, through sunglasses or corrective lenses, everything.

Interesting.

So then let's get back to the true meaning of our podcast, Reimagine Mobility. What can you reimagine with this technology going forward? It sounds like you've pretty much covered everything that, from an industry perspective, is one of the challenges: eye tracking, preventing accidents, or at least giving even more advance warning that an autonomous vehicle, a Level 2, Level 2+, or Level 3, may need you to take over and disable the autonomous features. What do you see next, Adam? Where is this thing going next?

Well, for us, next

it's securing those relationships that allow us to deploy these models that we've developed, and we're just starting to do that. We want to see our models out there in the world saving lives, helping improve performance, and mitigating risk.

That's really what we're about. So the next year or two for us is going to be about that, and we see it in two ways, right? We can work with vendors who are developing these driver monitoring systems. They're building cohesive systems, and eye tracking is important, but they may not go as deep as we do on the eye tracking. So we can license our models to those folks.

And so what I see is really a three-step approach. The first step is the tracking of the user statuses: things like fatigue and cognitive load, situational awareness, targeting, distraction. We call them the basics; I know they're not, but that's what we're really strong at. The second part is along the lines of what you talked about: if there are clinical issues happening to a driver, being able to anticipate that and take appropriate interventions in order to prevent an accident. And we see that in all sorts of mobility.

We also see it on the training side, right? So, dealing with big heavy equipment manufacturers and machinery, and the operators of that. Maybe you don't want to take that earthmoving machine onto the quarry for training.

You want to do it in an augmented or extended reality environment, so that if they have an accident, it's not ditching a $2 million machine and risking a lot. So we see it in the training environment as well. And then I'd say the third part is more of the tool, which is what I talked about early on: our ACE, our automated conversion engine, which is really about finally creating universal compatibility for eye tracking sensors. So OEMs don't have to worry about firmware updates.

And are my eye tracking models going to be valid? You know, a firmware update lands on a Wednesday at midnight; when the driver gets in the car Thursday morning, are they going to be out of whack? If everything just works, it's interoperable, and all the models are developed to one standard and format, then we solve a huge problem for all types of companies.

Right. Great. So then let me add a curiosity. We've talked to all sorts of different people on this podcast, obviously.

And we had one guest who talked about crash test dummies, right, and how it used to be one fake body that you'd use for any shape, size, or frame of an individual, young to old, muscular or skinny, whatever it is. Is there a difference in eye tracking, then, in the information you have, where you have to train your eye model for different nationalities? Our eyes certainly look different: the eyes of people originally from Asia, different skin colors, female, male, older people, younger people. I know we're maybe going a little more into the medical field here, but it's very interesting.

How is that all incorporated?

That's a really good point. And you're right, we didn't go that deep on our data sets, but it's really important to understand what the normal population looks like.

Our normative data set, for those who are interested, is over 250,000 people, right? And it crosses over different ethnicities; it crosses over handedness, left- and right-handedness, eye dominance, age. It really crosses over everything.

And so we like to say we understand what normal is across all different types of people, across the entire population. If you understand what normal is, it allows you to train your models better. When it comes to the different eyes there are around the world, that really comes down to the hardware sensors, and we're assuming that that trackability, if you will, has been tested

in the lab before we get the signal.

Sure, sure. So maybe two more questions. Second to last: what really enabled this, for us here, to reimagine mobility? It sounds like you're at the onset of really changing mobility, at least as it relates to driver monitoring and safety, and again, not just in the vehicle. I'm thinking heavy-duty trucks, right, where driver fatigue is always a concern because you're driving a big vehicle that can potentially create a lot of damage. But what's the enabling technology? Is it the advancement of deep neural networks, of deep learning, to really train on the data? Because I assume with the data you can collect, it's a matter of time.

Again, 250,000-plus people. But is it the deep neural networks that allow us to train much more efficiently, and not just a pure, I would say, machine learning piece? Is it the sensors? Is it smaller, more powerful compute platforms or processors? Shed a little bit of light on that.

Yeah. So, another really great, relevant question.

We're at the point now where, for most of our models, we don't need more than a 30 or 60 Hz sampling rate. So it's pretty low; we can generalize all the way down. I think what's really critical, and what we've heard from some really large technology companies, is this: over the last decade, we built it to power our medical device business, and we almost didn't realize what we were doing, but we built really the only eight-step eye tracking data analytics platform. It's an eight-step pipeline.

Imagine the first step being just capturing the signal, right? You've just got to get the signal. But once you get eye tracking data, and for anyone out there who has dealt with eye tracking data, maybe not most of you, it's really, really difficult data.

It's x, y, z coordinate data, oftentimes collected at a rate of 100 to 200 times per second, per eye. So you're getting a lot of data.

There's a lot of noise in that data, because eye tracking in the wild, especially while you're driving, as you said: you've got sunlight coming in through the sunroof, you've got people wearing glasses, all these different environments, and they're moving. So it's important to be able to filter and classify that data. We have to take that raw data and understand some of the foundational metrics, like what is a fixation, what is a saccadic movement, what is a smooth pursuit eye movement, just from the raw data that we're getting. That's the first phase; we call it signal capture and analysis.
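For a flavor of what that classification step involves, here is a minimal sketch of the classic velocity-threshold (I-VT) method for splitting raw gaze samples into fixations and saccades. This is a standard textbook technique, not HarmonEyes' proprietary pipeline, and the 30 deg/s threshold is a commonly used but assumed value.

```python
# Minimal I-VT (velocity-threshold) gaze classifier: a textbook technique for
# labeling samples, not HarmonEyes' proprietary eight-step pipeline.
import math

SACCADE_THRESHOLD_DEG_S = 30.0  # commonly used, but an assumed value

def classify_ivt(samples):
    """samples: list of (t_seconds, x_deg, y_deg) gaze points in degrees of
    visual angle. Returns one label per sample: 'fixation' or 'saccade'."""
    labels = ["fixation"]  # first sample has no velocity; default to fixation
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0
        labels.append("saccade" if velocity > SACCADE_THRESHOLD_DEG_S else "fixation")
    return labels

# Example at 100 Hz: a slow drift stays a fixation, a fast jump is a saccade.
gaze = [(0.00, 10.0, 5.0), (0.01, 10.1, 5.0), (0.02, 14.0, 5.0)]
print(classify_ivt(gaze))  # ['fixation', 'fixation', 'saccade']
```

In-the-wild noise is exactly why this naive version is not enough: sunlight, glasses, and vibration corrupt the velocity estimate, so real systems filter the signal first and handle smooth pursuit separately.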

The second phase is feature generation. This is where we've developed 7,000 algorithms, to start understanding, okay, what do those mean, right? What is that eye tracking feature? Where are the fixations and the saccadic eye movements located? And starting to make sense of it.
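As a toy illustration of that feature-generation phase, here is how a few classic eye movement features, the kind a downstream fatigue or load model might consume, could be computed from labeled samples like those in the previous sketch. The specific features and names are illustrative assumptions, not any of the 7,000 proprietary algorithms.

```python
# Toy feature generation over labeled gaze samples; the feature set is an
# illustrative assumption, not one of the 7,000 proprietary algorithms.
def gaze_features(samples, labels):
    """samples: list of (t_seconds, x_deg, y_deg); labels: 'fixation' or
    'saccade' per sample. Returns a small feature vector for a downstream model."""
    n = len(samples)
    saccades = labels.count("saccade")
    duration = samples[-1][0] - samples[0][0]
    # Fixation stability: mean squared distance of fixation samples from their
    # centroid (lower means a steadier gaze).
    fix = [(x, y) for (_, x, y), lab in zip(samples, labels) if lab == "fixation"]
    cx = sum(x for x, _ in fix) / len(fix)
    cy = sum(y for _, y in fix) / len(fix)
    dispersion = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in fix) / len(fix)
    return {
        "saccade_rate_hz": saccades / duration if duration > 0 else 0.0,
        "fixation_ratio": (n - saccades) / n,
        "fixation_dispersion_deg2": dispersion,
    }

samples = [(0.00, 10.0, 5.0), (0.01, 10.1, 5.0), (0.02, 14.0, 5.0)]
print(gaze_features(samples, ["fixation", "fixation", "saccade"]))
```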

And then the third part of the pipeline, the last three steps, is deploying machine learning techniques, which is what we've been doing for the majority of the last ten years. And then, with the advent of AI modeling and deep learning, we've deployed those tools now, using a lot of pattern recognition. So you go all the way from capturing the signal, in one single platform, mind you, all the way to machine learning, deep learning, and AI model outputs in real time, delivered to an SDK that's at the edge. So it's very low compute and very low power consumption, and it's easy to deploy and it's seamless.

And we believe we've got the answer to help with a lot of these solutions.

Yes, fascinating stuff, I have to say. Adam, last question, nothing to do with eye tracking. Well, maybe it does a little bit. What's the next car you're going to buy, and why?

So this is timely.

I'm ready. And I've kind of migrated; my kids are a little bit older now, so I've migrated from the SUV to the sedan. I've got an Audi sedan, and I really love my Audi sedan. I just love Audi in general.

My son really wants me to get the Rivian, the newest Rivian SUV. I'm not quite sure I'm ready to go electric yet. I probably would have gone electric last time, but the Tesla doesn't catch my eye as much. I know they're great, and I've driven my friends', and they're amazing vehicles. But it's just the look and feel of it, for me. Yeah.

So I would probably say I won't be so bold as to do what my son wants me to do and go with the Rivian. I'll probably just get another sedan.

Excellent.

Don't be disappointed in me.

No, nobody's disappointed. It's just a very interesting last question, I have to say, because we get all sorts of answers, and yours again makes a lot of sense.

Very reasonable logic here. Adam, thank you so much. It's been truly fascinating for me to learn about this and to understand how far this technology already is, and what you can do by looking at somebody's eyes alone. Thank you so much for your insights.
