Christian Meyer's speech at RITZ on the 14.10.2021 | LAKE FUSION Technologies LFT

If you take today's so-called ADAS systems as a basis, the assistance systems that you can already enjoy in some of your vehicles, then we are talking mostly about level L1 or L2: I get assistance such as adaptive cruise control and lane keeping, which is already very convenient today, but it gives me a certain deceptive sense of security. The systems often perform well, but when we use them for a longer period, we quickly realize that at one point or another they do not offer the necessary reliability. I experienced that recently: the vehicle in front of me turned off onto the turning lane, here on the federal highway B31. My radar system recognized, "oh, there is the vehicle, it is braking," but did not recognize that it had already left my lane, and accordingly my car braked sharply. There are studies on this, though a little older, from 2018: Waymo says it drives 11,000 miles without the driver having to intervene.

General Motors isn't quite as good but comes close. But if we go back to the statistics and assume an average speed of 37 mph, in Germany that would be around 60 km/h, and with a drive by the lake we are even a little slower, then we still only have a reliability on the order of 10⁻² to 10⁻³. If we take the safety requirements from aviation, which must of course also apply to autonomous driving, according to ISO 26262, then I need a reliability of 10⁻⁹ to 10⁻¹⁰. For all those who cannot picture the figures clearly, an example: compare ten large, thick drops of water with a full bathtub.
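
To put these orders of magnitude side by side, here is a back-of-the-envelope sketch in Python; the drop and bathtub volumes are assumptions purely for illustration:

```python
# Back-of-the-envelope comparison of the failure rates mentioned above
# (orders of magnitude only; the rates are the rounded ones from the talk).

adas_failure_rate = 1e-2     # today's systems: roughly 10^-2 to 10^-3
target_failure_rate = 1e-9   # ISO 26262 / aviation territory: 10^-9 to 10^-10

gap = adas_failure_rate / target_failure_rate
print(f"Required improvement: a factor of about {gap:.0e}")  # ~1e+07

# The bathtub analogy: ten large drops of water versus a full tub
# (assumed: ~0.05 ml per drop, ~150 l tub -- the same ballpark of smallness).
drop_ml, tub_ml = 0.05, 150_000.0
print(f"Ten drops relative to a tub: {10 * drop_ml / tub_ml:.0e}")  # ~3e-06
```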

That is the dimension we have to meet in terms of safeguarding, in terms of the safety of the entire system. And I have to say, safety starts with environment detection, which is what Lake Fusion is committed to and puts on the market, but it continues through the trajectory calculations all the way to the actuators: software and hardware together must fulfill these requirements. According to this statistical consideration, that means in particular ISO 26262, which covers functional safety, but also SOTIF (Safety of the Intended Functionality), ISO 21448, which ultimately safeguards the performance. In combination they are a must in order to reach these failure rates and the level of safety that I want to have at the end of the day, in the automotive sector with autonomous driving just as it exists in aviation today. But I can tell you quite clearly: simply meeting this requirement with one sensor, one computer and simple software does not work, neither from a physical point of view nor according to the state of the art. The required safety cannot be achieved that way.

That means there are several places where you have to start so that at the end of the day you are able to bring such systems into a fail-safe state, physically but also in terms of feasibility, and the first is the system architecture. The system architecture describes how I can build a system from independent, non-correlating branches that are also, as the experts say, dissimilar, i.e. based on completely different technologies, integrate them into the vehicle, and operate them independently within the overall system for as long as possible.

Beyond that, there are the so-called safety levels, called ASIL. The standard grades them from A to D, where D corresponds to the highest level of safety that I want to achieve at the end of the day. Within the architecture I can also assign these safety levels differently and build them up differently using a decomposition process. And that is the big challenge in an overall system: how do I manage to ensure the highest safety level, this ASIL D, i.e. this 10⁻⁹ failure rate, although I actually start at the front with a sensor that cannot achieve this on its own? The first thing I have to think about is a multimodal approach. Multimodal here means I don't rely on just one sensor, but use different sensor technologies. Let's take the camera as an example. Here's a nice scene: on the left you can see a timber truck driving through Friedrichshafen, that's actually a stretch in Friedrichshafen, with an overhanging load, timber from the forest, and it's driving around the curve while we drive behind it.

The camera faces a few, but big, challenges here. On the one hand I have a very poor contrast image, and on the other hand I get merging contours: especially when this truck drives around the corner, I will have a big problem still detecting this overhanging load in the camera image. The second technology, already very well established in cars today, is the radar system. Radar systems are actually very easy to handle in terms of higher approval levels, so I can reach ASIL B very well with a radar, but a radar has physical limitations.

For example, when I look at this overhanging load, I find a material with a very poor radar cross-section, i.e. very poor reflection behavior for radar, which means the overhanging load is ultimately not picked up by the radar system. So I have to know at this point what radar technology can physically reflect off and what it cannot. And most radar systems are based on the Doppler effect, which means they measure, for every vehicle driving in front of them, how the distance changes: coming closer, moving away, I can measure that very well with radar. But if an object moves sideways towards me and does not change much in its distance to me, radar has a big problem detecting it.
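
Why the crossing object is so hard for a Doppler radar can be shown in a few lines. A minimal sketch in Python, with invented positions and speeds, projecting an object's velocity onto the line of sight, the only component Doppler measures directly:

```python
import math

def radial_speed(vx: float, vy: float, x: float, y: float) -> float:
    """Component of the object's velocity along the line of sight to the
    radar at the origin -- the only part a Doppler radar sees directly."""
    r = math.hypot(x, y)
    return (vx * x + vy * y) / r

# Vehicle ahead, braking: motion along the line of sight -> strong signal.
print(radial_speed(vx=-5.0, vy=0.0, x=30.0, y=0.0))  # -5.0 m/s, clearly seen

# Object crossing sideways at the same range: almost no radial component.
print(radial_speed(vx=0.0, vy=5.0, x=30.0, y=0.5))   # ~0.08 m/s, nearly invisible
```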

Let's move on to the last technology, which is one of the core competencies of Lake Fusion: laser-based sensor technology, which emits an active light pulse and evaluates the energy recovered from the reflected light, i.e. it scans actively. One big advantage is immediately obvious: whether the sun is shining, whether it is day or night, is completely irrelevant for an actively scanning system; I record it regardless. But of course there are also artifacts, which means weather conditions play a big role, the reflections of, let's say, adverse weather conditions like rain but also snow. Malicious tongues say that a lidar system is the best snowflake counter available on the market, so there are certain things that still have to be solved. If you take these elements together and consider what I said before, that there has to be an architecture where I run these chains independently for as long as possible and then, at the end of the day, balance out the strengths and weaknesses of the individual technologies, then I'm in a position, and this is the nice thing, that the standard actually provides for exactly this. In the automotive sector we are a bit better off than in aviation: decomposition, i.e. the merging of the two branches you see there into a greater level of safety, is actually easier and more attractive to do here than in aviation, where it can only be done once and then it's over.

So, for example, if I take safety level B on the upper branch and B on the lower branch, after a clever combination using these parameters I can actually end up not just one level higher but at the level above that. That works. Whereas in aviation there is just this one step. I could also combine C with C and again arrive at D. So I have many more possibilities to finally reach this high level of safety, so that at the end of the day I don't leave my trajectory and have no collision with any objects. Here you can see the videos superimposed on each other, with the lidar system running at the top left and the artificial intelligence on the camera image at the top.
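
The combination rules alluded to here can be written down compactly. A lookup sketch in Python based on the decomposition scheme table of ISO 26262-9; the independence requirements the standard places on the two branches are not modeled:

```python
# Sketch of the ASIL decomposition schemes from ISO 26262-9 (for intuition
# only; the standard additionally demands independence between branches).

ORDER = {"QM": 0, "A": 1, "B": 2, "C": 3, "D": 4}
SCHEMES = {  # target ASIL -> allowed (stronger branch, weaker branch) pairs
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def can_reach(target: str, branch1: str, branch2: str) -> bool:
    """True if two sufficiently independent branches with the given ASILs
    can cover the target ASIL under one of the decomposition schemes."""
    hi, lo = sorted((branch1, branch2), key=ORDER.get, reverse=True)
    return any(ORDER[hi] >= ORDER[s_hi] and ORDER[lo] >= ORDER[s_lo]
               for s_hi, s_lo in SCHEMES[target])

print(can_reach("D", "B", "B"))  # True -- the B + B case from the talk
print(can_reach("D", "C", "C"))  # True -- C + C covers the C + A scheme
print(can_reach("D", "B", "A"))  # False -- not enough for ASIL D
```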

Below is, so to speak, the combined picture with the exclusion mechanism; here channels 1 and 2 are simply weighted 50/50. Both sensors are equal, and on top of that I get an exclusion process where whatever one sensor sees but the other does not, I fade out; only if both say yes am I sure that the road is really clear there, or that the object is really there. I am curious what such an AI would say if it saw a road user like this one in traffic. This is a nice excerpt from a very interesting safety lecture I heard some time ago. It is undoubtedly Cologne, I think, where you do see this at certain times of the day. But such a road user simply does not fit into the scheme of a trained AI system.
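
The 50/50 exclusion mechanism amounts to a logical AND over the two channels. A minimal sketch in Python with two invented occupancy grids:

```python
import numpy as np

# Two occupancy grids over the same area: 1 = object detected, 0 = seen free.
lidar_grid  = np.array([[0, 1, 0],
                        [0, 1, 1],
                        [0, 0, 0]])
camera_grid = np.array([[0, 1, 0],
                        [0, 0, 1],
                        [1, 0, 0]])

# Both channels weighted equally: a claim only survives if both agree.
confirmed_object = lidar_grid & camera_grid                # both say "object"
confirmed_free   = (1 - lidar_grid) & (1 - camera_grid)    # both say "free"
undecided        = 1 - confirmed_object - confirmed_free   # channels disagree

print(confirmed_object)
print(undecided)  # cells where one sensor sees something the other does not
```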

That means AI itself, artificial intelligence itself, has big challenges: it simply cannot provide the assurance that I can validate at the end of the day, that I can prove what the AI has ultimately learned. That's my first issue according to the standard. And the second: I have to prove that my code is valid, which is relatively difficult with AI. The code is there, but the code is not my functionality; the functionality is what the network has learned. That is AI's challenge today. That's why Lake Fusion is firmly convinced that as soon as I go beyond a level of ASIL B, i.e. towards autonomous driving, I always have to take something deterministic, something unambiguous, something rule-based on board as monitoring. And that is exactly what Lake Fusion does. Lake Fusion helps its customers to secure, and now the focus is back on environment recognition, block one in autonomous driving, an architecture that at the end of the day meets the safety requirements or safety goals that have been set. Take, for example, ZF's people-mover business, where I say the route goes from A to B and during this journey I don't want any collisions with road users, I don't want to leave the trajectory, and even if I run into inconsistent situations I want to be sure that the vehicle stops.
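
What such rule-based monitoring can look like in principle is easy to outline. A minimal sketch in Python, with hypothetical names and thresholds rather than any actual product logic, in which the deterministic channel can always veto the learned one:

```python
# Sketch of a deterministic, rule-based monitor sitting on top of a learned
# perception channel (an illustration of the principle only; all names and
# thresholds here are assumptions).

def release_drive(ai_says_clear: bool,
                  rule_based_free_range_m: float,
                  stopping_distance_m: float) -> str:
    """The trained network may claim the road is clear, but driving is only
    released if the rule-based channel confirms enough obstacle-free range."""
    if rule_based_free_range_m < stopping_distance_m:
        return "STOP"   # the deterministic channel vetoes, whatever the AI says
    if not ai_says_clear:
        return "STOP"   # doubt from the AI channel is respected as well
    return "DRIVE"

print(release_drive(True, rule_based_free_range_m=12.0, stopping_distance_m=25.0))
# -> STOP: the AI sees a clear road, but the rule-based check cannot confirm it
```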

That's what we're going to get, and to some extent that will be the result of what we will see in the coming period. But there, too, it's like in aviation: there is an interim stop where one says, okay, we'll check the system again, find out what happened in the architecture, what was the corner case I hit there that I don't normally have in regular operation, and then I can resume operation. We will experience this. But this learning process will not have any impact on safety. In these systems we will always insist that accidents must not occur. Of course one can argue that there are also accidents in aviation, but if you take a close look, those accidents are really events that one could more or less not have foreseen. Take the crash over the Atlantic with the Airbus plane, which I was involved with: the pitot tubes, that is, the pressure probes, delivered a wrong reading, because it was minus 70 degrees Celsius and even colder there, values that had never been measured in the atmosphere before; then such things do happen. I hope we will experience as good as no such cases, and if so, only very few. The premise must be that in the operation of autonomous driving systems, when in doubt, the vehicle decides: we stop.

We then have to bring the system back into operation, but at the end of the day everyone's safety must be guaranteed. And that is exactly what Lake Fusion does. We are the ones who offer a so-called safety envelope to the customer and bring it to market, and this safety envelope is a rule-based construct in which we always explicitly confirm the environment in the form of a check: yes, there is an obstacle, it is this far away, and it is confirmed via the different sensor channels, giving a very high reliability that the sensor system delivers correct results. Now, of course, there are still challenges.

In the lidar area in particular, and you can see it nicely here in the video, we have weather conditions that keep breaking the nice laboratory definition: all of a sudden we have artifacts, which you see here now; the red pixels are reflections from the fog, which happens at our lake from time to time, and this fog you have to filter out, you simply have to be able to do that. We've been able to do that well in aviation for 15 years, and today we can do it well here on the ground, but you have to have it in mind. And then it's very important that, as an environment recognition system, you ultimately output a value that tells the overall system: where can I actually drive? Is there really road ahead of me? Or is there a big hole in front of me? Or is a road user standing there? Or does the road end at that point? Am I allowed to drive? How far can I drive? This is one of the features Lake Fusion offers on the market: the so-called free space. You see it as a purple area; the road is recognized here, and it matters what I mean when I say "recognize": it actually means I recognize that this space is free.
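
Filtering such fog returns typically exploits the fact that weather returns are weak and spatially isolated. A crude sketch in Python; the thresholds are invented and real filters are considerably more refined:

```python
import numpy as np

def filter_weather_artifacts(points: np.ndarray, intensities: np.ndarray,
                             min_intensity: float = 0.1,
                             radius: float = 0.5,
                             min_neighbors: int = 2) -> np.ndarray:
    """Drop lidar points that are both low-intensity and spatially isolated,
    the typical signature of fog or snow returns (O(n^2), for illustration)."""
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if intensities[i] >= min_intensity:
            continue                      # strong return: keep it
        dists = np.linalg.norm(points - p, axis=1)
        neighbors = np.count_nonzero(dists < radius) - 1  # minus the point itself
        if neighbors < min_neighbors:
            keep[i] = False               # weak and isolated: likely weather
    return points[keep]
```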

That's where SOTIF comes into play again. First of all, I have to establish how far the sensor can actually see. If the sensor tells me, "at 80 m I see nothing," that is far from the statement "there is nothing there," because if the sensor cannot see that far, then it has not seen what is in front of it. If my wife says to me, look in the refrigerator, the butter is inside, and I say I don't see it at all, then she knows: okay, the human factor plays a role, take a look yourself. It may be a simplified example, but it illustrates the point: only when I monitor myself and know how far I can see do I know that I have genuinely identified this space as free. In other words, the algorithm behind it first checks how far the sensor system can currently see.
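
The refrigerator logic fits into one function. A sketch with a hypothetical interface of the SOTIF distinction between "I saw nothing" and "there is nothing":

```python
def confirmed_free_range(max_clear_return_m: float,
                         visibility_estimate_m: float) -> float:
    """SOTIF-style distinction (illustrative): 'I see nothing out to 80 m'
    only counts as 'free out to 80 m' if the sensor can actually see that far.

    Beyond its current visibility the absence of returns proves nothing --
    like not finding the butter because you did not really look."""
    return min(max_clear_return_m, visibility_estimate_m)

# Clear day: sensor sees ~80 m, no returns -> 80 m of confirmed free space.
print(confirmed_free_range(80.0, 80.0))   # 80.0
# Heavy rain: visibility ~18 m -> only 18 m may be declared free.
print(confirmed_free_range(80.0, 18.0))   # 18.0
```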

You can imagine that the weather is nice here in Markdorf. If it starts to rain, the sensor system, especially the laser-based one, immediately has to reduce its range, which can mean losing more than half of it. The sensor that is active here right now has a range of 60 to 70 meters; that's how far it can see the road. When it starts to rain, up to heavy rain, the detection range on the road drops to less than 20 meters. So you can already tell that this has a big impact. It doesn't necessarily detract from the autonomous driving function, because I also always tell my wife: when it's raining, drive a little slower. Autonomous driving must be able to do that, too.
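
"When it rains, drive slower" follows directly from the confirmed detection range. A minimal sketch with an assumed point-mass braking model and invented comfort parameters:

```python
import math

def max_safe_speed_kmh(detection_range_m: float,
                       decel_mps2: float = 4.0,
                       reaction_time_s: float = 0.5) -> float:
    """Highest speed at which the vehicle can still stop inside its currently
    confirmed detection range. Solves  v*t_r + v^2 / (2a) = d  for v
    (simple point-mass model; deceleration and latency are assumptions)."""
    a, t, d = decel_mps2, reaction_time_s, detection_range_m
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # positive root
    return v * 3.6

print(f"{max_safe_speed_kmh(65.0):.0f} km/h in the dry (~65 m range)")     # ~75
print(f"{max_safe_speed_kmh(18.0):.0f} km/h in heavy rain (~18 m range)")  # ~37
```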

What you can see so nicely here is that the sensor system records even more information; for one thing, there is lane detection, so you effectively do lane detection again with this sensor. This is actually pure lidar and is not smoothed in any way: these are single frames, meaning no image is retrospectively accumulated over several recordings and no upsampling is applied here. We deliberately do not do this, because then the image would be beautifully embellished and the real quality would no longer be apparent. In the individual frame you can see very nicely how well the algorithm works and how it ultimately depicts the environment. The camera image behind it is essentially there for orientation, so that you can see the system has really detected the road; otherwise you would only see the pixel representation at the top left, and as humans we would find it difficult to validate that. And that brings us to the keyword: having such systems is one thing, but now I have to validate them, because validating means checking whether they really fulfill the functionality to the required extent. And this is done under various conditions.

We discussed this morning whether we can get as many children as possible to voluntarily throw themselves in front of a vehicle so that we can see whether our vehicle and its algorithms recognize them correctly. That, of course, is not going to happen. So we will develop and apply methods to this end, and one of the core competencies of Lake Fusion is to implement precisely this verification process so that the safety defined by the standard is met at the end of the day. This slide simply summarizes the Lake Fusion portfolio; I don't think I need to go into detail.

It shows first and foremost the safety-critical applications we have just seen so nicely, in the individual software elements that we are now bringing to market; these will become series elements in the next few years. It is the fusion, the combination of the branches, where at the object level I take the detected data from the camera, from the lidar and from the radar and run a comparison across them, one that is of course more intelligent than the one I showed before, and where I then say: okay, now I have achieved a level of assurance and a probability that satisfies functional safety at ASIL D, i.e. the highest level. That brings me to the end of my presentation; I think I have stayed reasonably within time. Then let me say a little about our origins. As Lake Fusion, we perhaps have a bit of an advantage over our competitors in that, where safety is concerned, we were more or less born into the aerospace industry.
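
At the object level, the comparison described here can be pictured as a gated vote across sensors. A toy sketch in Python with an invented gating threshold, far simpler than an actual fusion:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Track:
    sensor: str
    x: float   # longitudinal position, m
    y: float   # lateral position, m

def fuse_two_of_three(tracks: list[Track], gate_m: float = 1.5) -> list[tuple]:
    """Object-level plausibility check (toy version): an object counts as
    confirmed only if at least two different sensors report it within a
    gating distance of each other."""
    confirmed = []
    for a, b in combinations(tracks, 2):
        if (a.sensor != b.sensor
                and abs(a.x - b.x) < gate_m and abs(a.y - b.y) < gate_m):
            confirmed.append(((a.x + b.x) / 2, (a.y + b.y) / 2))
    return confirmed

tracks = [Track("lidar", 30.2, 0.1), Track("camera", 30.6, -0.2),
          Track("radar", 55.0, 3.0)]      # radar-only ghost gets rejected
print(fuse_two_of_three(tracks))          # one confirmed object near 30 m
```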

There was no development company or project in aviation that didn't have certifiability in mind. In other words, before a development project was started, approval was already considered. The same holds for approval in automotive. Today we have large processors at home with 12 cores. If we want to achieve the next higher level of safety in aviation, those twelve cores on one processor shrink to a slim two. That is, to increase safety you cut your processing capability right down to, let's say, the performance of computers that existed maybe ten years ago. Those are the challenges you have to meet.

And the big trick is to solve this via the architecture; the big trick is to know how the sensor technology works, what its strengths and weaknesses are. That's what we've been doing together in this team for ten, fifteen years. The products from that past are life-saving products in civil aviation today, but also in military aviation, and that is exactly what we are now transferring to the automotive sector. We are proud to work with the local industry here, and I personally thank you for allowing me to give this presentation. It didn't work out between us back then because we already had premises, otherwise we would have loved to come here as well. In any case, I wish the RITZ all the best for the future, and I hope that we will find a location here that can expand further into future mobility, internationally as well as nationally; to hold an OSS conference here at Lake Constance, for example, would be an exciting task.

Thank you for your interest; if you have any questions, I am at your service. Are there any questions for Mr. Meyer? Or are you simply blown away? In that case, yes, it first has to sink in. I hope I haven't overwhelmed you. You do have an impressive technology transfer from aviation to the mobility sector. When you look at the life cycle of an aircraft versus a car, do you see that as a challenge or an opportunity? Are the development cycles different? Maybe you could say something about that in terms of safety.

That's a very interesting question; I don't think I can answer it in one sentence either. In aviation we do in fact talk about use phases of around 30 to 40 years for an airliner, and the safety considerations are to some extent aligned with that. In the automotive sector I wouldn't look at the service life of a single vehicle or a single owner; I would rather see a chain. I know that in the automotive industry development cycles, for example a start of production every five or even every three years, are very short compared to aviation; I don't think Airbus is able to bring out a new airplane every three years.

Nevertheless, the safety considerations are completely equivalent at this point. And manufacturers are not fundamentally changing the system, i.e. the environment recognition; they are developing it further, that's right, they are improving it, but they are not going to reduce the safety requirement just because a vehicle is operated for a shorter time. By no means. So I am convinced of that. We had a nice discussion at the beginning: what is it like to sit in a vehicle like that? Do I still read a newspaper? Who is to blame if something happens? The question of blame is still a hotly debated topic, although the legal situation in Germany is such that the Ministry of Transport has to be given credit for having taken a really good step with the legislation.

On the other hand, you don't think about it today when you board an aircraft. You get on the plane, and you don't ask yourself whose fault it would be if something happened. We have to reach this level, and it is my personal conviction that we will accept and enjoy safe autonomous driving in our road traffic ever more quickly as people-movers become more and more present and accepted by the public. I have even bet 50 euros within the company that autonomous driving will progress faster in Germany than so-called e-mobility, electric mobility, will gain ground; I am pretty much of the opinion that this will be a revolutionary topic. Just as back then we used the iPhone only to make phone calls, while today phoning is maybe ten percent of what I use it for, with autonomous driving we are going to get a completely different platform, a completely different usability. We may not even be able to imagine the convenience of it today, but that will be the topic of the future. I hope that answers your question.

Do we have another question? How many employees do you have? We started back then with 7 employees, that was the core team. Today we are at 17 and will be 20 by the end of the year. Maybe a small comment on that as well: when you start as a startup, you first go through a phase of deciding which product you ultimately want to bring to market and what you need for it, and there you are very much looking for performance. Because we came from a large corporation, we have been committed to a proper process landscape, structure and infrastructure right from the start, and the current transition of our processes towards series production is of course another step. We are proud that we have actually been able to achieve this growth here in the region, where it is not so easy to find employees, and we are confident that this will continue in the coming years.
