Next generation opto-electronics in automotive engineering


Will: Welcome to Research Pod. Once upon a time, a car was as simple as an engine, wheels, steering and a seat. That was a long time ago: the latest generations of cars now carry technology borrowed from industries ranging from machine learning to medical imaging. Today I'm speaking with Florian Friedel, group leader for automotive optoelectronic components at Hamamatsu Photonics, about the role their light-sensing technologies play in modern motor vehicles, and what the future may hold for the cutting edge of photon detection. Joining me to discuss this work: Florian Friedel from Hamamatsu. Florian, hello.

Florian: Hi Will, nice to see you.

Will: Nice to have you join us. For the folks at home, could you tell us a bit about yourself: some of your work background, or the steps that led you to where you are now?

Florian: Yes, sure. My name is Florian Friedel, and I work for Hamamatsu on the automotive sales team in Germany and Europe. I'm responsible for our tactical working group Automotive, which handles the strategic planning for us in Europe. I've been working with Hamamatsu for the past ten years now, and I've been in the leader position for the last three years.

Will: Can you tell us a bit about Hamamatsu as a whole company, and where your photonics work fits in?

Florian: Happily. Hamamatsu is a big optoelectronics company; we basically do everything that has to do with light. Our main roles are in the medical and industrial markets. For medical, that's a lot of dental applications like X-ray scanners and things like that. For industry, you can find us everywhere you can find light sensing. That starts with your mobile phone, which detects the ambient light to adjust the display brightness and colour, and the same goes for TVs. We are involved in the drone business for lidar scanning and camera applications, but we also make very simple photodiodes that are used in small devices.

Will: Those are some very modern examples, mobile phones and drone technology. It was maybe a little bit of a surprise to some listeners, as it was to me when I started reading about your work, that cars and automotive engineering have a place in the field of light research. So how do those two relate? What exactly is the niche your work fills?

Florian: We have already been in cars for 25 years with our optical technologies. The thing with our sensors is that you probably won't see them labelled "powered by Hamamatsu" or anything like that, but we are already quite common in cars: almost all of the lidar systems you have driving around in Europe and the US are already using our sensors. You would never know, because we sit at the component level, and usually you only know which Tier 1 supplier has their stamp on the box, or sometimes not even that. So it's nothing new, but with the miniaturisation and everything going on in the general market right now, these optical technologies are becoming more and more important for cars. Past applications are simple things like rear-view mirrors that automatically dim when someone behind is glaring into them, and optical communication via plastic optical fibre in the car for infotainment, TV displays and radio connections. But I do expect our sensors to become much more common, especially with the rise of the lidar sensors we expect in the next couple of years, as cars move to higher autonomous driving functions, or even to fully autonomous cars in the future.

Will: To connect to your own personal history, quite apart from the professional one: were you one of those kids who was into cars and now gets to work on them full time?

Florian: Actually, no. Maybe that's a very unpopular opinion, but I'm not so into cars, and not so into driving cars. I'm already looking forward to autonomous cars, so you can spend the time you have to spend in the car on something else, doing something that you'd actually like.

Will: Yeah, exactly.

Florian: Exactly. I have to drive around a lot,
and the time you spend in the car really does get a bit boring sometimes.

Will: I suppose at least you get to spend your days making that autonomous future a little bit closer.

Florian: Exactly, yeah.

Will: On the daily operations of Hamamatsu in the current fleet of vehicles: you've mentioned some of the older technologies, the rear-view mirrors and the entertainment systems, but in bringing that automotive future closer and closer, are you in any of the current generation of smart vehicles and electric cars, the things people look at on the road and go, "oh, that is very new"?

Florian: We are in some of the very new cars. I'm not allowed to name any specific models here, of course. At the moment, at least, it's usually the very expensive, high-end cars that have already implemented these rather expensive ADAS systems, maybe including a lidar.

Will: And when we say ADAS systems, what does that mean?

Florian: ADAS stands for advanced driver assistance systems. These are the systems that will enable the autonomous driving car; right now, they enable assistance functions like highway pilots or traffic jam assist.

Will: It is, I think, one of the core truths of the world that engineering is filled with many acronyms and abbreviations, and we're going to deal with some of those in talking about your research. We're going to spend a lot of our time talking about MPPCs, SiPMs and SPADs, so could we maybe summarise what some of those parts are, and then how your work puts them all together?

Florian: We are mainly talking, as you just said, about MPPCs and SPPCs; those are the Hamamatsu brand names for silicon photomultipliers (SiPMs) and SPAD arrays. MPPC stands for multi-pixel photon counter, which is the same thing as a silicon photomultiplier: a sensor that amplifies the light by a factor of roughly a million. SPPCs are single-pixel avalanche diode arrays, sensors that can detect even single photons of light, so very, very low amounts of light. From a technology point of view they are very similar, so we can basically talk about both of them in parallel.

Will: OK. To compare them to, say, a digital camera, which has a sensor at the back that takes in light and transforms it into electrical information for digital storage: what kind of scale are the multi-pixel photon counters and the single-pixel photon counters working at, compared to the camera in my phone or the sensor in a DSLR?

Florian: Compared to a regular camera chip, they have a built-in amplification within each pixel. We are also talking about 2D arrays, very similar to a camera sensor, but the resolution is of course much, much lower; you cannot compare it to 4K or HD resolution. The sensitivity of each of these sensor pixels, however, is much higher, up to the extent that with SPADs you can detect even a single photon coming in.

Will: And why would anyone want to detect just a single photon? What are the applications for that?

Florian: For these kinds of sensors, we are talking mainly about lidar applications in the car, and for lidar it's very important to have a certain visible range. A lidar system in general works by sending out laser light, usually pulsed laser light, which is reflected from your obstacle, a car or a pedestrian or something like that, and then you detect the signal that comes back as a reflection with your sensor. As you can imagine, if there is a pedestrian wearing a black hoodie, for example, not much light comes back, so you really need to be able to see these very, very small signals just to detect that the object is there.

Will: OK. To get into the micro-scale engineering of how those chips work, I imagine the speed of processing means a lot. Could we walk through the steps of how a lidar scanner works? Step one, send signal; step two, reflection of signal; step three, processing; and then?
Florian: Step one is sending the signal, by pulsing your laser light. Step two is the reflection of the signal: the signal coming back from the object you want to detect. Step three is seeing the signal that comes back, and step four is then calculating the distance and interpreting that signal. From the Hamamatsu side, we usually focus on the first three steps. We don't deal with the signal processing afterwards, only to a certain extent, but we are the experts for the photosensors, so we deal with everything you can do with the light, plus a little bit of readout signal processing. That means, very simply, that with the information we get from the laser and from the sensor we can, for example, directly calculate the distance to the object, but we don't do any interpretation of the object: what kind of object it is, how it looks, what its geometry is, and so on.

Will: To come back to those acronyms, SPAD and MPPC, the single-photon and multi-pixel photon counters: how do those compare? Why would you use one rather than the other, and what kind of information do you get back from each?

Florian: That's a very good question, and it is a bit complicated, because both technologies are actually very similar from a technology point of view, but in how you use them they are still quite different. An MPPC, or silicon photomultiplier, is basically a SPAD array in which multiple single SPADs are combined into one sensor cell. To rephrase that: in an MPPC you have multiple pixels for each output channel, while in a SPAD array each output channel is one pixel of the sensor array.

Will: OK, so it can collect all of that information into one point.

Florian: Into one point, exactly. But that also means that the readout and the interpretation of the signal are quite different, because with SPADs you only ever get the information "light or no light": zero or one.
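The four steps Florian describes, pulse, reflection, detection, distance calculation, reduce to a simple time-of-flight formula: the pulse travels to the object and back, so the distance is half the round-trip path. A minimal sketch (illustrative numbers, not Hamamatsu device data):

```python
# Time-of-flight distance calculation behind a pulsed lidar (step four).
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting object from the pulse round-trip time.

    The laser pulse travels out to the object and back to the sensor,
    so the one-way distance is half the total path length.
    """
    return C * round_trip_time_s / 2.0

# A reflection arriving 667 ns after the pulse left corresponds to ~100 m.
print(round(tof_distance_m(667e-9), 1))  # → 100.0
```

The nanosecond timescale here is why the sensor and readout ASIC have to be so fast: one metre of range is only about 6.7 ns of round-trip time.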
So you have to look at the histogram of your readout to find your real object among all the noise and ambient light. With MPPCs, you get a certain signal height depending on the amount of light that comes back, so the readout is much closer to regular sensors like photodiodes or avalanche photodiodes: you can really look at the signal height, and the higher the signal, the more light you are getting back.

Will: You mentioned the noise as something to filter out, and reading through some of the material available from Hamamatsu there is also discussion of ghost objects and crosstalk. Looking at that histogram, the bar chart that tells you yes or no, whether you reach the threshold for light detection or reject it as noise: how complicated is it to get an accurate answer, and what can be done to ensure you are getting the right information at the right time?

Florian: It's not that easy, that is true, unfortunately, because we have such a high amplification built into these kinds of sensors, and you cannot avoid the ambient light. You will always have some background light, coming from the sun, from city lights, or from other cars, so you really have to focus on the wavelength of the light you are sending out and try to collect only that signal. By using a bandpass filter, for example, you can remove a lot of the wavelengths that are not used by your laser. In addition to the ambient light, you always have some thermal noise in the signal that cannot be avoided, and you will always see this in the sensor, so you always have to find the right threshold to distinguish this noise, combined with the last remaining ambient light, from your real signal. That is actually one of the most complicated topics here. The crosstalk is basically the effect that a signal on one pixel has on the pixel next to it.
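Because each SPAD only reports "light or no light" per time bin, the standard trick is to fire many pulses and accumulate the binary hits into a histogram: the real return piles up in one bin while ambient and thermal noise spread evenly. A minimal simulation of this idea (hypothetical probabilities, not measured sensor behaviour):

```python
import random

random.seed(0)  # deterministic for the example

def accumulate_histogram(num_pulses: int, num_bins: int,
                         signal_bin: int, signal_p: float,
                         noise_p: float) -> list[int]:
    """Accumulate binary SPAD detections over many repeated laser pulses.

    Every time bin of every pulse yields only 0 or 1; the real echo shows
    up as an excess count in one bin above the flat ambient/thermal floor.
    """
    hist = [0] * num_bins
    for _ in range(num_pulses):
        for b in range(num_bins):
            p = signal_p + noise_p if b == signal_bin else noise_p
            if random.random() < p:
                hist[b] += 1
    return hist

hist = accumulate_histogram(num_pulses=2000, num_bins=50,
                            signal_bin=17, signal_p=0.05, noise_p=0.01)

# Set the detection threshold well above the expected noise floor
# (mean noise count is 2000 * 0.01 = 20 per bin; use 3x that).
threshold = 2000 * 0.01 * 3
detections = [b for b, count in enumerate(hist) if count > threshold]
print(detections)  # only the bin containing the real echo survives
```

Picking that threshold is exactly the balancing act Florian describes: too low and noise bins produce ghost detections, too high and the black-hoodie pedestrian disappears.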
Between each pixel and each readout line there are certain inductances, which means that a magnetic field is introduced when you have a large signal on one pixel, and this magnetic field might influence the signal output of the pixel next to it. So you might see signals on both of these pixels, even though you only have a real signal on one of them. From our experience with our APD arrays, we know that this is very critical, especially for lidar applications, because it can introduce what we call ghost signals: you see things that are not really there, just because of the crosstalk. So it needs to be avoided.

Will: A spread of false positives around the real signal.

Florian: Exactly, exactly.

Will: There's one other part that leapt out to me, which was the use of micro lenses in focusing the light coming in. How micro a lens are we talking about?

Florian: This is actually one of the newer technologies, which we have only used since last year and which we now implement on all of our MPPC and SPPC arrays. We use these micro lenses in such a way that the size of each lens matches the size of one pixel in our 2D sensor array. That means one lens is, depending on the pixel size, between 10 and 25 micrometres.

Will: With the use of those micro lenses on the MPPCs and SPADs, what kind of advantage do they offer compared to using the sensors as they were?

Florian: In a regular sensor array without micro lenses, you will always have a small gap between the active, light-sensing areas of each pixel. With the micro lens array, you can almost 100% eliminate these blind spots between the pixels, because all the light that falls into these in-between areas is focused onto the active area by the lenses.

Will: It's a big advantage in terms of the clarity of information that you get coming in.

Florian: In terms of the clarity of information, and the PDE, the photon detection efficiency, is also increased in the end.
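The gain from the micro lenses can be pictured as a fill-factor calculation: the fraction of each pixel's footprint that is actually light-sensitive. A rough sketch with made-up geometry (the pitch and gap below are hypothetical, not Hamamatsu device parameters):

```python
def fill_factor(pitch_um: float, gap_um: float) -> float:
    """Fraction of the pixel footprint that is light-sensitive, assuming
    a square active area of (pitch - gap) inside a pitch x pitch pixel.
    Illustrative geometry only, not real device data."""
    active = pitch_um - gap_um
    return (active / pitch_um) ** 2

# Hypothetical 15 um pixel with a 3 um dead border around the active area:
print(round(fill_factor(15.0, 3.0), 2))  # → 0.64, i.e. 36% of light lost

# A well-matched micro-lens array funnels light falling on the gaps onto
# the active area, pushing the effective fill factor toward 1.0.
```

That jump from roughly two-thirds to near-total light collection is why adding lenses directly raises the photon detection efficiency, as Florian notes.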
Will: OK. Now, to talk about the new series of technology coming out of Hamamatsu this year, and maybe the next couple of years as well: to put some numbers to it, in terms of percentages, resolution and available light processing, how much quicker, faster and clearer are things getting?

Florian: That is not easy to answer, because we focus very much on custom-specific sensors, especially for automotive, since from our point of view every Tier 1 and every OEM has slightly different requirements for their perfect system, so to say. That is true on the sensor side, but also on the readout, the ASIC side behind it, so we really focus on building custom-specific sensors for each individual set of requirements. It's tough to give details, but speaking very generally, we are working on improving the photon detection efficiency even further. One of the problems is that we usually use near-infrared light for most of these lidar sensors, just beyond what the human eye can see; we are a little higher in wavelength than human vision. Unfortunately, the silicon material used for almost all of these sensors is already becoming transparent at these infrared wavelengths, so it's really tough to catch enough light to see the signal. That is what we are working on improving right now. We have a photon detection efficiency of roughly 20% in our sensors, and within the next year we are targeting 25%, maybe even 30%, by combining a couple of different technologies.

Will: That would be a 25 to 50% increase.

Florian: Very roughly, yes, exactly.

Will: In terms of putting all of that into application, there is another acronym that we've encountered now: ASICs. Are those part of that percentage boost you've mentioned?

Florian: The ASIC is basically the readout circuit that sits right behind the sensor, to manage all the readout from the optical sensor and do some very rough signal processing.
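Will's "25 to 50% increase" is worth making explicit: a jump from 20% to 25% absolute PDE is a 25% relative improvement, and 20% to 30% is a 50% relative improvement. As a two-line check:

```python
def relative_increase(old: float, new: float) -> float:
    """Relative improvement of 'new' over 'old', as a percentage."""
    return (new - old) / old * 100.0

# PDE targets mentioned in the interview: 20% today, 25-30% targeted.
print(round(relative_increase(0.20, 0.25)))  # → 25
print(round(relative_increase(0.20, 0.30)))  # → 50
```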
For example, thresholding: checking whether the signal is above a threshold, and if it's below, not even taking it into account. Maybe a small buffer to record a couple of milliseconds of your signal. Very simple things, but it's important to place them as close as possible to the sensor, to avoid parasitic effects such as crosstalk. That's why we combine the ASIC directly with our sensor.

Will: And all of this is being done, as you say, in custom scenarios for custom applications.

Florian: Exactly. We basically have a couple of different options here, like a library of different ASIC functions that can be implemented for each individual customer design.

Will: One of the new-generation features will be multi-echo detection, which is a very fun phrase to say, but what does it actually entail in terms of managing all of that signal?

Florian: It is a bit clunky. Multi-echo detection basically means that when I send out one laser pulse, it might be reflected not from only one surface but from several, so I get back not one signal but multiple signals. For example, in bad weather, snow or rain, part of the signal might already be reflected from raindrops, so I get a small signal back from the weather effects and another from the car. And when we talk about a car that I want to detect, it doesn't have one flat surface; it is stepped, so a couple of different signals at different ranges might come back from one object. With multi-echo detection, I can detect several of these echoes coming back from a single laser pulse, and I know that they all belong to that one pulse.

Will: That helps manage all of that scattered information.

Florian: Exactly. And if I have multiple objects that I want to see, like a pedestrian and a car, one laser shot can also distinguish between the multiple objects in my path.

Will: That sounds like something that would be very important, because any misinformation could have critical consequences.
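Multi-echo detection amounts to finding every peak above threshold in the digitized return of a single pulse, rather than stopping at the first one. A minimal sketch of that idea (the waveform values are invented for illustration):

```python
def multi_echo(waveform: list[float], threshold: float) -> list[int]:
    """Return the indices (time bins) of all local maxima above threshold.

    Each index is one 'echo' of a single outgoing laser pulse, e.g. a weak
    return from rain near the sensor plus a stronger one from the car
    behind it. Bin index maps to range via the time-of-flight relation.
    """
    echoes = []
    for i in range(1, len(waveform) - 1):
        if (waveform[i] > threshold
                and waveform[i] >= waveform[i - 1]
                and waveform[i] > waveform[i + 1]):
            echoes.append(i)
    return echoes

# Hypothetical digitized return: a weak echo from raindrops at bin 3 and
# a stronger echo from the target vehicle at bin 8.
wave = [0.0, 0.1, 0.4, 0.9, 0.3, 0.1, 0.2, 0.8, 1.6, 0.7, 0.1]
print(multi_echo(wave, threshold=0.5))  # → [3, 8]
```

A first-return-only system would report bin 3 (the rain) and lose the car entirely, which is exactly the failure mode multi-echo detection avoids.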
Florian: Exactly, exactly, yes.

Will: Well, looking to the future, then: what would you say the near future holds for Hamamatsu technologies over maybe the next five years, and then beyond that, the kind of future we may not even be prepared for?

Florian: We are right now working on, or rather in the final steps of, the back-illuminated MPPCs; the samples are in production right now. Back illumination of the MPPCs is basically there to further increase the photon detection efficiency again. We did one step this year with the micro lens arrays on top of the sensor; the next improvement will be the structural change from front-illuminated to back-illuminated MPPCs. With that step, we are also providing a couple of standard arrays with a certain pixel resolution, which will be available from the beginning of next year. The resolution in this case is still not too high, but as I mentioned, these are standard products, and we are rather focusing on custom arrays. We are already talking with several customers here in Europe, in China and in the US, and with these customers we are also discussing larger-resolution arrays.

Will: Is there anything you see that might be a limit to the growth or development of the arrays? Do you see a threshold that is going to stop the development of new products, whether in terms of signal processing speed, photosensitivity or material availability? Is there anything that would stop you getting to that future?

Florian: With the technology we have right now, it seems that the photon detection efficiency will hit a limit at around 30 to 35 percent, something like that. If we want to go beyond this, we either need to use different materials or maybe even different technologies. As of today, the second limit is the resolution of these sensor arrays.
The limit is not per se the resolution of the sensor array itself, but the readout circuitry that the array needs: as the array gets larger, you can imagine that the readout circuit also gets more complicated and larger, and in that case thermal management can become difficult, because the ASIC heats up quite significantly, and optical sensors in general do not like being heated. So we need to find a way to keep the heat as low as possible without affecting the sensor array.

Will: And lastly, between the Hamamatsu labs where you are working and the end users, the people sat in cars: who should have learned something new from this interview and all the things we've talked about, and what kind of summary would you want to give them?

Florian: I think it's interesting for everybody who is interested in lidar technology and in the new optical technologies coming to market right now, and also maybe for car enthusiasts who want to understand how the new functions being pushed into cars actually work, and what the background behind them is. The easiest summary to give is that there are many different lidar technologies on the market, and also different sensor technologies in use, so it is very important to know and understand your exact use case, and by that I mean the scope of the lidar system you are using. Do you have a long-range lidar, a short-range lidar, a lidar cocoon for very short range? Then you need to know which kind of technology and which ASIC functions are important for your use case, because every lidar is different: every lidar has different advantages, disadvantages, and requirements on the sensor. So you need to understand what is important for you and which functions are best suited to you, and we at Hamamatsu focus on helping you with those choices, and with combining the sensor array with the necessary ASIC functions.

Will: If people want to know more about Hamamatsu and all of the work that is coming from your labs, where can they find that information?

Florian: They can find it on our web page, hamamatsu.com.

Will: Florian, thank you so much for your time and for talking with us today.

Florian: It was a pleasure, Will. Thank you very much.

2023-04-06 04:40
