Robotics, Drones, Nanobots, Simulation, Self-driving cars | The SCI-AI Podcast Ep. 32 | Julia Nitsch


Vivek: In this podcast, we will talk about some of the greatest technologies that exist today: robotics, drone technology, simulation, self-driving cars, and humanoid robots. I believe that in the future we will be using all these technologies and robots in our everyday life. We have already started doing it with robotic mowers in our gardens, and robots will very soon also come into our homes, assisting us with the repetitive work we do so that we can focus on better things in life: thinking about how to make ourselves healthier, spending more time with our families, and creating a better world by creating jobs. Hence it was very important to discuss with a roboticist what they think about how robots, especially the ones that are getting intelligent nowadays, will change and shape our world. I hope you will enjoy this podcast.

And if you do, please like, subscribe, and share this podcast with your network. As a podcast host, I need this support from you. Also, on our YouTube channel, please do not forget to subscribe and hit the bell icon so that you are notified of the videos we upload for our podcast.

Hello, and welcome to The SCI-AI Podcast. This is episode number 32. Today my guest is Julia Nitsch.

Julia is a roboticist focusing on the perception of autonomous robots. She has worked on multi-modal perception and machine learning algorithms. She also has a PhD and it is my pleasure to invite her on the podcast. Hi, Julia.

How are you doing? Julia: Thank you, I am fine. I'm looking forward to our discussion. Vivek: Okay, great! As we all know, drones are used in many practical areas nowadays, including military warfare, drug delivery, and much more. Drone technology has been taking us by storm for almost a decade now. How does machine learning create models and opportunities for unmanned aerial vehicles (UAVs)?

And also, if you can, talk about the use of deep convolutional neural networks and their role in visual perception. Julia: Sure. So you're talking about unmanned aerial vehicles, especially in the delivery field. What we actually want to achieve, or where I see the biggest benefit of these drones, is when we can really deliver goods to remote locations, maybe also in urgent cases, like when we need to deliver medicine. Sometimes we don't have the opportunity to have a single pilot for each drone.

So we really need to get to a stage where the drones can fly as autonomously as possible, and that's where I see the big benefit. For this autonomous stage we need good environmental perception: the drone simply needs to understand where it is and what it is surrounded by.

For this, we have the opportunity to use different sensors, and as you said, one of them could be a camera, with which we can perceive the whole environment and detect objects. Especially when it comes to perceiving the environment and object recognition, we currently see that neural networks, and particularly CNNs, are simply the state of the art for these object recognition tasks.

So that's, for me, really the big benefit and where we can apply them, in particular because most of the time we know which objects to expect, which ones we don't want to fly into, and we also need to perceive our landing zone so that we can do a safe landing and a safe delivery. That's where CNNs play a big role. Vivek: So how about the use of machine learning to create models and opportunities here? Julia: CNNs are a big topic in object recognition. Of course, what we always need to do is run the complete machine learning pipeline.

So we need to collect our data. We need to record our own datasets, because for drones it's different from normal cameras: we simply have a different perspective on the objects, which is not that well represented in other datasets. That's why we run the complete data collection and data cleaning pipeline, making sure that we have a broad dataset of what we need. This is the first step, of course. Another big topic is using data from simulated environments, so that we can enlarge our datasets with simulated data and then run some computer vision techniques, or also neural networks, on it so that the data gets more similar to our recorded data.
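To make the step of adapting simulated imagery more concrete, here is a minimal sketch of a photometric augmentation pipeline that nudges rendered frames toward the look of real, noisy camera footage. It assumes PyTorch/torchvision and PIL images as input; the transforms and parameter values are illustrative assumptions, not details from the episode.

```python
import torch
from torchvision import transforms

# Photometric augmentation applied to rendered (simulated) frames so they
# look a bit more like real camera footage; all parameters are illustrative.
sim_to_real_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.2),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
    # mild Gaussian noise, roughly mimicking sensor noise
    transforms.Lambda(lambda img: torch.clamp(img + 0.02 * torch.randn_like(img), 0.0, 1.0)),
])

# Usage: tensor_frame = sim_to_real_augment(pil_image_from_simulator)
```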

Then we can really start defining the objects and training object recognition models on them. Of course, when we are talking about object recognition, we are often talking about refining existing models. Sometimes it's necessary to train models from scratch, so without any pretrained parameters, but especially for object recognition I think it makes a lot of sense to refine existing models instead of starting from complete scratch.
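As an illustration of refining an existing model instead of training from scratch, here is a brief transfer-learning sketch following the standard torchvision recipe: load a detector pretrained on COCO and swap its classification head for a small set of hypothetical drone-relevant classes.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Detector pretrained on COCO (older torchvision versions use pretrained=True).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box classification head for our own (hypothetical) classes.
num_classes = 4  # background + e.g. landing pad, person, power line
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tune with a modest learning rate instead of training from scratch.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=5e-4)
# ...then iterate over the drone dataset: loss_dict = model(images, targets), etc.
```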

One important aspect of machine learning is not only the learning and creation of the models but also the deployment. We have to imagine the system: a drone flying over an area, so we really have an autonomous robot, and we usually have embedded hardware, which is not as powerful as the laptop you might develop on.

This is one big aspect: these models also need to be really targeted to the execution hardware. And of course, one big topic for drones is always battery life, so there is always a trade-off regarding the computational expense we can afford in terms of battery life. I think this is the same for all autonomous robots.

It will probably also become a big topic when we're talking about electric cars. But on this target platform where we really execute things, we can't just use anything. We really need to take care of what we want to deploy and where the benefit lies, in terms of computational cost and in terms of extending the lifetime or the runtime of our drones.
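To give a flavour of what targeting embedded execution hardware can look like, here is a hedged sketch that exports a small classifier to ONNX and sanity-checks it with onnxruntime on the CPU. The model choice, input size, class count, and file name are assumptions for illustration; quantization or a hardware-specific runtime would typically follow the same export step.

```python
import numpy as np
import onnxruntime as ort
import torch
import torchvision

# Small, mobile-friendly backbone as a stand-in for the deployed model
# (older torchvision versions use pretrained=False instead of weights=None).
model = torchvision.models.mobilenet_v3_small(weights=None, num_classes=3)
model.eval()

# Export to ONNX so an embedded runtime can execute it.
dummy_input = torch.randn(1, 3, 224, 224)  # one RGB frame, 224x224
torch.onnx.export(
    model, dummy_input, "drone_classifier.onnx",
    input_names=["image"], output_names=["logits"], opset_version=13,
)

# Sanity-check the exported graph on the CPU.
session = ort.InferenceSession("drone_classifier.onnx")
logits = session.run(None, {"image": np.random.rand(1, 3, 224, 224).astype(np.float32)})[0]
print(logits.shape)  # (1, 3)
```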

Vivek: That's a great point, especially the battery aspect of it. I'm pretty sure that in the future Tesla might win this space if they want to compete in the UAV segment, because their self-driving cars and their batteries are already giving a lot of competition to other makers. How do you feel being one of the few women in drone technology worldwide, and what are the challenges and opportunities it presents, especially for women in this field? Julia: I actually looked up the numbers for Germany, because I'm currently located in Germany, and about 17% of software developers are female. So this is a very, very low number.

And I can say that for roboticists it is even a little bit lower. But this is too few; this is simply too few. I think it's a pity that I don't have more female colleagues. There are multiple reasons for that, starting from stereotypes and going all the way to wrong expectations about the job or not knowing what this job is really about. And the question is, of course, what we can do about it

as developers and as a society, to encourage more women to get into this field. What I'm doing has two main points. The first is really being a role model to other women in the technical field, so that maybe more women are inspired to switch to this field or even start in it. What I'm doing is voluntarily giving coding lessons for children,

so that the little girls see me and maybe I can inspire them towards a career in AI. It's also simply about children getting in touch with technology at quite an early age, because at least in Germany this is not part of the regular school education. So they really need organisations, or maybe encouraged teachers, to give them some touch points with technology. That is one part of what I'm doing, and in the organisation where I'm volunteering

we have two formats. One is open for children between the ages of 10 and 19, I think it was, so anyone can sign up. Some of these lessons we close, so that they are only for girls and women; every girl above the age of 10 or every woman can apply for those. We saw that especially these lessons get a lot of traction, and we always have full classes there when we're talking about AI and ML. So this is one point for me: being a role model and bringing women in touch with the technology. And the second point is networking, networking, and networking, so that we really

bring together female technicians so that they can exchange about the challenges they face. What we also do is take part in regular female networks, not only for technicians, but also to encourage other women to at least have a look at this technical field. As you might know, in this industry we don't only have purely technical roles but a lot of other roles too, so more women can try things out here. And I think this is really an important part,

at least something I feel I have to do to encourage more women into this field. And that's what I can also see happening, not only in industry but, for example, also in academia: at tech conferences there are sometimes special women's breakfasts where we can connect and exchange with each other. I think that's the way to go, and hopefully we will see more women entering this field at some point.

Vivek: Yes, that's fantastic. It's great work that you're doing, bringing more women and girls into this field, especially in AI. It's quite commendable, and I wish you all the best. I hope we get equal participation, equal wages, and equal opportunities for women. So I wanted to ask you about the field of robotics. It's something that is also maturing in terms of being adopted by real businesses that impact normal people in everyday life.

How powerful, then, are autonomous robots at carrying out tasks that are intelligent and that require complex interaction with their surroundings? Julia: Really good question. What we can observe is that robots are more and more used as co-workers for repetitive tasks, so not for the intelligent tasks you described, but for example bringing boxes in a factory from A to B, where we have a very limited operation area and we know what it looks like. I think this is really the area where robots currently bring the most benefit, as co-workers helping people by taking over repetitive tasks, because in the end the way we program robots is that we as developers have to tell them explicitly what they need to do.

And for that, we restrict them to a specific environment where they can operate. So when we're really talking about intelligent tasks, it becomes really hard for robots to enter this field.

Because even when you're talking about AI, and machine learning in particular, we still tell them what to look for through the datasets we provide and the objectives we define, and they simply cannot generalise concepts. We tell them what to look for, and they look exactly for those things.

At the moment, we don't have anything like artificial general intelligence, so it's really us as programmers, as developers, telling them what to do. And I think that's why we currently see this limited area where we can really apply robots with benefit, and that is really why we still need the human workforce. That's right:

the dull, repetitive tasks we can hand off to the robots, but for the intelligent or intellectual tasks we still need humans, and this won't change any time soon. Vivek: Yeah, that's a great point. The reason why I asked is that doing repetitive tasks is still quite okay,

and robots have been doing it for, I think, the last two decades. What really matters is when you put general intelligence into it, where they start reasoning about their surroundings, something like an autopilot that pays attention to the surroundings and learns from them. You would still need a human to provide those datasets and models, but it would first learn and then act, and that's where all the complexity comes in. Julia: Yes. And when you're really talking about robots, or even Industry 4.0,

it's still the same. The robots can really work alongside humans, and even the environment can change, so it's not like the fixed automation pipelines we have known for the past two decades. They can interact with humans, they can drive alongside them in the hallway,

they can do a lot of things. But in the end, it's still the developer who programmed it; it's not like we let them out freely in the factory and they learn what they need to do purely by themselves or by imitation. Until we get there, there is a long, long way to go. Vivek: Right. And that is what I was about to ask you.

How do you envision the future? Is it 30, 40, 50 years from now that more intelligent robots will be around us? Julia: I can't put a number on it. What I can really recommend, and I think it's a fantastic book to read, is 'A Thousand Brains' by Jeff Hawkins. There are complete chapters about this general intelligence, and I think as long as we haven't understood how our brain works, it will be hard to try to reproduce that behaviour in robots.

Vivek: Great, absolutely. In previous episodes of my podcast I have covered that with a lot of AGI researchers as well.

And they are also of the opinion that it is a monumental, fundamental shift in how we develop intelligence, because right now, with weak AI or narrow AI, it's more tool-based. When you want the robot or the AI to truly learn by itself, you're not doing it justice by just adding a few good lines of code.

You have to understand how the human brain works, in philosophical terms as well, and without that you cannot build a very robust AI. In robotics, what is it that is groundbreaking, which will let us focus on more productive work while it autonomously performs the more repetitive tasks of our daily lives or in businesses? Julia: Well, I don't think there is a single thing which will change everything.

I think it's a combination of a few steps which we are taking. For me, when we are talking about our working life, I see a huge benefit in this Industry 4.0 and IoT topic, when we are talking about factories where we can really use robots alongside humans. They are operating alongside humans in the human environment, so the question is whether they can do the repetitive work with fewer errors than a human, or whether we can simply free human resources for intelligent or intellectual tasks there. I think this is the big benefit when we are talking about these cobots working alongside us. That is one aspect, but I think we will see a few more changes in the future. As a roboticist, of course, I hope to see more and more robots going into areas that are simply too dangerous for humans.

When we really talk about search and rescue scenarios, or maybe maintenance in nuclear power plants, we can send in robots so we don't risk human lives. This is really somewhere I see a big benefit of robots. And coming back to our working life: when we are talking about this whole automation making things easier, improving the workflows we currently use so that we can get rid of these repetitive tasks, people can at least be freed up from these workflow parts,

with the help not only of robots, but of full automation tools and pipelines, the way we process language nowadays, which is also getting more and more inclusive, and the way we currently use computers. Vivek: Let me ask you something connected to that.

You mentioned nuclear plants, where sending robots is much better. So in terms of space science, how does robotics help civilisation in making discoveries or finding the hidden rules of our universe by doing research on other planets or moons? Julia: Yeah, I think there is simply no other option than using autonomous robots, simply because this environment is too hostile for humans, especially when we are talking about Mars missions. We have no other option than sending robots there, and due to the really large distance, we can't even teleoperate these robots.

So there is a strong need for truly autonomous robots. They need to survive there; they need to operate there for months before they can send things back to us. And as a roboticist, when I think of letting a robot run for months without a reboot, that is a really, really big thing, making sure it does not get stuck,

maybe because of some kind of software bug, but also because of all the mechanical things; we simply don't fully know what the planet looks like and what we need to think of, so that it doesn't get stuck in some stupid hole and then can't do anything anymore. For me, it's really interesting to see how those robots are designed. And then there is also the development timeline:

they were already sent years ago, with past technology, compared to what's available now. So I think we can really be curious to see what we get back from space in a few years. Vivek: Yes, that's a great point.

Imagine sending a whole robot by itself, staying there for years on a planet and sending data back to us. Like you talked about the battery in UAVs, how do you have a robot sustain itself using solar energy on Mars? There are a lot of parts to it, and how people collectively come together at NASA and other organisations to build something like that is phenomenal.

So, talking about nanobots or nano robots, and shape-changing robots, which are out there as well, since they can fit into smaller spaces where a human hand or devices cannot reach: what are the criteria in building such robots? Is it the finesse of building them, the speed to perform an action that the human brain, hands, or legs can't perform, or the precision of not making a mistake? What kind of ideas are put into it? Julia: When I think about nanobots, medicine immediately comes to my mind, and all the great research which has been done there with nanobots.

And this is, for me, really interesting, but also a little bit far away from my daily work. When we think of nanobots in terms of medicine, we have to get away from the idea that this is a little mechanical bug that you swallow and that then crawls around in your body. It's completely different how they are designed; you can't compare them to mechanical robots at all. They use techniques like moving them with ultrasound inside the body and bringing them exactly where the doctor wants them to be, or even delivering drugs precisely to the part of the body where they are needed. So for me, this is really a kind of art, how these robots are designed.

I can't even think of how to start with the design process at all, but this combination of technology, mechanical engineering, and medicine is simply amazing. And I'm really curious what we will see in the future in terms of treatments,

and how precise those treatments could become. Of course, when we are talking about the human body, these robots operate on a completely different scale, and the precision they need to reach is so far beyond the precision I need to reach with drones or UAVs.

We're not talking about millimetres here. If I can land a drone with millimetre precision, everyone is happy and we are good to go, and we have enough space everywhere. But in the body you have simply no space at all and no room to make mistakes.

So in terms of precision, timing, and control, I think this is really a completely different story. As I said, there is a lot of research going on, and I'm really curious what we will see there in the next five to ten years. Vivek: Fantastic, yes. The kind of tech, the kind of design, the kind of art, like you said, it's pretty phenomenal how a small nanobot is sent inside the body to treat certain things.

And maybe even outside of healthcare there are more applications, for example in farming, where they are using small drones, like bees and others. Julia: Yeah.

One application I just read about, from an Austrian startup, is cleaning drains and tunnels, where you can send in robots and be much, much faster than if you had to open up all the different pipes and clean everything. From this perspective, it's simply so much faster and so beneficial to use them here. So yes, I think we'll see a lot of different applications with small-size robots as well. Vivek: Right.

That's a great point, a great point, with all kinds of applications, not just healthcare.

Like you said, there are uses in drains, trains, airplanes, everywhere. Since we discussed robotics and how some of it could be groundbreaking: one of the humanoid robots that I came across and wanted to talk to you about is Ameca. What kind of AI is used in building it, what are your thoughts on how it can change how we see robotics, and how is it possible to teach complex dexterity using AI simulations? Julia: Yes. For me, these humanoid robots are really

among the most complicated robotic systems you can think of, in terms of development and interaction. I mean, they are really built to interact with humans, and for me this is a super difficult part.

Let's just talk about the interaction with humans: all the natural language processing, so understanding different people, understanding different accents or dialects in a language, and then processing the meaning and giving clever answers. Just this part on its own is already challenging and difficult. When you look at Ameca, they not only have the voice output, but they're also doing gestures, and I think they have something with the gaze.

So there are eye movements when it speaks. It is even more difficult to program or learn this behaviour; it is probably learned from watching human interactions. That is already one big topic, and we are not even talking about the robot moving yet. It is a walking robot, and for this walking behaviour, when you think of controlling just the legs so that it does not fall over: I think most listeners have seen the first videos of walking robots that constantly fall over and over again.

Of course, there are reinforcement learning algorithms which you could use to learn such a behaviour, but it is still a huge pile of work to get it working that nicely. And now we are at the point where the robot can move, it can move its arms for gestures, and it can interact with its voice. But the robot is not yet doing something meaningful, and usually the acceptance of robots by humans starts to rise when they do something meaningful.

Now imagine just setting the table for dinner and letting a robot do it: having all these interactions with the human environment, manipulation, perceiving the environment, finding the right objects like the plates and cutlery and putting them in place. Again, a completely different challenge. From the robotics side it's super interesting because there are so many different aspects; you need experts from all the different fields, from perception to human interaction, NLP, and control. Yes, it's a really interesting project.

And I think the big benefit of those robots could really be the interaction with people: to build acceptance of robots and also to understand where the limitations are. Because if you're speaking to such a robot, you see that this is not general intelligence. You can't have a clever conversation with it; you might ask it to perform task A or task B. Really seeing these limitations would, I think, take away a lot of the fear of robots from people. There is no chance that this thing will take over the world or rule the world.

And I think seeing this live in action, seeing where the limitations of robots are, is really important for the broader public. Vivek: So, about Tesla: their AI is also top-notch in terms of Autopilot, computer vision, and everything they do to make sure that self-driving works. So, as per you, is Tesla the world's most successful robotics company,

because basically they have a semi-sentient AI placed inside their cars? Julia: For me, it's hard to call something the most successful robotics company in the world; based on which criteria? What I really admire Tesla for is how they started off building electric cars and how they convinced people. At the point when they started, I think everyone was against electric cars; hardly anyone could imagine using them, and they managed to build up a complete company just on their belief that this is the next thing to happen. For me, this is really the big story of a startup disrupting or transforming the market, because if Tesla hadn't been there, I don't know whether so many car manufacturers would have electric cars right now.

Just from this perspective, it's really admirable how they built this business by strongly believing that this is the right thing to do. That is really the big positive aspect here. Yes, it's a complete success story.

Whether it's the most successful, as I said, is hard to tell. From a robotics perspective, when we are talking about Tesla's Autopilot, for me this is a little bit more difficult. First of all, they call it Autopilot, but when you take a look at the autonomy levels,

it's probably something around Level 2, maybe Level 3, which means that in the end the driver is completely responsible for taking over if something happens. The software is not taking responsibility.

So if it says 'take over', you have to take over immediately; you don't have time to say 'I want to finish reading this newspaper article first'. You need to take over immediately.

So for me, the gap between calling something Autopilot and it being Level 2 is at least difficult, but let's state it that way. Continuing with this Autopilot software: I think currently they're only using camera and radar sensors, and they want, or at least there are rumours that they want, to switch to camera-only systems, because they strongly believe that driving with cameras only is the way to go. That's where I have a different opinion, because when I'm working on robotics perception, I really need a multimodal sensor setup. Multimodal means I have different sensor types, because each sensor has its own pros and cons and works in different environments. For example, the camera, as you can imagine, works well in good weather conditions.

You can see far, you can see objects like we do as humans. But if it gets foggy and you just have a white wall, you don't see much. That's where the radar kicks in and can really detect objects beyond this fog wall. There we could even become better than humans at detecting objects: "Be careful, there is something in 50 metres, in a hundred metres."

This is something I can't follow; I can't understand why they are dropping this multimodal sensor approach, and I think they're one of the few companies doing that. For me, this is really the difficult part. It's quite well known that they don't want to use LIDAR, and for me this is the sensor that really gives an accurate 3D representation of the environment at reasonably large distances.

So this could be a big benefit in planning and perception, and in really estimating how far away objects are. In combination with the radar, which can really measure velocities, plus the camera for detecting all the visual signs that are set up for humans, for me, this would be the perfect combination.
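As one small, concrete piece of the multimodal setup described here, this sketch shows the classic step of projecting LIDAR points into the camera image so that depth and appearance can be fused. It assumes known calibration (a 4x4 LIDAR-to-camera extrinsic transform and 3x3 pinhole intrinsics); the function name is illustrative.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project LIDAR points (N, 3) into pixel coordinates.

    T_cam_lidar: 4x4 transform from the LIDAR frame to the camera frame.
    K: 3x3 pinhole camera intrinsics.
    Returns pixel coordinates (M, 2) and depths (M,) for points in front
    of the camera.
    """
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4)
    points_cam = (T_cam_lidar @ homogeneous.T).T[:, :3]        # camera frame
    in_front = points_cam[:, 2] > 0.1                          # keep points ahead of the lens
    points_cam = points_cam[in_front]
    pixels = (K @ points_cam.T).T
    pixels = pixels[:, :2] / pixels[:, 2:3]                    # perspective divide
    return pixels, points_cam[:, 2]
```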

And then, of course, going further, there are all these IoT and infrastructure sensors that could warn cars about accidents and about things on the street. But just for the status quo: for me, the only meaningful setup for truly autonomous cars, let's say Level 3 or Level 4, where you really have time to take over as a driver, is this multimodal sensor setup. I can understand that you want to get rid of sensors in terms of cost cutting.

And from a system perspective, it's always easier if you don't have too many different sensors. But you're losing so much on the perception side, and for me, as I said, this is something I can't understand.

I mean, they probably have a good reason, why they are following this approach, but it is something where I wouldn't feel comfortable sitting in the car and really using Level 4 autonomy functions. Vivek: Once they bring Level 4 autonomy, would they require more technologies like LIDAR and multimodal sensors around the car? Do you think they already possess some kind of technology for it, but are holding back because of business pressure, or because we have seen their cars have so many accidents while testing these pilot features?

They might be thinking: why would we launch it when the market isn't ready for it? Julia: Yeah. Just from the complexity of what we currently see: autonomous cars have been promised for more than ten years now, and I think the complexity was simply underestimated. When we are talking about really low-level autonomy, about assistance systems, we see things like ACC, adaptive cruise control, where the radar keeps the distance to the car in front and just follows it. That is one thing that can be implemented.
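For a rough feel of the adaptive cruise control behaviour mentioned here, keeping a distance to the car in front and following it, this is a deliberately simplified gap-keeping controller. The gains, time headway, and acceleration limits are illustrative assumptions, not values from any production system.

```python
def acc_acceleration(gap_m, ego_speed_mps, lead_speed_mps,
                     time_headway_s=1.8, k_gap=0.3, k_speed=0.8,
                     a_min=-3.0, a_max=1.5):
    """Command an acceleration that regulates the gap to the lead vehicle
    toward a desired time headway (very simplified ACC logic)."""
    desired_gap = ego_speed_mps * time_headway_s + 2.0   # 2 m standstill margin
    gap_error = gap_m - desired_gap
    speed_error = lead_speed_mps - ego_speed_mps
    accel = k_gap * gap_error + k_speed * speed_error
    return max(a_min, min(a_max, accel))                 # comfort / braking limits

# Example: 30 m gap, ego at 25 m/s, slower lead car -> brakes (clamped to a_min).
print(acc_acceleration(30.0, 25.0, 23.0))
```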

You still have to drive and be fully aware. But the step from there to autonomy Level 3 or Level 4, I think that gap was really underestimated. So I think all manufacturers will provide something like a traffic jam pilot for the highways, which works up to a certain velocity and simply supports the driver in the driving scenarios people usually don't like. I think this is also a big topic, especially when I think of German car manufacturers.

I don't know if you know, but in Germany, cars and the highway, and the question of having a speed limit, are really a thing, and the pleasure of driving is a big selling point for these cars. So I think we'll definitely see something like an autopilot for traffic jams, because that is something people don't enjoy and which is really exhausting.

That is something you can imagine. But how far it then goes towards real autonomy, I can't tell whether that is meaningful for regular passenger cars, or whether we then see a move to new mobility concepts, like people movers or shuttles or special setups which are really just car sharing inside cities. From a business perspective, I could imagine that once we have this traffic jam pilot, we will also see a move towards these urban mobility concepts.

What I could also imagine, when we are not talking about passenger cars but about trucks, is that we see more and more autonomy in these vehicles, because overall they're much more expensive and they are mainly driving on highways until they get to the urban parts. So if I should make a bet here: we will see these traffic jam pilots first,

and then hopefully we will see some new mobility concepts in the urban environment. Vivek: So what is your opinion of the simulation theory? Our favourite Elon Musk said something a while back about the simulation. Will we ever be able to live in a completely simulated reality using a VR headset, or just a device connected to our brain? The reason this is so interesting to me is that there are games now that, if we play them using a VR headset, are almost indistinguishable from reality. Then perhaps the only thing left to do is add emotions, using a device connected to my brain, so I can feel all those impulses as in a natural environment such as ours. Julia: Yeah, that is also a really difficult topic to talk about, or even to start thinking about.

What I can definitely say is what we have seen in these past two years: being at home, not having the social interaction, all the conferences being virtual, not meeting our peers. I think we can all say that humans are really social, have real social habits, and want to meet people in person.

It's not only about seeing these people, but really about interaction: communication, nonverbal communication, even just shaking hands or touching other people. This is something I think we all learned in the past two years, that we are missing this kind of thing. If you had asked me two years ago, I might have given a different answer than now, but based on this experience of social interaction and really meeting in person, I can't imagine that we will ever kick this out or get rid of it and live in a purely simulated environment. For me, this is impossible.

I think this is simply not going to happen, but maybe I'm completely wrong. Regarding simulation itself: yes, it is getting really realistic, and that's also what we use in machine learning to train our models.

It has a lot of benefits in that respect. But I can't imagine that just having a device inserting emotions, or what we are supposed to feel, means that we can, first, learn this without having experienced it in the real world; and second, I have the impression that people are really tending back to social events and social happenings, because we simply want to meet other people. And of course the question is how well we can simulate these feelings and emotions which we get from other people. At least with current technology? Not at all. No way.

Maybe we'll see something in the future which could do that. But speaking for myself, having this experience from the past two years, I can't imagine wanting to live in a truly virtual environment, just because of the social interaction. These realistic simulated environments and all this virtual reality bring so many benefits to other areas, we are talking about therapy, about physical training, there are a lot of benefits with simulation, but I think the one point where we can't get there is this social, interpersonal part. Vivek: And then Elon Musk is building Neuralink, which will put a device in our heads, a sort of device which would allow people who have Alzheimer's, or who have lost memories because of some accident, to have those memories recalled.

That can help them understand their world better. So yeah, I think you're right that there will be good applications of it as well. Julia: Yes, definitely. In that area, also talking about diseases affecting the brain,

we see a huge range of positive applications that such devices are really needed for, and as I said, I am really curious to see what's coming up in the future with new technology. Vivek: So I think I've exhausted all the questions that I had prepared for you.

Before we wrap up, do you have any parting thoughts? Julia: One part where we as the technical community can really make a difference: I think we are really at the cutting edge of research, developing great robots with a lot of helpful applications and really bringing in the ability to help people. We develop these applications and we see the benefit, but if people don't want to use them, we developed them for nothing.

So I think it's our common responsibility not only to develop this technology, but also to explain it to people and democratise all this knowledge of how we are building it, so that people are not afraid of these new developments and new applications, and so that society is as positive about new developments as we are. And I think this can only happen in a collaborative way.

That means really starting to explain the technology and not limiting it to the few people who, say, have a PhD in machine learning, but spreading the word, making courses, explaining it, volunteering, and giving people the room and the chance to ask all the questions they have about this technology. Because I think if you don't understand how it's working and really have no clue how to get your head around it, it's also kind of natural that you are afraid of it and that you reject it. And I think that would be a pity.

We would really be missing a big part of the future. Vivek: And it is not just from the tech side; there should be support from business,

support from everyone, to allow these technologies to evolve. And this is precisely the reason why I started the podcast: to make people more aware of what is going on in AI, and to explore how AI and tech will penetrate all of our lives, which they already have. We already keep our smartphones with us; tech is built all around us.

There are so many examples. If we want to live with it, and we will have to live with it in the future, then we should support the people who are building it for good, the technical community. But then, on the other side, there will be people spreading all these nonsense conspiracy theories about, you know, technology tracking us, or AI taking over, and all that.

But I mean, the good people should just keep doing their work. Julia: That's a really good point too.

So I think we also have to live with people just rejecting this technology, for whatever reason, and I think this must also simply be accepted.

The only thing I can't accept is spreading wrong theories and conspiracy theories about technical topics, which are simply wrong; that is no way to go. But I like to think, or I have this perhaps naive idea, that if we provide this room for open discussion, if we provide this education and clarity, if we give people room for questions, help them understand, and show them the benefits, then people will trust it more, get used to it, and also use it, and the people who spread conspiracy theories won't have any room for those theories.

It is about being open and accepting different opinions here. That is at least my view. Vivek: Absolutely, absolutely.

And the thing that you're doing, bringing women together in this cause, is just phenomenal. It should be supported everywhere in the world, because this is the talent that needs to come up, be taught, and be encouraged to participate more in the tech world, because that's going to be our future tomorrow. So kudos to you and your efforts.

Julia: Thank you, thank you. And great that you are hosting these interesting podcasts.

Vivek: Great, fantastic. Thank you so much for your time, have a great day, and hopefully in the future we can schedule some more rounds where we can ask you more questions on these topics. Julia: Sure, sure. Please go ahead.

Thank you for having me and for this really interesting discussion, and have a nice afternoon.
