Péter Fankhauser – Doctoral Thesis Presentation


Robots share our environments. In structured buildings like this one, in a car park, or in a warehouse, mobile robots can rely on wheels to move around efficiently on flat terrain. But there are a lot of people who work in environments that are not like that: they inspect mines and sewage systems, they work in industrial facilities with stairs, or, after an earthquake or another natural catastrophe, rescuers go into damaged buildings and put themselves into dangerous situations. Wouldn't it be great if we could help these people with a mobile robot? It is in these environments that a robot with legs can free itself from the constraints posed by wheeled and tracked vehicles: a legged robot can walk up and down steps, climb over obstacles, crawl underneath obstacles, and even go up and down stairs.

In my work I focus on the mobility of legged robots in rough terrain, and there are several key constraints we have to take into account. The robot makes contact with its feet on an intermittent basis, so it has to choose stable footholds in order to move around, and it has to make sure it does not collide with the environment. It also has to maintain stability at all times, avoid colliding with itself, and respect joint and torque limitations.

I will present my work in five parts. I start with the evaluation and modeling of range sensors, which the robot uses to perceive and understand its environment. I then look at terrain mapping, where the robot takes this sensor data and turns it into a map it can understand. Next I look at how to control the robot and, now that we have a map and know how to control the robot, how it can look ahead, decide where to step, and find a safe path over obstacles. Finally, I broaden the context and look at collaborative navigation between a flying and a walking robot to improve their navigation skills.

We start with the evaluation and modeling of range sensors. Here we look at different sensing technologies, and my goal is to model and understand the errors and the noise that we get from these sensors, because only by understanding them can we create high-quality maps from this data. Looking at the performance criteria, we have a robot with a sensor in front of it, and important characteristics are the minimum and maximum range, the horizontal and vertical field of view, and, importantly, the density, that is, the number of points we get per measurement. We also care whether the sensor works in sunlight and, as mentioned, what the resulting error is when taking measurements.

I have evaluated four different sensor technologies: structured light sensors, laser range sensors (lidar), time-of-flight cameras, and active stereo cameras. In the following I want to focus on the noise modeling of the Kinect v2 time-of-flight camera. This camera works by strobing infrared light onto the scene, from which it is reflected back to the sensor. By measuring the phase difference, we can measure the time of flight of each ray and create a depth image with a depth value for each pixel. What I am interested in, shown here for one ray, is the axial noise along the measurement ray and the lateral noise, which is perpendicular to the measurement ray.
To measure this, we set up an experiment with a planar target at different distances, which we can rotate; on the left you see a sample of the resulting depth image. From the top view, we use the angle theta to describe the rotation of the target and the angle alpha to describe the horizontal incidence angle of the sunlight. In this view you also see how we measure the axial noise along the measurement ray. From the front view, we are interested in the lateral noise, which we measure by looking at how sharply the edge of the target is reconstructed. Taking many measurements at different distances and angles, we get this plot, where we see how the noise increases with distance and with the target angle: the target angle is plotted on the horizontal axis, the noise on the vertical axis, and the colors are the different ranges at which we measured.
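The axial noise values in such a plot can be obtained from repeated depth frames of the static target: for each pixel on the target, take the standard deviation of its depth over the frames. The following is a minimal sketch of that computation, assuming the recorded frames are available as a NumPy array and that a rectangular region of interest on the target has already been selected; the array names and the ROI handling are illustrative, not from the original setup.

```python
import numpy as np

def axial_noise(depth_frames, roi):
    """Estimate axial noise as the per-pixel standard deviation of the depth
    measurements over repeated frames of a static planar target.

    depth_frames: array of shape (n_frames, height, width), depth in meters
    roi: (row_min, row_max, col_min, col_max) window on the target
    """
    r0, r1, c0, c1 = roi
    patch = depth_frames[:, r0:r1, c0:c1]       # measurements on the target
    valid = np.all(np.isfinite(patch), axis=0)  # drop pixels with dropouts
    sigma_per_pixel = np.std(patch[:, valid], axis=0)
    return np.mean(sigma_per_pixel)             # average axial noise [m]
```

The lateral noise would instead be estimated from the scatter of the extracted target edge, which requires the edge detection shown in the front view.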

From this data we came up with an empirical noise model, which predicts the noise at these different measurement conditions very accurately. For the lateral noise there is no such clear tendency, and we chose to model it as the 90th percentile of the measured values. If we go out into sunlight we see a different behavior: at more direct angles of the sun we have higher noise, and we can understand this by looking at Lambert's cosine law, which describes the light intensity as a function of the angle at which the sunlight hits the target; we therefore introduce an additional term that depends on the sunlight incidence angle alpha. Comparing indoors, overcast, and direct sunlight, with the axial noise on the left-hand side, we see that when we go outside the noise in sunlight is an order of magnitude larger than indoors. If we compare this with all the other sensors, we see a similar behavior. We can say that for the range we are interested in with a walking robot, from one to three meters, the noise is in a range that is acceptable for creating a useful map. We can also see that the noise varies from very low to high over these distances, so we need to take it into account in order to create the best-quality maps we can get.

Looking at other measurement characteristics, this table can inform us how to select a sensor for a mobile robot and a given use case. As one example, the PrimeSense sensor is not sunlight resistant and could not be used outdoors. Another important characteristic is the density at which each sensor measures; most are fine, but one is really lagging behind, and I explain that in the following example. Here is a top view of the robot, and you see the measurements taken during one rotation of the lidar sensor. In the close-up you see that it would take two seconds to cover a one-by-one centimeter area, and for points further away it takes up to 22 seconds to cover every cell. Compare this with the Intel RealSense camera, where we get so many points that close up there are up to two-and-a-half thousand points per square centimeter, which is far more than we need. We can approximate a sensor's measurement density with a model that depends on the surface normal and the distance from the sensor to the plane, and we can then use the inverse of this model to predict the ideal resolution at which the sensor should measure. For the RealSense in this case, we would lower the resolution to roughly 312 by 234 pixels to cover every cell at least once with one measurement.
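The measurement density model mentioned here can be sketched for an idealized pinhole-style depth camera: the number of points per unit area on a surface falls off with the squared distance and with the cosine of the incidence angle, and inverting this tells us how far the resolution could be reduced while still covering every map cell at least once. The code below is only that idealized version under these assumptions, not the exact model from the thesis; the focal length and cell size are example values.

```python
import numpy as np

def point_density(fx, fy, distance, incidence_angle):
    """Measurement points per square meter on a plane, for an idealized depth
    camera with focal lengths fx, fy (in pixels).

    distance: range from the sensor to the plane [m]
    incidence_angle: angle between the viewing ray and the surface normal [rad]
    """
    # one pixel covers (distance/fx) x (distance/fy) meters on a fronto-parallel
    # plane; tilting the surface stretches this footprint by 1/cos(angle)
    return fx * fy * np.cos(incidence_angle) / distance**2

def required_scale(fx, fy, distance, incidence_angle, cell_size):
    """Factor by which the image resolution could be reduced (per axis) while
    still covering every map cell of size cell_size with at least one point."""
    needed = 1.0 / cell_size**2                       # points per m^2 we need
    available = point_density(fx, fy, distance, incidence_angle)
    return np.sqrt(needed / available)

# example: RealSense-like focal length, 3 m range, 45 deg incidence, 1 cm cells
print(required_scale(fx=620.0, fy=620.0, distance=3.0,
                     incidence_angle=np.deg2rad(45), cell_size=0.01))
```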

Now that we understand these sensors, let us look at terrain mapping. Here the goal is to locally create a map online, a representation of the terrain from these sensors. We model the terrain as a 2.5D surface, an elevation map, and, importantly, we rely only on proprioceptive localization, which makes the approach much more robust because we do not need an absolute or external localization system.

Let us imagine the robot on the terrain at time t1; as it walks forward, it would like to create the map. In the classical view, sitting in the inertial frame and looking from the outside, we know from experience and from the literature that proprioceptive sensing, which relies only on inertial and kinematic data, drifts over time in position and yaw. So the position of the robot becomes more and more uncertain as it walks, here depicted as the orange robot. If we do nothing and map straightforwardly, we create inconsistencies in the map due to this drift, which is a problem for planning. In my work I propose to take a different approach and model the terrain from a robot-centric perspective. We now sit in the situation of the robot at the current time: the robot knows its current position exactly, but its past positions become more and more uncertain. If we do the mapping this way, we see that in front of the robot the map is very clear, while data the robot has not seen for a while becomes more uncertain, and we can introduce confidence bounds as upper and lower estimates of where we expect the real terrain to be.

Now let us formalize this. I have separated the work into two parts, data collection and data processing, and I will go through these steps. First we have the range measurements, and for each cell in which we have a measurement we have to transform it to a height. This follows straightforwardly from the range measurement vector: we transform it into the map frame and use a simple projection to obtain the vertical height of the cell. Importantly, we also want the variance of the cell, which results from the sensor measurement covariance, the noise model we evaluated in the first part, and from the sensor orientation covariance in roll and pitch, which we get from the legged state estimation. If a cell is empty, we simply fill in the data; if there is already data in it, we fuse the new data in the sense of a Kalman filter, computing the new height and the new variance of the cell as a variance-weighted combination, and with this filter we can create a consistent map.

This was getting the range data into the map; in the second part I introduce the errors that come from the robot motion. To do that, we transform the height variance of each cell into a full 3-by-3 covariance matrix. We can then take the robot pose covariance update from time k to k+1 and compute the new cell covariance by adding to the previous cell covariance this relative pose covariance, propagated through the corresponding Jacobians.

Now that we have a 3-by-3 covariance matrix for each cell, what we really want is the height together with a lower and an upper bound, so we look at each cell individually. I will explain this with the following illustration. Imagine this is a profile cut through an obstacle in the terrain, and we sample one point. We look at the error ellipsoid of that cell and build a probability density function from the cell and its neighbors, weighted empirically based on the distance from this cell to the neighboring cells. From this probability density function I can integrate to obtain the cumulative density function, from which I can sample the lower and upper quantiles and thus predict the minimum and maximum height expected at this position. I did this for one cell; doing it for the entire map gives us the confidence bounds.
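Going back one step, the per-cell fusion described above is a one-dimensional Kalman filter measurement update. Here is a minimal sketch in plain Python; the class and variable names are mine, not from the actual implementation.

```python
class Cell:
    """One cell of the elevation map: a height estimate and its variance."""

    def __init__(self):
        self.height = None
        self.variance = None

    def update(self, h_meas, var_meas):
        """Fuse one new height measurement (with variance var_meas) into the cell."""
        if self.height is None:                 # empty cell: just store the data
            self.height, self.variance = h_meas, var_meas
            return
        # Kalman-filter style fusion: the result weights each estimate by the
        # inverse of its variance
        total = self.variance + var_meas
        self.height = (self.variance * h_meas + var_meas * self.height) / total
        self.variance = self.variance * var_meas / total

cell = Cell()
cell.update(0.12, 0.010)  # first measurement: 12 cm, variance from the sensor model
cell.update(0.10, 0.020)  # a second, noisier measurement pulls the estimate down a bit
print(cell.height, cell.variance)
```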
Here is the same view in 3D: on the left is the original terrain, and on the right I have plotted these error ellipses from the top view. If we go through all of these cells, what we get is a correctly smoothed-out estimated terrain, shown on the left, where blue means we are more certain about a cell and lighter colors mean we are less certain; in the middle you can see the upper and lower confidence bounds, which tell us the maximum and minimum height the terrain is expected to have. Importantly, the robot can later choose to step on the more certain areas, because even though the terrain is uncertain overall, there are still positions where it knows it can safely step.

Now, here are two examples of the terrain mapping. On the left-hand side, the robot StarlETH does real-time terrain mapping indoors with a structured light sensor and a static gait; on the right-hand side is a completely different setup, outdoors, with a rotating laser sensor and a dynamic trotting gait on the robot ANYmal. In both cases the same principle applies, and we can see the difference in the quality of the maps.
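The lower and upper bounds shown in these maps could be computed per cell roughly as sketched below: the cell and its neighbors each contribute a Gaussian, weighted by their distance to the cell, and the bounds are read off as quantiles of the resulting mixture. This is a simplified sketch; the weighting scheme, the quantile levels, and the example numbers are my own assumptions.

```python
import numpy as np
from scipy.stats import norm

def confidence_bounds(heights, variances, weights, lower_q=0.05, upper_q=0.95):
    """Lower and upper height bounds for one map cell, built from the cell's own
    estimate and its neighbors, each contributing a distance-weighted Gaussian."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = np.asarray(heights, dtype=float)
    sigma = np.sqrt(np.asarray(variances, dtype=float))
    # evaluate the cumulative distribution of the weighted mixture on a grid
    grid = np.linspace((mu - 4 * sigma).min(), (mu + 4 * sigma).max(), 2000)
    cdf = sum(wi * norm.cdf(grid, m, s) for wi, m, s in zip(w, mu, sigma))
    h_min = grid[np.clip(np.searchsorted(cdf, lower_q), 0, grid.size - 1)]
    h_max = grid[np.clip(np.searchsorted(cdf, upper_q), 0, grid.size - 1)]
    return h_min, h_max

# example: a cell at 0.30 m with two lower, less certain neighboring cells
print(confidence_bounds(heights=[0.30, 0.10, 0.12],
                        variances=[0.0004, 0.0025, 0.0025],
                        weights=[1.0, 0.4, 0.3]))
```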

To analyze this more rigorously, we did the following experiment: we created a scene and scanned it as ground truth with a stationary laser scanner to obtain an accurate point cloud of the environment. In the video you see how the robot walks through this environment, and in front of it the scan it is currently taking, in comparison to the ground truth. For the evaluation, this is a top view of the terrain; on the left-hand side I show how the map evolves, and we look at one profile cut from the side in the right-hand plot. In the beginning, the estimated terrain, plotted as blue dots, and the real terrain, the black line, are very well aligned. As the robot walks, we notice that the terrain behind it starts to drift away, and in the final picture there is quite an error between the estimated and the true terrain. However, the method is shown to work, because the true terrain lies well within the confidence bounds of the estimated terrain.

So now that we have mapping, I want to show you how we control the robot. The overall goal was to create a controller for robust motion tracking of legged robots, and I focused on creating an interface that separates motion generation from control. How do we control a robot? Typically we use an interface such as a joystick or a computer screen, or we use motion scripts that we can replay on the robot; going a step further, we can create footstep planners or full motion planners for complex motions. These then somehow interface with the controller, which does the real-time tracking on the robot; here we see the classical loop of state estimation and controller that tracks the desired motion in real time. But every time we create a new interface, it is error-prone work. So I propose a universal interface for legged robot control, which I call the Free Gait API. This is a unifying interface where I define motions by sequences of knot values. In a second step, I can transition from the current state to the desired motion and spline through these knot points for trajectory generation. Then, in the real-time control, we sample this trajectory at the resolution we need, and finally we control the state of the swing legs and the state of the base.

I want to focus a little on this Free Gait API, which is a very important part of this work. The Free Gait API consists of two main motion types. One is the leg motion, which can be defined either in joint space or in Cartesian space for the end effector, here shown for the red leg. The other is the base motion, where we define the position, orientation, or velocities of the torso of the robot, which then automatically determines the motion of the legs that are on the ground. There are different command types: I can send a target, which simply says go to this position with your leg or with the base, and for more complex motions I can send full trajectories.
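To make the knot-value idea concrete: a motion is given by a few knot values, the framework splines through them, and the real-time controller simply samples that spline at its own rate. Below is a minimal sketch of that pipeline for the vertical motion of one foot, using a cubic spline from SciPy. This only illustrates the concept and is not the actual Free Gait implementation; the knot values and the 400 Hz rate are example numbers.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# knot values for a single foot: time [s] and height above ground [m]
knot_times = np.array([0.0, 0.2, 0.4])
knot_heights = np.array([0.00, 0.08, 0.00])   # lift the foot 8 cm and set it down

# spline through the knot points, with zero velocity at both ends
trajectory = CubicSpline(knot_times, knot_heights, bc_type="clamped")

# the real-time controller samples the spline at its own update rate (e.g. 400 Hz)
dt = 1.0 / 400.0
for t in np.arange(0.0, 0.4 + dt, dt):
    z_desired = float(trajectory(t))   # desired foot height at this control step
    # ... hand z_desired (and the other coordinates) to the whole-body controller
```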

I have also created this library with a set of automated tools. This means I can send the framework a target foot location, in the case of a footstep, and the robot will automatically figure out how to step there. The "base auto" command means that the base pose is generated automatically, given the current foothold situation, such that all footholds can be reached and the robot stands stably. From these elements of the API, we can mix and match commands as we wish. Here is a simple example of the robot walking: it uses the base auto command to make sure the base is always positioned correctly, and in turn uses simple footstep commands to walk. Another example: we use a joint trajectory to change the configuration of a leg and then use the end effector to touch something, so we can really take these elements and combine and parameterize them as needed. Importantly, all these commands can be expressed in an arbitrary frame, whichever is convenient for the task at hand. In summary, we have created an API for the versatile, robust, and task-oriented control of legged robots.

Let me illustrate this with the following example. Here we ask the robot to do three-legged push-ups, where one leg should stay in the air, and on the right-hand side you see the motion script we used to program this. First we use the base auto command to move the base into a stable position. Then we tell the right-front leg to move to a certain height in the footprint frame, which is the frame defined between the legs. Next we simply ask, with the base target command, to move the base to a height of 38 centimeters while keeping the lifted leg at the same position; the adaptation of the stance legs to this motion happens automatically. We then move the base up and down, here to 45 centimeters and back, and finally the foot is set back down to the ground along a straight profile. So with these roughly 35 lines of code, I have programmed the robot to do this complex motion through the API.
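The roughly 35-line script itself is not reproduced here; the listing below is a hypothetical Python rendering of the same command sequence, just to show its structure. The command and field names, the foot coordinates, and the 20 cm lift height are my own illustrative choices; only the 38 cm and 45 cm base heights and the ordering of the steps come from the talk.

```python
# Hypothetical rendering of the push-up sequence as a list of Free-Gait-style
# commands (data only; the real interface and its message names differ).
pushup_action = [
    # 1. move the base to a stable pose over the remaining three legs
    {"type": "base_auto"},
    # 2. lift the right-front foot to a fixed height in the footprint frame
    {"type": "footstep", "leg": "RF", "frame": "footprint",
     "target": {"x": 0.33, "y": -0.22, "z": 0.20}},
    # 3. push-ups: move the base while keeping the lifted foot where it is
    {"type": "base_target", "frame": "footprint", "height": 0.38},
    {"type": "base_target", "frame": "footprint", "height": 0.45},
    {"type": "base_target", "frame": "footprint", "height": 0.38},
    {"type": "base_target", "frame": "footprint", "height": 0.45},
    # 4. set the foot back down onto the ground along a straight profile
    {"type": "footstep", "leg": "RF", "frame": "footprint",
     "target": {"x": 0.33, "y": -0.22, "z": 0.0}, "profile": "straight"},
]

for command in pushup_action:   # in reality these are sent to the controller
    print(command)
```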
Now, when working in real environments, it is important that the robot tracks its motion with respect to the environment accurately. To show this, we pre-programmed a sequence of footsteps, shown here as blue dots on the ground in the world frame; the robot is localized with respect to the world frame by scan matching with the laser, and we start it several times from different positions and look at the result. You can see that after one step the robot already steps onto the desired locations, and when repeating this from different positions, the motion converges very quickly. It is hard to see, but even when a person pushes the robot, or when we later place a pipe on the ground that diverts it from the desired footsteps, the motion is tracked robustly under these disturbances.

We have used this in several projects. In terrain whose structure we know, the robot chooses from a set of motion templates to climb over obstacles and over gaps; either the right template is known from the environment or the user selects an adequate motion. Here we rotate the legs to step over large obstacles, and we can also climb very steep industrial stairs, inclined at 45 degrees. Since this is so flexible, we can go ahead and, for example, change the height to make the robot crawl by changing its leg configuration to a spider-like one; with this we can even go into pipes and use the flexibility of the robot to achieve these maneuvers.

One step further, we can do simple manipulation: here you see the robot pressing the button of an elevator. In this task we use the AprilTag you see in the video to determine the position of the button, and we simply tell the Free Gait API that this is where we want the robot to push. These are templated motions that we can choose from a library, but of course this interface is also meant as an interface for motion planning. On the left-hand side is work I did with my student, where we did kinematic whole-body motion planning in order to climb these stairs. We can also do highly dynamic maneuvers: on the right-hand side we see the robot jumping.

Now that we know how to control the robot, the goal is to put things together and create a locomotion planner that uses the map and these control capabilities to walk over rough terrain.

The goal is that the system works in previously unseen environments, that everything is fully self-contained so there is no external equipment, and that all planning happens in real time. This is the overview of the entire scheme, and I am going to go through it step by step. Here again we see the classical control loop of state estimation and whole-body controller, and we have seen in parts one and two how we use the distance sensors to create a consistent elevation map of the terrain. The locomotion planner then takes the terrain data at the current position of the robot and runs it through a set of processing steps in order to create a Free Gait motion plan, which is then executed, as we have seen in part three, through the whole-body controller. I am now going to focus on the locomotion planner and go through it step by step.

When we get the elevation map, we can process it and compute, for every cell, for example the surface normal, which will be important later. We can then process it with different quality measures, such as slope, curvature, and roughness of the terrain, to create a foothold quality measure telling us for each cell whether it is good to step on or whether it is dangerous. Finally, we can create a three-dimensional signed distance field in order to do fast collision checking.

First we want to generate the sequence of steps, and here is a top view: on the left we see the robot standing in an arbitrary configuration, and on the right we see the goal pose. The process works as follows: first we interpolate a set of stances between the start and the goal, then we move to the next stance and choose the appropriate leg, which gives us the first step. In the next iteration we do the same thing again: interpolate, choose the next stance, and choose the second step. Doing it this way, we can start from any configuration, which is nice, but also, since we recompute the interpolation every time, the motion converges to the proper skew between the left and the right legs, which is important for the stability and speed of the locomotion, and it is robust to deviations from the original plan. The motion generation always ends up in a square stance of the robot at the goal.

Now imagine the robot stands in front of this gap and the nominal footholds tell it to step right into the gap. Our goal is to adjust these footsteps in order to find safe and kinematically feasible footholds. So we sample all the candidates within a search radius and categorize them: first, there are candidates that are invalid from the terrain point of view but would actually be reachable by the robot, like the yellow ones here; then there are valid areas, shown in blue, which are fine to step on but not reachable by the robot; and finally there are positions that are valid from both the terrain and the kinematics point of view, and we choose the one closest to the nominal foothold as the adjusted foothold.
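This foothold adaptation can be sketched as a small search over candidate cells: sample candidates around the nominal foothold, keep those that are both valid in the foothold quality map and kinematically reachable, and take the one closest to the nominal position. In the simplified sketch below, the validity and reachability checks are stand-in callables for the real foothold quality map and the pose optimizer.

```python
import numpy as np

def adjust_foothold(nominal, candidates, is_valid_terrain, is_reachable):
    """Pick the valid and reachable candidate closest to the nominal foothold.

    nominal:    (x, y) nominal foothold from the gait pattern
    candidates: iterable of (x, y) cells within the search radius
    is_valid_terrain, is_reachable: callables (x, y) -> bool, stand-ins for
        the foothold quality map and the kinematic reachability check
    """
    nominal = np.asarray(nominal)
    best, best_dist = None, np.inf
    for c in candidates:
        if not (is_valid_terrain(*c) and is_reachable(*c)):
            continue
        d = np.linalg.norm(np.asarray(c) - nominal)
        if d < best_dist:
            best, best_dist = c, d
    return best   # None if no candidate is both valid and reachable
```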
Now we have to check this kinematic reachability, and how do we do this? This is done in the so-called pose optimizer, whose task is, given the foot locations, to find the robot base position and orientation that maximize reachability and stability. In the image, the goal is really, given those red dots at the feet, to find the base position and orientation such that all legs can be reached while the robot remains stable. We can formalize this as a nonlinear optimization problem, where the cost function penalizes the deviation of the current foot positions from a default kinematic configuration, as shown here by the difference between each foot and the corresponding foot position in the default configuration. We can then increase stability by also penalizing the deviation of the center of mass from the centroid of the support polygon, shown here as the support polygon on the ground. To constrain the solution, we add stability constraints, which ensure that the center of mass stays within the support polygon, and joint limit constraints, which make sure that the legs do not overstretch. We can solve this problem very efficiently as a sequential quadratic program, in roughly 0.5 to 3 milliseconds on the onboard PC of ANYmal. On the left-hand side you see a couple of examples of how, given only the footholds, the optimizer finds solutions that fulfill the kinematic and stability constraints; on the right-hand side you see an interactive demo where I drag the feet around and the pose of the robot is automatically adapted.
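A much simplified version of this pose optimization can be written with an off-the-shelf solver. The sketch below optimizes only the base position (not the orientation), uses the default-configuration cost plus a simple stability term, and enforces the support polygon constraint with SciPy's SLSQP method. All numbers are illustrative, and the real problem is solved with a dedicated SQP implementation.

```python
import numpy as np
from scipy.optimize import minimize

# measured foothold positions (x, y, z) in the world frame (illustrative values)
feet = np.array([[ 0.35,  0.25, 0.00],    # LF
                 [ 0.40, -0.20, 0.05],    # RF (on a small step)
                 [-0.35,  0.25, 0.00],    # LH
                 [-0.30, -0.25, 0.00]])   # RH

# base-to-foot offsets of the default kinematic configuration (base frame)
default_offsets = np.array([[ 0.33,  0.22, -0.45],
                            [ 0.33, -0.22, -0.45],
                            [-0.33,  0.22, -0.45],
                            [-0.33, -0.22, -0.45]])

def cost(base):
    """Deviation from the default leg configuration plus a stability term that
    pulls the base (a proxy for the center of mass) towards the foothold centroid."""
    default_term = np.sum((feet - (base + default_offsets)) ** 2)
    stability_term = np.sum((base[:2] - feet[:, :2].mean(axis=0)) ** 2)
    return default_term + 0.5 * stability_term

def support_margin(base):
    """Signed margins of the base xy position to the edges of the support
    polygon (the quadrilateral of the four feet); >= 0 means inside."""
    poly = feet[[0, 2, 3, 1], :2]          # feet ordered counterclockwise
    margins = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        edge, to_point = b - a, base[:2] - a
        margins.append(edge[0] * to_point[1] - edge[1] * to_point[0])
    return np.array(margins)

result = minimize(cost, x0=np.array([0.0, 0.0, 0.45]), method="SLSQP",
                  constraints=[{"type": "ineq", "fun": support_margin}])
print("optimized base position:", result.x)
```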

Now that we have the adjusted footholds, the last step is to connect the start and target locations with the shortest collision-free swing trajectory. Since we have parameterized the swing trajectory as a spline, we can optimize over its knot points, and we do this in an optimization problem where the goal is to minimize the path length while making sure we do not run into collisions, with the help of a collision function based on the signed distance field generated from the elevation map. Here is an example where we see the terrain from the side with the robot standing on it. With a low confidence bound on the terrain, the signed distance field, which tells us how close we are to the obstacle, is tight, and the typical solution is that the robot swings its foot smoothly over the terrain while staying collision-free. Now imagine we do not know the terrain that well and the confidence bound is much higher, for example for a hind leg. Then the collision field is bigger and the resulting swing trajectory is much steeper, which is nice: in an uncertain area the robot steps much more carefully, from the top down, to make sure it does not collide with the environment.

Putting things together, we did a comparison with a blind, reactive walking controller that we implemented. On the left-hand side the robot walks blind, takes big steps, and can only feel the ground through the contact forces. As we see in the success rate, this works well up to obstacles of 10 centimeters, but for higher obstacles it simply fails: the robot rams into the obstacle instead of stepping onto it. In comparison, on the right-hand side with active mapping, the robot steps much more confidently onto the obstacles; it takes the same step length but is faster in the execution of the motion, and we have shown that we can achieve walking over obstacles of up to 50 percent of the leg length.

In a somewhat more complex scenario, we see here the robot walking over stairs, but we do not tell it anything about the stairs; for the robot this is just arbitrary terrain. Here we use the stereo camera in front of the robot to create the elevation map: the blue areas are valid to step on, the white ones are not. You can see that the map is not perfect, yet our framework can robustly track the motion; in a second you will see a hind leg slip, but thanks to the replanning this is not a problem, and the robot simply continues from where it is. Since we have knowledge of the surface normals, we can feed them back to the controller and use the force-control capability of the robot to constrain the contact forces on the ground such that the robot does not slip on inclined surfaces like these.

The reactiveness of our approach is shown here as we throw stones in front of the robot: you see in the map up there how quickly the entire pipeline reacts, and since the replanning runs continuously, the robot can safely step over the obstacles that have been thrown in front of it. We can also show the robustness by pushing and pulling the robot or disturbing it in other ways; the robot uses localization to navigate to a global goal in the room, and although we strongly disturb it, it regenerates the step sequence to the goal location. Finally, to showcase the robustness of the approach, we walk over moving obstacles, over a person as a soft body, and even through a very narrow passage; this is foam, and the approach is really shown to be flexible in all of these environments and tasks.
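Coming back to the swing trajectory optimization at the beginning of this part: the signed distance field gives, for any point, its distance to the nearest obstacle, and the trajectory cost combines the path length with a penalty whenever a sample along the swing path comes closer than a safety margin. The sketch below shows such a cost for a sampled path; the distance field is a stand-in callable, and in the real planner the spline knot points are optimized against the field built from the elevation map, which grows in uncertain areas through the confidence bounds.

```python
import numpy as np

def swing_cost(points, signed_distance, margin=0.05, w_collision=100.0):
    """Cost of a sampled swing-foot path: path length plus a hinge penalty on
    samples that come closer than `margin` to the terrain.

    points: (N, 3) samples along the swing trajectory
    signed_distance: callable (x, y, z) -> distance to the nearest obstacle [m]
    """
    length = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    penetration = [max(0.0, margin - signed_distance(*p)) for p in points]
    return length + w_collision * np.sum(np.square(penetration))

# example: on flat ground the signed distance is simply the height above it
flat_ground = lambda x, y, z: z
path = np.array([[0.0, 0.0, 0.0], [0.15, 0.0, 0.10], [0.30, 0.0, 0.0]])
print(swing_cost(path, flat_ground))
```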
So now that we have a robot walking over rough terrain, I am going to broaden the scope a little and show the work we did on the collaborative navigation of a flying and a walking robot.

Here the idea is to use the different abilities of the two robots within one bigger framework. The motivation is that a flying robot has a very good viewpoint: it can quickly see the terrain from above and fly fast around it; however, it has limited sensing and payload capabilities and a limited operation time. On the other hand, a walking robot has a rather low viewpoint and is, compared to the flying vehicle, rather slow, but it can carry high payloads and rich sensing and has a much longer operation time than the flying vehicle.

This is the overview of the approach. I am not going to go into all the details; instead I want to convey the complexity that went into this work, together with many of my co-authors, by bringing all these technologies together, and show you the demonstration that resulted from it. The goal was to go from a start location to a target location where there is only one possible path, with obstacles in between. First we let the flying robot explore the environment; in the bottom-left corner you see that it creates a set of visual features, which are added in a simultaneous localization and mapping framework to a consistent map. We also use its camera images, with our elevation mapping framework, to create a dense representation of the entire terrain. These two maps are then transferred to the walking robot, which interprets them: it evaluates the traversability and finds a global path from the start to the goal location. It then starts tracking this path, and while it does so it uses its own onboard camera to localize itself within the map created by the flying vehicle; you can see here how it matches the visual features from its current viewpoint against the global map. The map is also updated continuously: we throw an obstacle in front of the robot while it walks, and since the map updates in real time, the robot replans its motion to adapt to the changing environment and makes it safely from the start position, with the help of the flying vehicle, to the goal location.

In conclusion, I have shown five contributions to rough-terrain locomotion with legged robots. First, I have evaluated a variety of sensor technologies and shown how they are applicable to mobile terrain mapping. I have modeled the noise of the Kinect v2 time-of-flight camera, which is very important for mapping; this framework can be extended to new sensors as they are released, and the knowledge about these sensors is applicable to other mobile robots as well. In the second part, I have shown a robot-centric formulation of an elevation mapping framework that explicitly incorporates the drift of the state estimation. We have open-sourced this software, and it has been used by many other projects, for example for mapping, navigation planning, autonomous excavation, and co-localization with 3D elevation maps. For control, I have shown a framework for the versatile, robust, and task-oriented control of legged robots. Our software has likewise been used in many applications, such as the ARGOS Challenge and the Emergency Challenge; we have created automated docking, and we have even made the robot dance, where it listens to music and generates dance motions based on it.
For locomotion planning, we have created a framework that enables the robot to cover rough terrain in realistic environments. Some of you might know these stairs: they are just outside this building, where we took the robot for a walk in Zurich on a rainy day, so this really shows the application of this robot in real-world settings. We walked up roughly 30 meters over a course of 25 steps. Lastly, I have put my work into a broader context and shown a framework for the collaboration between a flying and a walking robot, where they utilize their complementary capabilities as a heterogeneous team. With that, I would like to thank you for your kind attention.

2018-01-22 00:28


Comments:

Congratulations, an extensive and well-conducted piece of research!

Great work and a valuable research thesis. Really good simulations and results. Can you please share the name of the software you used to carry out the simulations?

The sound quality hurt my ear

We are super sorry about the sound quality, there was an error during the recording.

We use Gazebo for most of our simulations.

Thank you for your reply
