# Video Gallery

The video gallery on Simultaneous Localization and Mapping (SLAM) is presented below, with transcripts where available:

### 1. Simultaneous Localization and Mapping (SLAM)

How are autonomous robots able to navigate in an unknown environment? Simultaneous localization and mapping, or SLAM, is a technique used by autonomous vehicles to build and update a map and find their location in an unknown environment. Our robot uses SLAM to help it navigate even with very noisy data.

The goal of our project was to create a robot capable of navigating a walled environment, shooting a laser at green targets, and returning to its starting position when it is finished exploring. Most of our data for navigation comes from encoders, an on-board camera, or an IMU. All of these devices produce very noisy data and are capable of giving us serious problems if we're too close to a wall.

In order to navigate without crashing, our robot needs to know where the walls are. We use the camera for this task and created a blue line detector, because all of the walls have blue tape at the bottom. Our robot is able to see the walls by finding blue lines and projecting rays from the robot's camera to the bottom of the blue tape. We combine these points into lines and calculate their location in 3D space relative to the robot. We also use the camera to find green triangles. We do this by converting the image data to HSV, running union-find on the green parts, and looking for large results where the number of pixels is approximately half the size of the bounding box.

The IMU is used to determine our yaw, as it tends to return much better results for finding our current angle than our odometry data, and its error grows with time rather than with distance traveled. Our odometry uses the encoders on the two wheels to determine how far we've traveled and gets our location using dead reckoning. Using this alone to calculate our angle would be very problematic, as errors in theta build up quickly as you move.

To build the best map possible and determine our real location, our robot uses a graph-based SLAM algorithm. Each observation of a feature, like a triangle, puts a constraint on the graph based on the position of the feature. This constraint acts like a spring: the more certain we are of the observation, the stronger the spring is; high uncertainties result in weak constraints. By giving SLAM our odometry data and the observed feature locations, along with their corresponding uncertainties, we can solve the system to find the least-error solution.

One issue that creeps up is the re-observation of a feature: how do we know if the feature is a new feature or a previously observed feature? This is the problem of data association. We solve this by taking the Mahalanobis distance between the new feature and each previously observed feature, which helps us decide whether they are the same. We can see that the map created very accurately resembles the real world and allows us to decrease the effects of noise. This gives us a much better understanding of our current position and our environment, which helps our robot explore and meet the goal.
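The Mahalanobis gating step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the landmark estimates, covariances, and the gating threshold of 3 are all made-up example values.

```python
import numpy as np

def mahalanobis_distance(z, landmark_mean, landmark_cov):
    """Mahalanobis distance between a new observation z and a stored landmark."""
    d = z - landmark_mean
    return float(np.sqrt(d @ np.linalg.inv(landmark_cov) @ d))

def associate(z, landmarks, gate=3.0):
    """Return the index of the matching landmark, or None if z is a new feature."""
    best_i, best_d = None, gate
    for i, (mean, cov) in enumerate(landmarks):
        d = mahalanobis_distance(z, mean, cov)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# Two previously observed features with their uncertainty (hypothetical values).
landmarks = [(np.array([2.0, 1.0]), np.diag([0.1, 0.1])),
             (np.array([5.0, 4.0]), np.diag([0.2, 0.2]))]
print(associate(np.array([2.1, 0.9]), landmarks))  # close to landmark 0
print(associate(np.array([9.0, 9.0]), landmarks))  # far from both -> new feature
```

Because the distance is weighted by the landmark's covariance, an uncertain landmark accepts observations from a wider area than a well-localized one, which is exactly the "weak spring" intuition from the transcript.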

### 2. SLAM Robot Mapping – Computerphile

Basically I'm just going to do essentially what we did with the handheld version just now, except it's on the robot. Today we are looking at the Frontier device. It has an embedded computer, a multi-camera system, a lidar, and hidden inside there is an inertial measurement unit. The cameras work pretty much like our eyes. The lidar allows you to get a point cloud of the environment by firing lasers at the environment; by measuring the return time it gives you a metric measurement of what's around you. And the IMU works pretty much like our inner ear: it gives us an idea of where gravity is and how fast we are rotating. All of these sensors can be fused together to answer the main question in robotics, called SLAM: simultaneous localization and mapping. The localization bit is knowing where you are, and mapping is knowing what's around you. SLAM is pretty much a chicken-and-egg problem: you don't know where you are, you want to know where you are in the environment, but at the same time you also want to know what the environment looks like.

If you walk around, you're going to see an orange line that shows where we were before. So if I keep walking around and I come back to the same place I was before, I can show you the main ingredient of SLAM, which is loop closure. There you go. Here you see a red line. This red line means that the system, which recorded all our previous positions, recognized that we were back at the same place, and in this way we can correct all of the errors we have accumulated so far and get a better map by adjusting it, knowing that we were there before. The screen was just a quick visualization, but the whole problem is being solved in 3D. Here is the exact same problem we solved before: you see the table, we see the whole map being built in 3D, and the loop closure is right here. After just a very quick walk we have a complete 3D map of the environment, and we also have our past trajectory.

The map is mainly built using the lidar, because it gives you a continuous stream of point clouds, but to accurately know where we were we also need to fuse the IMU, because there is physics inside; the IMU tells us our acceleration, for instance.

"So IMU is inertial measurement unit?" Yes. If, for instance, there is an error of registration between two point clouds and we don't use the IMU, the system might think we teleported somewhere else. But the IMU is telling us: no, your acceleration was like that, so your velocity should have been like that, and you should be here instead of somewhere else.

"So if you used the IMU on its own, it'd be a kind of dead reckoning? It would be guessing?" Yes, there is a component of dead reckoning, but it is fused together with the other sensors. If you were doing dead reckoning all of the time, we would end up on the moon, because there are many small errors that get integrated over time. So the IMU is just giving us an idea of where we should be between two camera images or between two lidar point clouds, and then the measurements from the point clouds and the cameras correct those and are merged together.

So we can have a walk and go downstairs in the lab. We unofficially call it the bath lab, and the reason for that is that we have a huge elevator that can carry an entire car. Here we are going to show the quadruped section of our lab.

"What's funny is the camera can see some of the lidar." Yes, because it's near infrared and the cameras don't have infrared filters. There you go. Here we have different quadrupeds; today we're going to see the Spot. This is all the way from where we were before, with the round table and the loop closures, and we went downstairs. All of the computation is inside here, but for ease of visualization we have some interfaces on the phone and on the computer. Most of our software is compatible with ROS, which is a framework that is widely used in the robotics community; there are many libraries and programs ready to use to accelerate robotics. This one in particular is called rosboard, and it's running a little server on the computer; on the other side there is a client that gets the outputs from our algorithms as messages, and these messages are then relayed and visualized in the web browser interface.

"So the phone is just basically running that?" Yeah, the phone is just optional. We use it especially when we do handheld missions, because we want immediate feedback that everything is going all right, so we developed this interface. SLAM is very useful for robotics in particular because it allows us to know where the robot is and what the map around it looks like. So I'm just going to drive around the lab a little bit, and then we'll be able to see what the lab looks like. And now that we're done, again you can see we have a map of the environment and we have a loop closure. SLAM is essentially a key component in enabling robot autonomy, so a lot of the work that we're doing in remote inspection, mission planning, all of those kinds of things, relies on the ability of the robot to know where it is and the structure of the map in order to actually do useful things; nuclear decommissioning and those kinds of things is what we're looking at. The handheld has its uses, but if we can do this with a robot, we don't have to send people into the same places. Sometimes it's easier to access places with humans, so the handheld device is useful, but other times we prefer to use a robot; we have both options.

"This is the one we saw in Bristol?" Yes. On the front we've got what we call a Frontier device. It's running the exact same mapping software, but we're just not running the autonomy part of the system right now.

What I'm going to show is the algorithm that is running inside the Frontier, and the main technique it is using, which is called a factor graph. A factor graph is a way to solve this SLAM problem by modeling the unknowns as nodes. So we might have our position x1, and then position x2, position x3, and so on; this is what we want to know, in addition to landmarks, which are basically points in the environment that will eventually make up our map. We can think of the landmarks as l1, l2, l3, and so on. The measurements that come from the camera or IMU or any other sensor create relationships between these unknowns. The IMU might tell us: between x1 and x2 your position should have incremented by this amount, because your acceleration was this much over this period, therefore velocity and therefore position. At the same time you can have multiple IMU measurements, so you connect these. From x1 you might also see things from the camera, features in the environment, for instance a corner that you can track over time. This creates other relationships between our positions and the landmarks, and you can see the same landmarks at different times, so you can create these sorts of relationships. If you keep adding all of these constraints, then you have a strong idea of where you should have been and what was around you, because all of these relationships are merged into an optimization problem where the unknowns are exactly x1, x2, x3 and l1, l2, l3, and in this way the algorithm will find the best solution that makes sense of all of these relationships together.

Then the last bit comes in to solve the SLAM problem. Imagine that we walked a very, very long way, so this chain is very long, and at some point we are at time x50, while in reality we should be at x2. Because we have kept track of all of the measurements we have collected so far, and so have a map of them, we close the loop by saying: I'm exactly in the same place as x2, so I can create a relationship between x2 and x50. In this way all of the error we have accumulated up to x50 gets corrected, the map gets warped to match reality, and we have solved the SLAM problem.

It's oversimplified if I say the SLAM problem is solved; many professors would become angry. "You have solved some problem; in this specific situation, yes, but it's not generally solved?" Yes, in practice it is not yet solved, because there are many problems related to noise in the sensors and inaccuracy in the calibration, so there are techniques to make this solution more robust. If you have a SLAM system working, then you know where you are and what's around you; and once robots know where they are and what's around them, we can make them do autonomous things.
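The loop-closure correction described above can be illustrated with a toy one-dimensional factor graph. This is a simplified sketch under stated assumptions, not the Frontier's actual algorithm: the odometry measurements and sigmas are invented, the problem is 1-D rather than 3-D, and the stiff loop-closure constraint plays the role of the "relationship between x2 and x50."

```python
import numpy as np

# Unknown poses x1..x3; x0 is fixed at 0 as the anchor.
# Each constraint encodes x_j - x_i = m with standard deviation sigma:
# a small sigma is a stiff "spring" (strong constraint).
constraints = [           # (i, j, measurement, sigma)
    (0, 1, 1.1, 0.5),     # noisy odometry steps (should each be 1.0)
    (1, 2, 0.9, 0.5),
    (2, 3, 1.2, 0.5),
    (0, 3, 0.0, 0.05),    # loop closure: we are back where we started
]

A = np.zeros((len(constraints), 3))   # columns are x1, x2, x3
b = np.zeros(len(constraints))
for r, (i, j, m, sigma) in enumerate(constraints):
    w = 1.0 / sigma                   # information weighting
    if i > 0:
        A[r, i - 1] -= w
    A[r, j - 1] += w
    b[r] = w * m

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # odometry alone would put x3 at 3.2; the closure pulls it near 0
```

The accumulated drift (3.2 units by dead reckoning) is redistributed across all the odometry edges, which is the "map gets warped to match reality" effect.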

### 3. 2D / 3D Dual SLAM Robot using ROS and LiDAR with Raspberry Pi

### 4. SLAM – 5 Minutes with Cyrill

So what is SLAM? SLAM stands for simultaneous localization and mapping, and it is the task of estimating a map of the environment and at the same time localizing your sensor or your robot in that map that you're currently building. It's something that mobile robots need whenever they move through unknown or partially unknown environments, because they need to build a map of the environment and localize themselves within that map. If a map is given, then things are much easier: you just need to localize yourself. And if a pose estimate is given, then mapping is also relatively easy. But solving both problems together is much harder, and it's something that has been investigated in robotics since, let's say, the beginning of the 1990s.

We typically find different approaches to solving the SLAM problem, and we have to distinguish here between the front end and the back end. The front end is the part which takes the raw sensor data and turns it into an intermediate representation, such as constraints in an optimization problem, or a probability distribution about the location of, say, a landmark derived from sensor data. Transferring sensor data into an intermediate representation is very task-specific. The second part is the back end, which takes this intermediate representation and solves the underlying state estimation or optimization problem: estimating parameters that describe, for example, where objects are in the environment or where my platform is in the world.

In this back end we typically find, I'd say, three different categories of approaches. The first are the extended Kalman filters; this is how it started, at least in robotics. The alternatives are particle filters, or least squares, the so-called graph-based SLAM approaches, and those graph-based SLAM approaches are today the most popular ones. So if you build a new SLAM system, you would most likely start with a graph-based approach. In those graph-based approaches, as the name says, you use a graph to represent the variables and the relations between those variables. Of the different types of graphs out there, the two most popular ones today are either a pose graph, which is a graph that contains only the poses and marginalizes out the map information, or factor graphs, which have factors sitting in between the nodes, in between the variables; the information coming from the front end, or from other sources such as prior information, can be stored in those factors. They are both kind of similar in what you can do with them, and different optimization toolboxes use different internal representations for solving this least-squares problem. This is typically done using optimization techniques, and there's a large variety of parameters and options that you can choose. There are also toolboxes out there that you can use; the three most popular ones for the back end are probably g2o, GTSAM, and Ceres, three systems which help you solve your optimization problem. They either use factor graphs or, as most SLAM systems do, pose graphs as a special form of these graphs.

In pose graphs, for example, every node represents the pose of the sensor at a certain point in time, and the edges between those nodes represent the spatial relations that are extracted from the sensor data. The optimization system then tries to find a new configuration of the nodes so that the overall error introduced through the edges gets minimized, until it finds the minimum-error configuration, which is the least-squares solution. Again, this is a technique that is used whenever you need to build a map of the environment and localize yourself within that map: something that mobile robots need to do, autonomous cars need to do, UAVs need to do; every system which moves through the environment, especially if it does so on its own.

There are also strong links between visual SLAM and bundle adjustment. Bundle adjustment has been there for much longer: in the 1950s people from photogrammetry developed bundle adjustment, and you can see it as a special form of visual SLAM with a special form of constraint, basically the reprojection error of pixel coordinates. There are very strong similarities between those approaches; the SLAM formulation is typically somewhat more general, but it basically came, let's say, thirty to forty years after this had been developed in photogrammetry.
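A minimal pose-graph back end in the spirit described here, with a few 2-D position nodes, relative-displacement edges, and one loop closure, can be sketched with a general-purpose solver. This is an illustrative toy under invented measurements, not how g2o, GTSAM, or Ceres are actually invoked, and it optimizes positions only (no orientations).

```python
import numpy as np
from scipy.optimize import least_squares

# Edges: (i, j, measured displacement from node i to node j, weight).
# The measurements walk roughly around a unit square but are inconsistent,
# so the loop-closure edge forces the optimizer to redistribute the error.
edges = [
    (0, 1, np.array([1.0, 0.1]), 1.0),
    (1, 2, np.array([0.1, 1.0]), 1.0),
    (2, 3, np.array([-1.0, 0.1]), 1.0),
    (3, 0, np.array([0.0, -1.0]), 1.0),   # loop closure back to the start
]

def residuals(flat):
    """Stack one residual per edge: (x_j - x_i) - measurement, weighted."""
    nodes = np.vstack([[0.0, 0.0], flat.reshape(-1, 2)])  # node 0 anchored
    res = []
    for i, j, meas, w in edges:
        res.extend(w * (nodes[j] - nodes[i] - meas))
    return np.array(res)

sol = least_squares(residuals, np.zeros(6))   # initial guess for nodes 1..3
print(sol.x.reshape(-1, 2))  # minimum-error node configuration
```

The solver finds the node configuration that minimizes the summed squared edge error, which is exactly the "minimum-error configuration" the transcript describes; real toolboxes do the same thing with sparse solvers and full SE(2)/SE(3) poses.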

### 5. Lecture 11: Simultaneous Localization and Mapping (SLAM)

So this is 2.160, Lecture 11: simultaneous localization and mapping, or SLAM. So far we have discussed the Kalman filter in various forms, starting with discrete time and then continuous time, moving to the extended Kalman filter and then the unscented Kalman filter, and last time we looked at the Bayes filters. Today's lecture is a bit unique, because I want to slow down a little and talk about the application side.

Before going there, I want to point out a few things that many of you got confused about: a clarification on the linearized Kalman filter versus the extended Kalman filter. The idea of the linearized Kalman filter is to reduce the non-linear dynamic system to a linearized system and then apply exactly the same algorithm as the ordinary Kalman filter. The important point is that after linearization you get a linear time-varying system, but the Kalman filter covers that; by exploiting the time-varying nature of the formulation, even though the filter is linear, we can to some extent deal with non-linear dynamics. A very important point here is that we linearize along a nominal trajectory that has been fixed; in other words, it is a prescribed time function x*(t), which is given and never changes. We take values from this nominal trajectory to evaluate the Jacobians and obtain the A matrix and the matrices associated with it, and the rest is exactly the same as the ordinary Kalman filter. Note that what you estimate is the perturbation of the state, delta x, which is a local coordinate; to get the global coordinate you have to add x* from the nominal trajectory back on. That is how the linearized Kalman filter works, and obviously its weak point is that if the nominal trajectory and the actual trajectory differ, if things are not going as expected, the error becomes significant.

To cope with that difficulty the extended Kalman filter came in, and it is in a sense a totally different algorithm from the original Kalman filter; it does not guarantee optimality, since the system is fundamentally non-linear. We evaluate the Jacobians F_t and H_t at the estimated state x-hat instead of at a prescribed time function x*, and more importantly, the state propagation is non-linear, and the predicted sensor output is also a full non-linear function. As a result the estimated new state is in global coordinates. The only linearized part is the Kalman gain K_t, which depends on the Jacobian H and on P_t; the correction made based on the prediction error therefore uses a gain that is linearized-model-based. The estimated state x-hat_t and the predicted output y-hat_t are both in the original coordinate system, not deviations from a reference, and that is a big difference from the linearized Kalman filter.

Now, because this method uses a gradient, a Jacobian approximation, it sometimes causes error. The unscented Kalman filter completely gets rid of the derivatives, the Jacobians; instead we use samples, the sigma points, and each sigma point is propagated through the full non-linear dynamics.
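The sigma-point idea can be shown with a minimal unscented transform. This is a generic textbook sketch, not the lecture's exact formulation: the simple kappa weighting and the toy quadratic measurement function are assumptions for illustration.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=2.0):
    """Propagate a Gaussian (mean, cov) through a non-linear f via sigma points."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)            # sigma-point spread
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    Y = np.array([f(s) for s in sigma])                  # full non-linear propagation
    y_mean = w @ Y                                       # weighted sample mean
    diff = Y - y_mean
    y_cov = (w[:, None] * diff).T @ diff                 # weighted sample covariance
    return y_mean, y_cov

f = lambda x: np.array([x[0] ** 2])                      # toy non-linear function
m, P = unscented_transform(np.array([1.0]), np.array([[0.25]]), f)
print(m, P)  # mean 1.25, variance 1.125 -- exact for this quadratic;
             # linearizing at x=1 would instead give mean 1.0, variance 1.0
```

No Jacobian appears anywhere: the mean and covariance come purely from the propagated samples, which is the point the lecture is making.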
The observation equation, the measurement equation, is also a full non-linear function, so we just propagate each sample point through it, and then we take the weighted average to get the mean, and the covariance, based on these sample distributions. The Kalman gain K_t is also based on samples: P_y is the innovation covariance and P_xy is the x-y cross-covariance matrix. Again, the updated state estimate x-hat_t is in global coordinates, and no Jacobian is involved in the unscented Kalman filter.

One weak point of the unscented Kalman filter is that, even though we originally assume the noise is Gaussian, after it goes through a non-linear process the distribution is skewed, sometimes having multiple modes, multiple peaks; the unscented Kalman filter approximates this with a Gaussian, so sometimes there is a major gap between reality and the method. In fact, sometimes we have very clear double peaks, and approximating that with a unimodal Gaussian gives very erroneous results. To improve on this, we give up the idea that the estimate is a single quantity; instead we accept multiple modes, or rather we estimate the full distribution, the PDF of the state we want to estimate. This is called the belief, and the belief is propagated with the Chapman-Kolmogorov equation and updated based on new measurements using Bayes' rule.

This is a very powerful tool, because we can deal with non-Gaussian, non-linear dynamical systems. However, the cost you have to pay is heavy computation. You have to compute, for instance, the Chapman-Kolmogorov equation to propagate the belief of x_{t-1} to the a priori belief of x_t, and this needs a lot of computation: to find the probability density at a given x_t, which can be reached from many points in the previous state, you have to integrate over all of them, as shown here, and this must be repeated for every single point along the state axis. If you discretize with, say, 1000 points here and 1000 points there, you have to repeat the computation for a million points, and if you have a 10-dimensional state the computation explodes further, and all of this at every time cycle. It is heavy computation, and tomorrow we are going to discuss the particle filter, which computes this more efficiently. So that's the story so far.

What we'd like to do today is talk about one of the major applications, SLAM, and how the Kalman filter is used for this type of application. Self-driving cars are still very popular, although car companies are now slowing down a little, and maybe only the big IT companies can continue the current pace of investment in self-driving car development. Nonetheless it's going to be a huge market, six trillion dollars worldwide, so it's a big one. Looking at history, it is not one hundred percent clear who did this first, but to my knowledge, in the late 80s at the Carnegie Mellon Robotics Institute, Chuck Thorpe and colleagues put a lot of computers in a van and drove it around a nearby park without a driver; that would be the first very successful, visible report that I know of. The term SLAM, simultaneous localization and mapping, was basically coined and proposed by Professor Hugh Durrant-Whyte and John Leonard; at that time our colleague John Leonard was a student supervised by Professor Durrant-Whyte. They were the first to formulate this in a very solid way and then actually build the systems and demonstrate them. That is vintage work; the paper dates to 1991.

The key concepts: localization, basically estimating the position and orientation, the pose, of a vehicle, in two-dimensional space in particular; and mapping, generating and updating a map. They proposed that localization and mapping have a very synergistic effect: a more accurate map can localize the robot more accurately, and a more accurate localization can generate a more accurate map. As the robot goes around gathering more information, it can better understand the environment and improve the map accuracy, and as the robot gets a more accurate map, it can locate itself more accurately. Imagine that you go to an unfamiliar building: maybe you go through a hallway, checking how many rooms or pillars you pass. Before you enter such a new building you have some rough image of the environment; there may be some rooms along the floor. As you step in and look around, you get a more accurate mental map. Originally you have no clue as to the specific dimensions and lengths of the hallway and so forth, but as you walk through it you get a much better understanding, and then, if you want to go to room 351, you can check the room numbers to get there. This process is very similar to how humans find a place to go, and now it is mathematically and methodologically formulated to make that happen.

The key idea is that we want to estimate some of the landmarks, some features like walls or edges in the room, as parameters to be estimated. If the environment is static, so the map doesn't change, then we can treat these parameters, the locations of the walls, the pillars, and other landmarks, as unknown but constant. What comes to mind when you want to estimate such constant unknowns? Recall that the Kalman filter is very flexible, and there is no fundamental difference between state estimation and parameter estimation. So what you need to do is simply augment your state space: not only the robot pose x, y, theta, including orientation, but also the parameters involved in the map. We will discuss this a little later, but say, for the location of a wall, if you have some knowledge about it, you can parameterize it with parameters alpha and r. Then you can write many of these landmark parameters into the state vector; the state is extended, and its dimension becomes much higher than the original three. So this is how we formulate SLAM. That brings me to the outline of today's lecture.
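The state-augmentation idea just described, growing the state vector beyond (x, y, theta) as landmarks are added, can be sketched as follows. The class, numbers, and 2-D point-landmark parameterization are hypothetical; the lecture's wall parameterization (alpha, r) would augment the state in exactly the same way.

```python
import numpy as np

class SlamState:
    """Robot pose (x, y, theta) augmented with landmark parameters."""

    def __init__(self):
        self.x = np.zeros(3)           # [x, y, theta]
        self.P = np.eye(3) * 0.01      # pose covariance

    def add_landmark(self, lx, ly, sigma=10.0):
        """Augment the state with a new landmark, initially very uncertain."""
        self.x = np.concatenate([self.x, [lx, ly]])
        n = self.P.shape[0]
        P_new = np.zeros((n + 2, n + 2))
        P_new[:n, :n] = self.P                 # keep existing covariance
        P_new[n:, n:] = np.eye(2) * sigma      # large initial uncertainty
        self.P = P_new

s = SlamState()
s.add_landmark(2.0, 3.0)
s.add_landmark(5.0, 1.0)
print(s.x.shape, s.P.shape)  # state grows from 3 to 7 dimensions
```

Because the landmarks are constant, the filter's process model leaves their entries unchanged; only the measurement updates (and the growing cross-covariances) refine them over time.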
I have just introduced the basic SLAM formulation; next we go through the individual steps involved in localization based on the Kalman filter. In fact there are many steps involved, and there are many variations in how the Kalman filter, the extended Kalman filter, the unscented Kalman filter, as well as the Bayes filters, are applied to this class of problem; a great deal of work has been reported in this field. I start with a simple vehicle kinematics problem: how the plant is non-linear, how we model it, and how the state equation and process noise are formulated. Then we talk about map representation, with a very simple example, because this is a huge area and we need to save time, so we look at a very simple case. We also describe a range sensor, lidar in particular, then measurement prediction, then matching, the data association step, which we'll talk about later, and then the state update. And this segues into tomorrow's lecture on the particle filter and the more advanced Rao-Blackwellized filters and many other filters.

So let's start with vehicle kinematics. Some of this material overlaps with the other subject I teach, 2.12, Introduction to Robotics; these are similar materials, except that I am presenting them in discrete-time format, which is a little different. The objective here is to see what sort of non-linear dynamics comes in, and how we quantify and formulate the plant dynamics as well as the uncertainties, the noise model.

We deal with a simple vehicle, the kind you have been playing with: two powered wheels, right and left. If they rotate at the same speed, it goes straight forward; if the right and left wheels have different velocities, it turns. The basic kinematic equations: if you take the average of the two wheel displacements over a short time, delta s_r and delta s_l, you get the forward displacement of the vehicle. At the same time you have to consider the rotation of the vehicle, which is geometrically determined. To quantify the rotation delta theta, draw these two lines in parallel: this distance is delta s_r minus delta s_l, and the distance between the two wheels, the tread, is denoted by b; so you can find the angle delta theta as (delta s_r minus delta s_l) divided by b. That is the rotation of the vehicle.

Now, this is a slightly tricky system: it is called a non-holonomic system, and it cannot move sideways. It can move forward and rotate, but it can't move sideways, so some restrictions apply. But as far as the x-y coordinates are concerned, you can find these two equations: delta x, the change in the x coordinate, is the forward displacement times the cosine of theta. Because this is a discrete-time form, a slightly better discretization is to use theta plus one half of delta theta: delta theta is determined by delta s_r and delta s_l, and theta changes halfway through the step, so half of delta theta is added. This gives a better discretization, and the same applies to delta y using the sine function.

So, to build the state equation in global x-y coordinates, I just cut and paste delta x, delta y, and delta theta.
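The wheel-odometry update just described, with the half-delta-theta (midpoint) discretization, can be written as a short sketch; the wheel displacements and tread value below are made-up examples.

```python
import numpy as np

def odometry_step(pose, ds_r, ds_l, b):
    """Differential-drive dead-reckoning update (midpoint discretization).

    pose: (x, y, theta); ds_r, ds_l: wheel displacements; b: wheel tread."""
    x, y, theta = pose
    ds = (ds_r + ds_l) / 2.0            # forward displacement of the vehicle
    dtheta = (ds_r - ds_l) / b          # rotation of the vehicle
    x += ds * np.cos(theta + dtheta / 2.0)   # theta evaluated halfway through
    y += ds * np.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = odometry_step(pose, 0.11, 0.09, 0.5)   # right wheel moves farther
print(pose)  # the vehicle curves to the left (theta grows positive)
```

This is exactly the dead-reckoning model whose theta error accumulates step by step, motivating the process-noise model introduced next.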
t plus one is the current you know oppose x t y t theta t plus the delta uh something that we just found in the previous slide so obviously this one is a non-linear system so we apply extended common filter extended column filter so when we propagate on the next step we use this equation directory now for the purpose of obtaining the common game um and the covariance so we need to take the um linear approximation right so and taking the the linear approximation this is jacobian and one tricky thing is that you know uh this guy is basically uh you know um so the first term is first x component equation this guy differentiated with the xt and actually i'm so sorry this xt is actually state the variable vectorial xt and then this one the scalar actually this is about the uh abuse of notation but i know i hope you you find it actually you know clear so you know this is vector so first the components under this guy a divide differentiated with xt that's one and this one has no yt such a zero and a third one uh differentiate this guy with respect to theta t zeta t we happen here so minus v t sine theta t plus one over two delta theta t okay and likewise you can actually differentiate these in three and then you get the three by three matrix and interestingly uh your input uh basically the delta s t delta i delta s r and l are involved in here so um it is actually you know included and to compute that actually you know it is required to include those you know input into a anyway so this is like a key equation to need it for applying the uh extended comma filter okay now um noise uh processing noise how noise comes in what the sort of noise is how we can quantify it okay so um you know this is one of the methods you know uh not necessarily the general but i know um otherwise it's up to us how we really uh in a model you know uh the model dynamics so this may be a good example so we actually look at the individual wheel so it actually moves from here to here kinematically 
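The odometry update and its state Jacobian described above can be sketched as follows; the function name and calling convention are my own, but the equations are the ones just derived, with the midpoint discretization and Δθ = (ΔS_r − ΔS_l)/b:

```python
import numpy as np

def propagate(pose, ds_r, ds_l, b):
    """Differential-drive odometry update with midpoint discretization.

    pose       : (x, y, theta) current estimate
    ds_r, ds_l : right/left wheel displacements over one time step
    b          : wheel tread (distance between the two wheels)
    Returns the new pose and the 3x3 state Jacobian A_t used by the EKF.
    """
    x, y, theta = pose
    ds = 0.5 * (ds_r + ds_l)          # forward displacement (average of wheels)
    dtheta = (ds_r - ds_l) / b        # rotation from the wheel difference
    mid = theta + 0.5 * dtheta        # evaluate heading halfway through the step
    new_pose = np.array([x + ds * np.cos(mid),
                         y + ds * np.sin(mid),
                         theta + dtheta])
    # Jacobian of the transition with respect to the state (x, y, theta)
    A = np.array([[1.0, 0.0, -ds * np.sin(mid)],
                  [0.0, 1.0,  ds * np.cos(mid)],
                  [0.0, 0.0,  1.0]])
    return new_pose, A
```

Note how the inputs ΔS_r, ΔS_l appear inside the Jacobian through `ds` and `mid`, exactly as pointed out above.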
It is supposed to move a certain distance, but in fact, because of slip, roughness of the floor, wheel-control error, or whatever the reason, we have some error in ΔS_r and ΔS_l. The error is described by random noise terms w_r and w_l: ΔS_r is its mean value plus w_r, where w_r is zero-mean, and likewise for the left wheel. These are our noise sources, the process noise. Now we have to provide the covariance of this noise. We assume the two wheels are uncorrelated; this is nothing but an assumption, but people have determined through experiments that it is a very good approximation, so we basically ignore the cross terms. The diagonal terms increase in proportion to the distance traveled. The reason we put the absolute value is that the wheel may move backward, and a variance cannot be negative, so the variances are k_r|ΔS_r| and k_l|ΔS_l|, where k_r and k_l are constants we need to determine through experiments. Okay, now go back to the state equation: how does this noise influence the location x_t, y_t, and θ_t? The state transition is the deterministic part of the dynamics, and we would like to include the noise as an additive term. If v_t and Δθ_t were given noise-free, the equations would be deterministic; however, ΔS_r and ΔS_l are involved in them, and these may be perturbed by w_r and w_l.
So we can split v_t and Δθ_t into a noise-free part and a noise-corrupted term. The noise-free part has already been taken care of by the deterministic discrete-time state transition. What we need to do is find the effect of ΔS_r and ΔS_l being perturbed by w_r and w_l, that is, find the matrix G_t. Again we differentiate, this time with respect to ΔS_r and ΔS_l. The increment is a three-dimensional vector and the input is a two-dimensional vector, so the resulting matrix is 3-by-2. The first element is the Δx term differentiated with respect to ΔS_r; ΔS_r includes w_r, which is why we differentiate with respect to it. Behind v_t there is ΔS_r, and differentiating v_t with respect to ΔS_r just gives one half, so the first factor is one half times the cosine. Then the cosine term is also differentiated by the chain rule: first with respect to Δθ_t, and Δθ_t includes ΔS_r, so differentiating it brings out a minus sign from the cosine, a factor involving the tread b, and ΔS over 2. So this matrix is expressed in terms of ΔS_r and ΔS_l, and ΔS itself is a function of those. That is G_t. So we now have the two major components.
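A minimal sketch of the noise Jacobian G_t and the wheel-noise covariance under the assumptions above (uncorrelated wheels, variance proportional to the absolute distance); the function names are mine, and k_r, k_l are the experimentally determined constants:

```python
import numpy as np

def noise_jacobian(theta, ds_r, ds_l, b):
    """Jacobian G_t (3x2) of the pose increment with respect to the wheel
    displacements (dS_r, dS_l), via the chain rule through
    ds = (dS_r + dS_l)/2 and dtheta = (dS_r - dS_l)/b."""
    ds = 0.5 * (ds_r + ds_l)
    dtheta = (ds_r - ds_l) / b
    mid = theta + 0.5 * dtheta
    c, s = np.cos(mid), np.sin(mid)
    return np.array([
        [0.5 * c - ds * s / (2 * b), 0.5 * c + ds * s / (2 * b)],
        [0.5 * s + ds * c / (2 * b), 0.5 * s - ds * c / (2 * b)],
        [1.0 / b,                    -1.0 / b],
    ])

def wheel_noise_cov(ds_r, ds_l, k_r, k_l):
    """Process-noise covariance: the variance grows with the absolute
    distance traveled per wheel; cross terms are neglected."""
    return np.diag([k_r * abs(ds_r), k_l * abs(ds_l)])
```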
A_t has been obtained, G_t has been obtained, and we have the covariance of the original errors in the right and left wheels, the original noise covariance. The process noise comes in sandwiched within G_t from both sides, and that term enters P_{t+1|t}. As the vehicle travels a certain distance, this error covariance grows larger and larger, which of course makes sense: the covariance of the noise source is proportional to the distance traveled, so the uncertainty grows the farther you go. Also interesting to see is that the sideways error is larger than the longitudinal error. That also makes sense: if the heading is a little erroneous, that angle error times the distance traveled produces a sideways deviation, so the lateral uncertainty gets larger, and the distribution reflects that. Now, if the vehicle moves along a circular trajectory, the process-noise covariance results in an even larger a priori error covariance P_{t+1|t}; as the vehicle travels that way, it shows a very unique, skewed distribution. So if you quantify the noise dynamics with the vehicle dynamics in mind, it is quite interesting and very unique, and all of this is incorporated into the Kalman filter. Next, let's represent the map of the environment. What we do here is the simplest thing: we consider only a line representation. Walls, furniture, and so forth create lines, so taking the lines we obtain this kind of representation of the environment.
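The growth of the a priori covariance can be illustrated with a short simulation of P(t+1|t) = A P Aᵀ + G Q Gᵀ. The constants here (tread b = 0.5 m, noise gain k = 1e-4, step ds = 0.1 m) are assumptions for illustration only; driving straight along x, the sideways (y) variance ends up dominating, as described above:

```python
import numpy as np

# A priori covariance propagation P(t+1|t) = A P A^T + G Q G^T,
# driving straight along x, so theta = 0 and dtheta = 0 at every step.
b, k, ds = 0.5, 1e-4, 0.1
P = np.zeros((3, 3))                      # pose known perfectly at start
for _ in range(100):                      # 100 steps = 10 m traveled
    # Jacobians specialized to theta = 0, dtheta = 0 (straight run)
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, ds],
                  [0.0, 0.0, 1.0]])
    G = np.array([[0.5,          0.5],
                  [ds / (2 * b), -ds / (2 * b)],
                  [1.0 / b,      -1.0 / b]])
    Q = np.diag([k * ds, k * ds])         # noise grows with distance per step
    P = A @ P @ A.T + G @ Q @ G.T
# Heading error multiplies the distance traveled, so the lateral (y)
# variance P[1,1] grows much faster than the longitudinal P[0,0].
```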
We describe each line segment by its orientation and distance, and we use this as the map representation. The state of the art is 3D map building with locally tunable resolution, based on point-cloud data obtained from 3D scanners, 3D lidars, or depth cameras; that area of research and technology development is moving forward very quickly, but we just use the simplest one, 2D. A lot of image processing is involved in this area, and interestingly the Kalman filter is also used locally for image processing, in particular for integrating RGB or RGB-depth data. If you have multiple cameras and you want to estimate the location of a wall, for instance, you need some framework for sensor fusion, and the Kalman filter is used for that purpose too. Now, last week I released project number two, and in it we introduce the lidar, the light detection and ranging sensor. It uses a laser: the light is scanned, the point where the laser dot impinges is detected, and thereby we can detect the locations and distances of target points. By its structure, the lidar uses polar coordinates: the laser beam is scanned in one direction, and the distance is measured where the light first intersects or impinges on an object. That is the information we get from this sensor, and the sensor is naturally nonlinear. So, using this lidar information in polar coordinates, how can we detect a wall and find the parameters involved? First of all, a wall is described in polar coordinates; let me use the following representation.
Suppose a straight line, the wall, is sitting out there. What we would like is to describe this line in terms of the distance r from the origin to the wall and its orientation α, measured from the x axis; α indicates the direction of the unit normal vector (cos α, sin α). If you pick any point (x, y) on this line and take the inner product between that coordinate (x, y) and this unit vector, it gives the projected length, which is r. So taking the inner product of (x, y) with (cos α, sin α) gives r, or expanding, x cos α + y sin α = r. Now suppose the lidar detects a point on the line. The lidar information is in polar coordinates: a distance ρ and an orientation θ locate the spot. If this spot is on the straight line, it has to satisfy the condition above: substitute x = ρ cos θ and y = ρ sin θ into the equation and, massaging it, you get a simpler form. More simply, the difference between θ and α is the angle θ − α; taking the cosine of θ − α and multiplying by ρ gives exactly the distance r, so the equation is ρ cos(θ − α) = r. Now suppose you have a total of N data points, a point cloud, indexed (ρ_i, θ_i) for i = 1 to N. What you would like to do is fit the least-squares straight line. The distance between any detected point and this line is given by the same expression.
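The line condition can be turned into a signed residual that is zero exactly when the point lies on the wall; a small helper sketch (the name is mine):

```python
import numpy as np

def line_residual(rho, theta, alpha, r):
    """Signed distance from a lidar point (rho, theta), given in polar
    coordinates, to the line x*cos(alpha) + y*sin(alpha) = r.
    A point exactly on the wall gives zero."""
    return rho * np.cos(theta - alpha) - r
```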
The equation ρ cos(θ − α) = r is the case when the distance is zero; for a point away from the line you get a residual d_i = ρ_i cos(θ_i − α) − r. We square d_i, sum over the points, and take the argument of the minimum with respect to α and r, to find the straight line, represented by α and r, that best fits these sample points of the point cloud. Okay, now one more twist: since the data points come from the lidar, recall that the lidar accuracy is pretty good at short range but becomes more and more inaccurate as the distance grows. So why not take into account the accuracy, the variance, of each individual detected point? The distance is related to the variance, so we weight each squared error d_i² from the straight line by 1/σ_i². A large σ_i means unreliable data, which is scaled down; a small σ_i, at a shorter distance like this point, gets more credit, a larger weight. Let me just remind you that this ellipse indicates a two-dimensional covariance, the sensor-signal covariance. Unlike the process noise, where the sideways error is larger and the longitudinal error smaller, the sensor covariance is the opposite; remember I showed a little while ago that for odometry the longitudinal direction is small and the sideways direction large, but here it is reversed. Anyway, distance is a critical factor in quantifying the accuracy of the measurement, so let's take it into account. Now, as you repeat this curve fitting for different sets of data, the fitted line varies.
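One numerical route for the weighted minimization above, sketched here under my own naming, is weighted total least squares: the wall normal (cos α, sin α) is the eigenvector of the weighted scatter matrix with the smallest eigenvalue, and r is the projection of the weighted centroid onto it. This is a standard approach, not necessarily the closed form used on the slides:

```python
import numpy as np

def fit_line_polar(rho, theta, sigma=None):
    """Weighted total least-squares fit of (alpha, r) to polar points.
    Each point is weighted by 1/sigma_i^2, so far (less accurate) lidar
    returns count less; sigma=None means equal weights."""
    x, y = rho * np.cos(theta), rho * np.sin(theta)
    w = np.ones_like(x) if sigma is None else 1.0 / np.asarray(sigma) ** 2
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - xm, y - ym
    S = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                  [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    vals, vecs = np.linalg.eigh(S)
    n = vecs[:, 0]                      # eigenvector of smallest eigenvalue
    alpha = np.arctan2(n[1], n[0])
    r = xm * np.cos(alpha) + ym * np.sin(alpha)
    if r < 0:                           # keep r >= 0 by flipping the normal
        alpha += np.pi
        r = -r
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to (-pi, pi]
    return alpha, r
```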
Sometimes the line is best fitted one way, sometimes it comes out as the orange line, sometimes the red line, so you see some variance among the fits. We can therefore conceive an error covariance of the parameters α and r. As you repeat the fit many times you get many straight lines, each parameterized by α and r, and you can quantify that error covariance with the 2-by-2 matrix with entries σ_αα, σ_αr, σ_αr, σ_rr, where σ_αα is the variance of the α parameter and σ_rr that of the r parameter. Notice how much we have cooked the original signals: we processed the raw data in many ways and ended up with α and r, related to the wall we want to detect. In a sense, you can imagine a special sensor that detects the wall directly: that sensor's output is α and r, not the original lidar signals. I would call it a virtual sensor. This virtual sensor produces α and r, and for that sensor we quantify the error covariance in this way. So think of it that way: we have special sensors that produce α and r, one per line segment. Okay, so where are we now? We have been talking about the measurement side, because the measurement here is not a simple one-point measurement; we collect many, many measurements from the lidar, RGB cameras, and so forth, and they are converted into a landmark, a marker, in the environment contained in the map. That is the story of y_t. Now this y_t must be compared with ŷ_t. What is ŷ_t? It is something you should be able to predict based on the current a priori state.
From that predicted position you should be able to generate the sensor signals; that prediction is built upon your model and your current estimate of location, and it is to be compared with y_t. The physical meaning of y_t here is the wall parameters: the direction of the wall, α, and the distance, r. These two are to be compared. In doing so, we first have to create ŷ_t, and ŷ_t is based on x̂_{t−1}. How do we do that? The point is the measurement prediction. Suppose you were previously at x_{t−1}, and you propagate the state from that point; then x̂_{t|t−1} is the predicted position. If you are at this predicted point, what wall parameters are you supposed to receive? Looking at a particular wall, you should be able to see it in a certain direction α and at a certain distance r; that is what you expect to receive from your wall-detection sensor. So we need to create α̂ and r̂, and since there are multiple walls we put a superscript j to denote the j-th wall. The idea is just this: suppose you are at the predicted pose, including your orientation, so your local coordinate frame points in some direction. From that frame you should be able to see wall j in a certain direction, at distance r̂ and orientation α̂. What is involved in this computation is nothing but a coordinate transformation based on the predicted position and orientation.
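The coordinate transformation for the predicted wall parameters can be sketched as below. The convention, subtracting the heading from the wall direction and projecting the robot position onto the wall normal, is my assumption, consistent with the polar line representation x cos α + y sin α = r used above:

```python
import numpy as np

def predict_wall(pose, alpha_w, r_w):
    """Predicted (alpha_hat, r_hat) of wall j, given in the world frame
    as (alpha_w, r_w), as seen from the predicted robot pose.
    A pure coordinate transformation."""
    x, y, theta = pose
    alpha_hat = alpha_w - theta                         # rotate into robot frame
    r_hat = r_w - (x * np.cos(alpha_w) + y * np.sin(alpha_w))  # shift origin
    return alpha_hat, r_hat
```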
So, with that: we started with point clouds and identified lines. You may detect many lines, all the possible line segments here and there, and if you plot those lines in the (α, r) parameter space you get candidate points, maybe four or five lines described there, each with a different variance and covariance, just as we quantified before; remember that we quantified the variance in terms of the parameters r and α. This is important information about how reliable each line is. The next thing is that, if you have multiple walls, you need to distinguish which one is which. That is called data association. At time t − 1 you had an estimate here and another there; now, at time t, you get your predicted walls here, here, and here. These are the walls, sometimes visible, sometimes invisible because of occlusions; one wall may be far away with no correspondence at all. Among the many possible walls, you have to make the right match between the observed ones and the predicted ones. When they are very distinctive it is easy, but when the distances get close it becomes a little tricky, so you have to use the right tools. Again, we want to take into account the reliability of each measurement, quantified by the ellipse coming from the error covariance: we weight the difference by means of the error covariance, and the resulting distance is called the Mahalanobis distance. It is a quadratic form.
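A minimal sketch of gated nearest-neighbor association using the Mahalanobis distance d² = ν S⁻¹ νᵀ, where ν is the difference between a measured and a predicted (α, r) pair. The gate value of 9.0 and the single shared S⁻¹ are simplifying assumptions; in practice each pair has its own innovation covariance:

```python
import numpy as np

def associate(meas, preds, S_inv, gate=9.0):
    """Match each measured (alpha, r) to the predicted wall with the
    smallest Mahalanobis distance, rejecting matches outside the gate
    (occluded or spurious walls then get no match)."""
    matches = {}
    for i, z in enumerate(meas):
        best, best_d2 = None, gate
        for j, zhat in enumerate(preds):
            nu = np.asarray(z) - np.asarray(zhat)   # innovation
            d2 = float(nu @ S_inv @ nu)             # squared Mahalanobis distance
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            matches[i] = best
    return matches
```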
The difference is weighted by the inverse of the error covariance. A note on the computation: the reference I follow uses row vectors, so I put the transpose in a spot that looks like the wrong notation, but in fact it is not, because this quantity is a row vector; the expression represents a quadratic form and is a scalar. If that squared quantity is smaller than a certain threshold, we can say that the i-th measured parameters and the j-th predicted parameters are the right match, and we can compare the prediction against the actual measurement for that pair. Remember we have many walls, so we have many pairs of measured and predicted parameters; it sounds like we have many sensors. Again, our sensor is the special virtual sensor producing α and r, and this is the covariance we use for it. With that we can finally update the state. The previous state was somewhere here; we moved to x̂_{t|t−1}, the final prediction, with error covariance P_{t|t−1}. Putting these together, we can find the Kalman gain from the usual expression: the gain is obtained from the innovation covariance together with H and P. And this has to be a meaningful comparison: you must compare the matched pairs of walls. With that, you can make the correction to the current state and the landmarks. This, then, is the extended Kalman filter: although the gain computation is governed by the linearized dynamics, for propagation we use the original nonlinear equations instead. And this y is actually a long vector.
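The correction step with a matched wall can be sketched as the standard EKF update; the function name and argument layout are mine:

```python
import numpy as np

def ekf_update(x, P, nu, H, R):
    """One EKF correction with a matched wall pair.
    nu : innovation z - z_hat for the matched measurement
    S  : innovation covariance H P H^T + R
    K  : Kalman gain P H^T S^-1
    """
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ nu                        # correct the state
    P = (np.eye(len(x)) - K @ H) @ P      # shrink the covariance
    return x, P
```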
We have so many walls in it. And this is the way we update the filter and the gains. So, to summarize SLAM: localization and mapping have a synergistic effect when estimated simultaneously. The basic technique is to augment the state variables by including the feature parameters; in this derivation we used the line parameters α and r in addition to the original state, the pose x, y, θ, and we have many pairs of them. We predict the expected wall parameters α̂ʲ and r̂ʲ based on the previous estimate, and they are compared with the matched actual measurements. We cycle through that process to update both the current position and the parameters involved in the system. Kalman-filter-based SLAM is generally very fast to compute. We assume Gaussian noise, but in reality measurement noise and process noise are non-Gaussian, which gives some tricky cases. The dynamics are nonlinear, so we just used the extended Kalman filter; but as I said, the Kalman filter is limited in dealing with rapid changes and strong nonlinearity, and there it gets into a bit of trouble. Many people use the unscented Kalman filter instead when facing sharper nonlinearity, and that is often the way to go. Also, in some cases computing the Jacobian is not that simple, and the UKF gets rid of that requirement as well. Now, I have to say that SLAM is limited to rather slowly changing environments: we assume that the parameters, the landmarks, the wall locations, are basically stationary. But in fact, think of a busy environment; even on a factory floor, many things are brought in and many things are brought out, and the map changes. The big goal is for the robot to locate itself based on those landmarks.
If the landmarks themselves change, the robot sometimes arrives at wrong reasoning, a wrong estimate. The basic assumption behind SLAM is that the environment changes much more slowly than the robot finds its own location in it. How to deal with rapidly changing environments is one of the important open questions people are now working on. SLAM is also limited in its parameter representation: 3D environments are often a challenge, and people sometimes have trouble there. Let me show you a few videos and then entertain questions. This video is rather old; it is from the company I established, working on this kind of thing. The vehicle is looking around, moving through this environment and augmenting the map. As it visits the many corners it gains more information, the map is augmented, and details are added; it shows the process of how the map is augmented and improved in its details. So the map is revised and becomes more accurate as the robot moves around and more images are obtained. And even while the map is not yet complete, the robot can still go around purposefully: if you specify a certain destination, it finds more environment information on the way and can arrive at the destination. In this next video the robot is here, and it indicates, though it is a little difficult to see, a green light marking the destination. Let's see how it works. Initially the robot is facing this direction; it moves around, sniffing around a little.
It detects a bit more information, a trajectory is set, and off it goes. As it goes it gathers more information; it can get through even a narrow corner and arrive at the destination. Then another destination, point 3, is appointed with the green light; immediately a trajectory is planned and the task is executed. The robot keeps detecting the whole environment as it goes around, even through a narrow doorway; when it is not too sure, it goes slowly. Here is the doorway, and it goes out into the hallway. I think that is good enough; that is the technology. Well, maybe I should stop my talk here, entertain some questions, and then discuss project number two a little. You are all a little quiet today. Ah yes, a question about long-form notes: for this particular lecture, and two or three lectures like it, I have no complete written notes. This one is more or less just applications, so I did not prepare them, but I think it is clear, just applying the material, so you can work on project number two and so forth. Unfortunately I don't have notes for it. Any other questions? It is a smaller gathering today with the irregular schedule, so I don't see many students, but if you have any questions please just speak directly. Still quiet; let me see. I posted the slides, so you can check some of the details there, but that is all about today's contents. So, back to project number two. It is to be discussed this Friday and is due the following Monday, and it has two questions. The first question is pedestrian tracking. This is not actually navigation.
It is one key component, though, and maybe I should describe the scenario. The robot is to be deployed to a busy environment like this, train stations or airports. The robot has to move around safely, and it should not impede the flow of the people: you have to find a path that interferes with people the least, so the robot is not a burden, not an obstacle, to others. Moving in the same direction at the same speed as other people is best; it should be able to move in the same way as an average pedestrian. Those are the important functional requirements for this vehicle. Now, the first thing is how to detect the people surrounding you. This is not just a snapshot measurement identifying the locations of those people; you also have to estimate and predict the course and speed of each individual pedestrian. If you see a person moving from here to here, you keep an eye on them, because the robot might collide with that pedestrian; you keep track of them. You can assign a Kalman filter to each pedestrian and estimate their locations, their future locations, and their velocities, and based on that you can design and create a trajectory for your robot. It has to avoid collisions with obstacles and still make progress toward its destination. So the first thing is to know where these moving targets are and where they will be a certain time later; that is the first question, part one of the project. The data has been prepared for you to download.
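The per-pedestrian prediction described above can be sketched with a constant-velocity Kalman prediction step; the state layout (x, y, vx, vy) and the simple noise model scaled by q are my assumptions, not the project's prescribed model:

```python
import numpy as np

def cv_predict(x, P, dt, q=0.1):
    """Constant-velocity prediction for one tracked pedestrian.
    State is (x, y, vx, vy); q scales the assumed acceleration noise."""
    F = np.array([[1.0, 0.0, dt,  0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    # crude process-noise model (assumption): position noise grows as dt^3
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])
    return F @ x, F @ P @ F.T + Q
```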
We use the lidar data, so it is in polar coordinates, and that is the first challenge: you have to work in polar coordinates. We want you to implement the extended Kalman filter and then the unscented Kalman filter for predicting a single pedestrian's trajectory. Then we have a somewhat more complicated situation, because multiple pedestrians are moving around, and the very tricky part, maybe you can see it here though it is not too clear, is occlusion: if you are sitting here you can see these people, but you cannot see the person behind them. Sometimes a person walks in an oblique direction and, at a certain point, the person behind another walker is no longer visible, occluded. That kind of thing happens. In question (c), if you download the data, you have two pedestrians detected by the lidar, but the challenge is that for a certain time period the person behind the other is not visible. How do you deal with that kind of situation? Suppose you are walking along the Infinite Corridor; sometimes students walk close together and occlusion happens, and occasionally a pedestrian abruptly shows up after having been occluded, or suddenly becomes invisible. But when you watch multiple people and one becomes invisible, you know that the person was previously walking in a certain direction at a certain speed, so you can predict that after a certain time he or she should show up at a certain point, at a certain velocity. Of course, this is based only on the previous estimate; no new
information is obtained. So what you do is basically run an open-loop simulation for multiple time steps: while that pedestrian is not visible, you cannot assimilate new data to update the location, so instead you have to keep your simulation going. This requires a little thinking about how to deal with such a situation. Also interesting, and let me put it this way: imagine that for a certain time you have no data. Not only are you unable to update the estimated location of the pedestrian, you cannot update your covariance matrix either. The covariance gets worse and worse, because the repeated use of propagation makes the a priori error covariance larger and larger. This is one scenario that the extended Kalman filter cannot capture that well; I think it is a challenge for the unscented Kalman filter too, but we will see what happens. That is something you should keep in mind. In both cases, but in the second case in particular, when worst comes to worst the estimate tends to diverge; the linearization based on the Jacobian used in the EKF may be especially vulnerable in such situations. Okay, the second part is localization. The robot is in a room; I tried to use a more realistic room, but to make your effort simple I used a very plain room with a total of six walls, here and here, and the robot moves around, locating itself more and more accurately as it goes.
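The open-loop behavior during occlusion can be sketched with an assumed 1-D constant-velocity model and assumed constants (dt = 0.1 s, q = 0.01): the state keeps propagating, but with no updates the position variance grows at every step:

```python
import numpy as np

# While a pedestrian is occluded there is no measurement to assimilate,
# so we keep running only the prediction step, open loop.
dt, q = 0.1, 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])                      # (pos, vel) model
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
x = np.array([0.0, 1.0])        # last estimate before occlusion: 1 m/s
P = 0.01 * np.eye(2)
trace = [P[0, 0]]
for _ in range(20):             # 2 s of occlusion, no updates possible
    x = F @ x
    P = F @ P @ F.T + Q
    trace.append(P[0, 0])
# The pedestrian should reappear near x[0] = 2.0 m, but with much
# larger position uncertainty than before the occlusion.
```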
This part very much follows today's lecture, starting with the uncertainty quantification with respect to wheel slip: as the vehicle travels a longer distance, the error variances get larger, so the variances of Δs_r and Δs_l are taken to be proportional to the distance traveled; we use this model. Start with the initial conditions, which are also given in the data you download (it indicates the initial conditions and so forth), and obtain the predicted output, the measurement, for the initial step. Basically, you start with the original map information, which may be incorrect, and an initial estimate of your position, which is propagated into the position x̂(1|0). Then just ask yourself: supposing the robot is here, where should each wall be located, viewed from this particular local coordinate frame? Polar coordinates are defined relative to the vehicle. This is simplified a little: we use the global coordinate frame, θ is measured from this line, and each wall is described by an angle α and a distance r. So if you know the estimated position, you can compute the predicted α and r, and this is to be compared to the actual measurement. Now, the measurement side is a little more involved. First, the data you download is basically a collection of points all around the robot, so the first task is segmentation: segment the lidar data into the individual walls by hand. There are some nice tools for automatic segmentation, but in this case let's do it by hand.
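The predicted (α, r) for a wall can be sketched with the standard line-feature measurement model. The function name and the world-frame (α_w, r_w) wall parameterization here are my assumptions, not taken from the homework data.

```python
import numpy as np

def predicted_wall_measurement(pose, wall):
    """Predict the (alpha, r) of a wall line as seen from the robot.

    pose: (x, y, theta), robot pose in the world frame (assumed layout)
    wall: (alpha_w, r_w), wall in world-frame normal form, r_w >= 0
    The angle is rotated by -theta, and r is reduced by the projection
    of the robot position onto the wall normal.
    """
    x, y, theta = pose
    alpha_w, r_w = wall
    alpha = alpha_w - theta
    r = r_w - (x * np.cos(alpha_w) + y * np.sin(alpha_w))
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return alpha, r

# Robot at the origin, facing +x, wall 2 m ahead with normal along +x:
a, r = predicted_wall_measurement((0.0, 0.0, 0.0), (0.0, 2.0))
print(a, r)  # -> 0.0 2.0
```

This predicted pair is what gets compared against the measured (α, r) extracted from the segmented lidar points.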
If you look at the data, you can easily find discrete changes; around data points number 55 to 59, for example, there is a jump in the location, so the first run of points represents wall number one and the next run represents wall number two. I ask you to do this by hand just to simplify the problem. The main part is then to estimate the wall feature parameters, α and r, from that segmented set of point clouds, and to obtain the initial measurement y(1); this is basically the wall pose. From this you create your predicted output, compare it to the actual output, and update the state. In doing so, since we have multiple walls, we again need data association, and that is a little hard. The assignment expects you to do this data association on your own, manually, because by doing it yourself you really understand what you are doing and how this kind of algorithm really works. Then obtain the measurement error covariance R(1), form the Kalman gain K(1), these are the standard techniques, and finally update the state. I want you to repeat this two or three times, a few times, just to get to know how this method really works. Now, an extra-credit job is to somehow automate the data segmentation and the data association; those are the two parts that can be automated. Many open-source software packages are available, in the ROS community in particular, and on GitHub too many packages are listed that you can play with.
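Estimating α and r from one segmented run of points can be done with a total-least-squares line fit. This sketch (function name and conventions assumed) uses the SVD of the centered points, which is one common approach rather than the assignment's prescribed method.

```python
import numpy as np

def fit_wall(points):
    """Fit a line in (alpha, r) normal form to an Nx2 array of lidar points.

    Total least squares: the wall normal is the direction of least
    variance (last right singular vector of the centered points);
    r is the projection of the centroid onto that normal, flipped
    so that r >= 0.
    """
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                  # centroid of the segment
    _, _, vt = np.linalg.svd(pts - c)     # principal directions
    n = vt[-1]                            # normal = least-variance direction
    r = float(n @ c)
    if r < 0:                             # enforce the r >= 0 convention
        n, r = -n, -r
    alpha = float(np.arctan2(n[1], n[0]))
    return alpha, r

# Noiseless points on the vertical wall x = 2:
wall = np.array([[2.0, y] for y in np.linspace(-1, 1, 20)])
a, r = fit_wall(wall)
```

The fitted (α, r), expressed in the robot frame, plays the role of the measurement y(1) that is compared against the predicted output in the update step.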
But the key point is that, just as with the image sensors, getting the information out takes a lot of work: as long as we use the walls as landmarks, we need to relate the original point cloud to those landmarks in some meaningful way. That is the thing you need to do. In this week's discussions I hope each group can work on that to some extent and exchange ideas; I know that some of you are in this field, so you can take the lead and share your experience with the other members. Now, SLAM beyond the standard Kalman filter. One issue is dealing with non-Gaussian noise properties, and another is dealing with nonlinear process dynamics without linearization. The Bayes filter is a natural choice: it can deal with nonlinear dynamics and non-Gaussian noise, but it is very heavy in computation. Tomorrow I will talk about the particle filter, which is basically a Monte Carlo implementation of the Bayes filter. Even when we use particle filters, sometimes too much computation is involved, so a clever idea is to combine the particle filter with the Kalman filter. The Kalman filter is computationally more streamlined: the Bayes filter predicts the entire distribution, while the Kalman filter assumes a Gaussian, a single-peak, unimodal estimate, just a pointwise estimate, so everything is more streamlined. Even the UKF uses just a small number of sample points, called sigma points, compared with a particle filter that deals with thousands of points, so it is much more streamlined. So the idea is to still use the Kalman filter
for those state variables associated with dynamics that are more or less linear, and with noise distributions that are more or less Gaussian. For those you do not have to push things to a Bayes filter or a particle filter; you can use the Kalman filter. Combining the two, the Rao-Blackwellized type of formulation, is very effective. OK, that is all for today. Any questions? [Alexandro] Does the accuracy of SLAM depend heavily on how good the stored map is? What if the stored map is not complete, or if several objects in the environment have been moved around? Yes, that is a good question. We have to start with some level of fidelity in the original map information; otherwise, as I showed you in the video, the robot at first does not move very much, but scans around the environment and gathers a bit more relevant information before moving. Of course, the dilemma is that you cannot get rich information unless you move around, but in order to move around you need some minimal information. So the threshold, the minimum information needed in the map, is whatever allows the robot to move around a little bit, even if it is not heading to its destination very efficiently; it is something that lets the robot move around safely and gather at least the nearby information that can be obtained. For all kinds of estimation theory, including SLAM, as far as the method is recursive, we need to start with some initial conditions. If the initial condition is far off, or, in the case of a more theoretical analysis, does not meet the basic requirements, for instance a symmetric, positive definite matrix for the
initial covariance, which is a fundamental requirement, then there is no way to run the recursive computation properly. It is a slightly different story, but for the map, too, you have to provide a certain amount of operating knowledge. On the other hand, if the map is almost perfect, there is nothing left to estimate: you do not have to estimate the parameters involved in the map, and you can concentrate on the localization. Even if the features you can detect are very limited, a map that is well defined and of high accuracy makes things much easier for the robot. Now, the second question is basically about dynamic environments, and that is a little hard. One thing that has already been done quite well, to some extent, is collision avoidance: if something is approaching you, you set up a Kalman filter for that object, keep tracking its motion, predict its future positions, and thereby find some pathway to avoid a collision. That deals with a radically changing environment, but it is very specific to collision avoidance, so it is not that hard. What is a little harder is an environment where you have to look at certain corners or landmarks to reach a certain point, but the landmarks change, because sometimes people move them. That really happens in real environments, like factory automation; it is really needed, and it is still an open question. People are working on that. Also, the lidar laser does not reach very far, about 20 meters at the maximum, and the farther away a point is, the more difficult it becomes, and so
they have to use RGB cameras, which can see a little farther but provide much less depth information. So there is still a long way to go. But I can say this about self-driving cars: you will have to wait, and I do not recommend that you buy a new self-driving car immediately; that problem is hard. In a more structured environment like a factory, however, it is already happening, and a drastic change is happening there these days, so this is real technology. Oops, I have overrun a little bit, so I have to close my talk today. I will see you tomorrow at one o'clock; I will cover particle filters, and that will conclude the estimation part. OK, take care. Bye.
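As a preview of the particle filter mentioned for tomorrow, here is a minimal bootstrap (sampling-importance-resampling) sketch on a scalar toy problem. The random-walk dynamics, Gaussian measurement likelihood, and all parameters are illustrative assumptions, not material from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, z, motion_std=0.1, meas_std=0.2):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles: 1-D array of state samples (toy scalar state)
    z: scalar measurement, illustrative model z = x + noise
    """
    # Predict: propagate each particle through the (trivial) dynamics
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight each particle by the measurement likelihood
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Track a stationary toy state x = 1.0 from noisy measurements:
p = rng.normal(0.0, 1.0, size=2000)        # broad initial belief
for _ in range(30):
    z = 1.0 + rng.normal(0.0, 0.2)
    p = particle_filter_step(p, z)
print(p.mean())                            # concentrates near 1.0
```

Unlike the Kalman filter, the particle set can represent multimodal, non-Gaussian beliefs, at the cost of carrying thousands of samples; that cost is what motivates the Rao-Blackwellized combination discussed above.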

### 6. Object SLAM for Augmented Reality

### 7. Simultaneous localization and mapping with some augmented reality

### 8. Augmented Reality and SLAM | AI Incorporated

### 9. SD-SLAM with AR