LIDAR-based 3D SLAM for autonomous vehicles

Learn more about using LIDAR to obtain 3D point clouds for 3D scene reconstruction.

LIDAR (light detection and ranging) originated from the idea behind sonar (sound navigation and ranging), which in turn was inspired by the echolocation used by some marine animals and by bats. The main difference between LIDAR and sonar is that LIDAR uses light beams (laser beams) instead of sound.

Sonar works excellently in water because sound travels far better there than light or radio waves. Sonar sensors appear in cars today as parking sensors: they detect obstacles at short range, up to about 5 meters. They are cheap and effective for short-range measurements, yet they cannot serve self-driving applications that demand accurate measurements at 60 meters or more.



Radar (radio detection and ranging) works in a different way. It uses radio-frequency waves to measure distance and can accurately detect and track objects as far as 200 m away.


Radar has few downsides: it can be used in almost any environment, and it is affordable relative to other technologies. Radar is used mainly for detection and tracking. It cannot provide the detailed point cloud of the environment that LIDAR does, but the two sensors are rather complementary.



LIDAR was first used in 1971 during the Apollo 15 mission, when astronauts used it to map the surface of the moon. The idea became practical after the development of laser technology, which made it possible to generate and direct high-quality light beams.

Image: LIDAR hardware

Fig. 1 LIDAR hardware


LIDARs have gained higher resolution and, in turn, longer range, so new applications have emerged that make use of them. LIDAR can help you map, localize and track, which is the dream of self-driving enthusiasts all over the world.




As we have discussed before, SLAM (simultaneous localization and mapping) is concerned with two simultaneous problems: building a map of an unknown area, and navigating through the environment using that map. We have already covered how visual SLAM is done using cameras and segmentation neural networks. This time we will shed light on SLAM techniques fed by LIDAR sensor data.


SLAM consists of multiple parts: landmark extraction, data association, state estimation, state update and landmark update. Each of these parts can be solved in many ways, and several factors influence the choice for each one: what robot will be used (its motion and kinematics), whether the environment is indoor or outdoor, and consequently whether 2D or 3D vision is needed.
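To make these parts concrete, here is a deliberately simplified, runnable sketch of one SLAM iteration in Python. The landmark extraction, association gate and pose correction are toy stand-ins of my own invention, not any real SLAM algorithm; they only show how the listed stages fit together.

```python
import math

def extract_landmarks(scan):
    # Toy "landmark extraction": keep 2D points closer than 5 m.
    return [p for p in scan if math.hypot(*p) < 5.0]

def associate(observed, known, gate=0.5):
    # Nearest-neighbour data association with a distance gate.
    matches = []
    for o in observed:
        best = min(known, key=lambda k: math.dist(o, k), default=None)
        if best is not None and math.dist(o, best) < gate:
            matches.append((o, best))
    return matches

def slam_step(pose, scan, landmark_db):
    """One iteration: extract, associate, update state, update landmarks."""
    observed = extract_landmarks(scan)
    matches = associate(observed, landmark_db)
    # Toy state update: nudge the pose by the mean landmark error.
    if matches:
        ex = sum(k[0] - o[0] for o, k in matches) / len(matches)
        ey = sum(k[1] - o[1] for o, k in matches) / len(matches)
        pose = (pose[0] + ex, pose[1] + ey)
    # Landmark update: remember unmatched observations as new landmarks.
    matched = {o for o, _ in matches}
    landmark_db.extend(o for o in observed if o not in matched)
    return pose
```

Real systems replace every one of these stubs: association is gated by covariance, and the state update is done by a Kalman or particle filter rather than a plain average.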


Image: 2D versus 3D SLAM

Fig. 2 2D vs 3D SLAM


In indoor environments, the robot usually encounters only static objects, so 2D SLAM is typically used. Outdoors, the car or robot encounters many dynamic objects that must be detected, which calls for 3D SLAM.

The reason most people doing SLAM use LIDAR as the measurement device is that it is very accurate and its output does not require high computing power to process. A main downside of lasers is that readings can be corrupted in the presence of glass objects, which reflect or refract the beam. Lasers are also found to perform poorly underwater for the same reasons.



The first step is to scan the environment from the initial vehicle pose using the LIDAR sensor. This data contains the distance to certain landmarks as well as the angle to each landmark relative to the initial position. These landmarks are distinctive points, such as obstacles in the local environment, combined with their distances and angles to the vehicle.


During movement, at every new position the LiDAR collects new data about the environment. The significant landmarks are extracted again and associated with the landmarks the vehicle has previously seen. Based on the observed points, the vehicle can update its position. This observation is based on the generic observation function y = h(x) + v, where y is the noisy measurement, x is the full state, h() is the observation function and v is the measurement noise.
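For a range-and-bearing sensor such as a LiDAR, h() returns, for each landmark, its distance and relative angle as seen from the current pose. A minimal sketch of such an observation function (my own illustration, not taken from any particular SLAM library):

```python
import math

def observe(pose, landmark, noise=(0.0, 0.0)):
    """Range-bearing observation y = h(x) + v for one landmark.

    pose     -- (x, y, heading) of the vehicle (the state x)
    landmark -- (lx, ly) position of the landmark
    noise    -- (range_noise, bearing_noise), the measurement noise v
    """
    x, y, heading = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    rng = math.hypot(dx, dy) + noise[0]                # distance to landmark
    bearing = math.atan2(dy, dx) - heading + noise[1]  # angle relative to heading
    # Normalise the bearing into (-pi, pi]
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return rng, bearing
```

Inverting this function for several landmarks (or linearizing it inside a Kalman filter) is what lets the vehicle correct its pose estimate.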

This data is used to populate an array of the distance values detected around the LiDAR. Some LIDARs cover 270 degrees of view and others a full 360 degrees, so at an angular resolution of 0.25 degrees the LiDAR publishes, every 25 milliseconds, an array of 1080 double values (270 * 4) or 1440 values (360 * 4).
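Such a range array is straightforward to turn into 2D points: each entry corresponds to a known angle given the sensor's angular resolution. A sketch, under the assumption that the field of view is centred on the sensor's forward axis (the function name and defaults are illustrative):

```python
import math

def scan_to_points(ranges, fov_deg=270.0, max_range=30.0):
    """Convert a LiDAR range array into (x, y) points in the sensor frame.

    ranges    -- distances, evenly spaced over the field of view
    fov_deg   -- angular field of view covered by the array
    max_range -- readings at or beyond this value count as 'no return'
    """
    n = len(ranges)
    step = math.radians(fov_deg) / (n - 1)   # angular resolution
    start = -math.radians(fov_deg) / 2.0     # centre the FOV on the x axis
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < max_range:              # drop invalid / out-of-range beams
            a = start + i * step
            points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

In ROS the same information arrives as a `sensor_msgs/LaserScan` message, whose `angle_min`, `angle_increment` and `ranges` fields play the roles of `start`, `step` and `ranges` here.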


The SLAM map is the basis of the navigation part. It has to be a detailed map so that localization works as intended. As we said, mapping and localization are simultaneous, complementary processes.


The maps used here are created with hector_slam, an open source ROS package.

The algorithm uses the LiDAR sensor data to create a map of the whole environment the car is supposed to drive through. The area is represented as a 2D occupancy grid map. Because of the high update rate of the LiDAR sensor, it is possible to work with only approximate data.
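An occupancy grid simply discretises the area into cells and records, for each cell, evidence of being occupied. A minimal sketch of updating such a grid from scan endpoints (the hit-count representation and the cell size are my own simplifications; real implementations store log-odds and also mark the free cells along each beam):

```python
def update_grid(grid, endpoints, resolution=0.1, origin=(0.0, 0.0)):
    """Mark the grid cells containing scan endpoints as occupied.

    grid       -- dict mapping (i, j) cell indices to hit counts
    endpoints  -- (x, y) scan points already in map coordinates
    resolution -- cell edge length in metres
    origin     -- world coordinates of cell (0, 0)
    """
    for x, y in endpoints:
        i = int((x - origin[0]) / resolution)
        j = int((y - origin[1]) / resolution)
        grid[(i, j)] = grid.get((i, j), 0) + 1   # accumulate hits per cell
    return grid
```

A sparse dict keeps memory proportional to the explored area rather than to the full bounding box of the map.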


The scanned endpoint data of the sensor is converted to a point cloud using the estimated platform orientation and joint values. As a preprocessing step for scan matching, filtering based on the endpoint z coordinates is enough: „only endpoints within a threshold of the intended scan plane are used in the scan matching process“. This filtering is part of the hector_slam approach.
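The z-coordinate filtering described above fits in a few lines: keep only the points whose height lies within a threshold of the intended scan plane (the threshold value here is illustrative, not the one used by any specific package):

```python
def filter_scan_plane(points, plane_z=0.0, threshold=0.05):
    """Keep only 3D endpoints within `threshold` metres of the scan plane."""
    return [(x, y, z) for (x, y, z) in points if abs(z - plane_z) <= threshold]
```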


Image: Fast SLAM

Fig 3. Fast SLAM


hdl_graph_slam is an open source ROS package for real-time 3D SLAM using a 3D LIDAR. It is based on scan-matching odometry estimation and loop detection. It also utilizes floor plane detection to generate an environment map with a completely flat floor. The package can be used in both indoor and outdoor environments.


hdl_graph_slam consists of four nodelets.

  • prefiltering_nodelet
  • scan_matching_odometry_nodelet
  • floor_detection_nodelet
  • hdl_graph_slam_nodelet

The input point cloud is first downsampled by prefiltering_nodelet and then passed to the next nodelets. While scan_matching_odometry_nodelet estimates the sensor pose by iteratively applying scan matching between consecutive frames (i.e., odometry estimation), floor_detection_nodelet detects floor planes by RANSAC. The estimated odometry and the detected floor planes are sent to hdl_graph_slam_nodelet. To compensate for the accumulated error of the scan matching, it performs loop detection and optimizes a pose graph that takes odometry, loops and floor planes into account.
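The pose graph behind this can be pictured as nodes (poses) connected by edges (odometry, loop closures, floor constraints). As a heavily simplified 1D illustration, not the actual optimizer (hdl_graph_slam uses the g2o graph optimization library), this sketch shows how a loop-closure constraint corrects accumulated odometry drift by distributing the residual error over the chain:

```python
def optimize_1d(odometry, loop_error):
    """Distribute a loop-closure error evenly over a chain of 1D odometry steps.

    odometry   -- relative displacements between consecutive poses
    loop_error -- (loop-closure measurement) - (accumulated odometry), i.e.
                  how far off the chain is when the loop closes
    """
    n = len(odometry)
    correction = loop_error / n              # spread the error over all edges
    corrected = [d + correction for d in odometry]
    # Rebuild absolute poses from the corrected relative motions
    poses = [0.0]
    for d in corrected:
        poses.append(poses[-1] + d)
    return poses
```

A real pose graph optimizer does the same thing in SE(3), weighting each edge by its information matrix instead of spreading the error uniformly.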


The output of the process can look like the following:

Image: Output of fast SLAM

Fig. 4 The output of Fast SLAM


Generating a local map can be done as follows:


% Generate a local map from the world map
if useScan2World && UpdateMap
    % Translate the current scan into map (world) coordinates
    dx    = init_guess(1);
    dy    = init_guess(2);
    theta = init_guess(3);

    % 2D homogeneous transform for the current pose estimate
    M = [ cos(theta) -sin(theta) dx ;
          sin(theta)  cos(theta) dy ;
          0           0          1  ];

    % Apply the transform to the scan points (homogeneous coordinates)
    scanWorldFrame = [scan ones(size(scan,1), 1)];
    scanWorldFrame = scanWorldFrame * M';
    scanWorldFrame = scanWorldFrame(:,[1,2]);

    % Extract points around the current scan as a reference map
    map = map(map(:,1) > min(scanWorldFrame(:,1)) - MapBorderSize, :);
    map = map(map(:,1) < max(scanWorldFrame(:,1)) + MapBorderSize, :);
    map = map(map(:,2) > min(scanWorldFrame(:,2)) - MapBorderSize, :);
    map = map(map(:,2) < max(scanWorldFrame(:,2)) + MapBorderSize, :);
end



To sum up: SLAM can be two- or three-dimensional, depending entirely on the environment, the surroundings and the obstacles or objects expected there. SLAM can be done using sonar sensors, laser sensors/LIDAR, or directly from a camera feed, and each technique has its own applications. SLAM is the basis of self-driving car technology today.


Do you have any comments or questions? Would you like to read more about this topic? Just log in or register and leave a comment here.



