Over the last few decades, SLAM (an acronym for simultaneous localization and mapping) has developed into a nexus of technologies that enable autonomous robotic mobile scanning of indoor, outdoor, and subterranean environments.
What is SLAM?
First coined as a term in a scientific paper presented at the 1995 International Symposium on Robotics Research, SLAM attempts to address a classic “chicken and egg” problem. How does an autonomous or semi-autonomous device, a robot, for instance, identify its geospatial location in real time, without the aid of triangulating global-positioning satellites, while at the same time creating a map of its surroundings and placing its own location on that map?
Which comes first? (Hence, the chicken and the egg conundrum.) The map? Or the geolocation of the robot within the map it’s trying to create?
The answer is revealed in the SLAM acronym itself. SLAM technology uses sophisticated computer algorithms together with ranging sensors such as LiDAR (Light Detection and Ranging) and 360° cameras to “solve” the chicken-and-egg problem by performing both functions at the same time. A SLAM-equipped mobile device uses LiDAR to emit a laser pulse at an object and measure the time the light takes to be reflected back toward the device, known as its time of flight (ToF).
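The time-of-flight principle reduces to simple arithmetic: the pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name `tof_distance` is illustrative, not from any particular scanner's API):

```python
# Sketch of the time-of-flight (ToF) distance calculation described above.
# The laser pulse travels out and back, so the one-way distance is half
# the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds corresponds to roughly 10 m.
print(round(tof_distance(66.7e-9), 2))  # prints 10.0
```

Repeating this measurement millions of times per second, across many angles, is what lets a scanner assemble a dense point cloud of its surroundings.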
LiDAR, SLAM and IMUs
Similar to how radar (Radio Detection and Ranging) relies on electromagnetic waves to understand its surroundings, LiDAR-based SLAM relies on light waves to build a picture of the area around it. LiDAR builds the map, creating a 3D point cloud from the data gathered. The SLAM algorithms, which calculate a best estimate of location, are further augmented with an inertial measurement unit, or IMU. The IMU includes an accelerometer, a gyroscope and, sometimes, a magnetometer.
Combined, these tools measure how an object is moving, spinning, or changing position in 3D space. The technology is often embedded in a variety of products, including drones, smartphones, and even “smart” athletic equipment, to track an object’s position in real time. It is ideal in situations where GPS triangulation is difficult or impossible, such as in mining, or in forestry, where dense tree canopies block global positioning data.
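To make the IMU's role concrete, here is a deliberately minimal 2D dead-reckoning sketch: the gyroscope's yaw rate is integrated to track heading, and the accelerometer's forward acceleration (rotated into the world frame) is integrated twice to track position. This is an illustration of the principle only; real systems also correct for sensor bias and gravity, and fuse the IMU with LiDAR or camera data rather than trusting integration alone.

```python
import math

# Minimal 2D dead-reckoning sketch: integrate the gyroscope's yaw rate
# to get heading, then integrate body-frame forward acceleration
# (rotated into the world frame) to get velocity and position.
# Real IMU pipelines also handle sensor bias, gravity compensation,
# and fusion with other sensors (e.g. via a Kalman filter).

def dead_reckon(samples, dt):
    """samples: list of (yaw_rate_rad_per_s, forward_accel_m_per_s2)."""
    heading = 0.0
    vx = vy = x = y = 0.0
    for yaw_rate, accel in samples:
        heading += yaw_rate * dt          # gyroscope integration
        vx += accel * math.cos(heading) * dt  # accel -> velocity
        vy += accel * math.sin(heading) * dt
        x += vx * dt                      # velocity -> position
        y += vy * dt
    return x, y, heading
```

Because small errors accumulate with every integration step, IMU dead reckoning drifts over time, which is exactly why SLAM combines it with LiDAR or camera observations that anchor the estimate to the map.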
It’s also useful in driverless cars, UAVs (unmanned aerial vehicles, or drones), augmented and virtual reality applications, indoor navigation within buildings or enclosed spaces, architecture, engineering and construction, and long-term urban planning, where, like a forest, the “concrete jungle” can block or partially interfere with a GPS signal.
In addition to LiDAR-based SLAM (which includes 2D and 3D mapping), there’s also visual SLAM, or vSLAM. As its name suggests, visual SLAM calculates the position and orientation of a device with respect to its surroundings while mapping the environment, using only a camera. One common approach, feature-based visual SLAM, tracks points of interest through successive camera frames to triangulate the 3D position of the camera. This information is then used to build a 3D map.
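The triangulation idea behind feature-based visual SLAM can be sketched in 2D: if the same landmark is observed from two known camera positions as a bearing angle, intersecting the two rays recovers its position. This toy example (the `triangulate` function is purely illustrative; production systems work with full 3D projection matrices and many features at once) shows the geometry:

```python
import math

# 2D sketch of the triangulation step in feature-based visual SLAM:
# a landmark seen from two known camera positions, each as a bearing
# angle, lies at the intersection of the two observation rays.

def triangulate(cam1, bearing1, cam2, bearing2):
    """cam*: (x, y) camera positions; bearing*: ray angles in radians."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve cam1 + s*d1 == cam2 + t*d2 for s via 2x2 Cramer's rule.
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; landmark not triangulable")
    bx, by = cam2[0] - cam1[0], cam2[1] - cam1[1]
    s = (bx * (-d2[1]) - by * (-d2[0])) / denom
    return cam1[0] + s * d1[0], cam1[1] + s * d1[1]
```

In a real vSLAM pipeline the camera poses themselves are unknown and are estimated jointly with the landmarks, frame after frame, which is what makes it a "simultaneous" localization and mapping problem.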
The Value of Mobile Scanning with SLAM
To a large extent, mobile scanning is a technological complement to static reality capture, achieved for the last three decades through terrestrial laser scanning, or TLS. Laser scanners like the FARO Focus and the FARO® Focus Premium, now with Flash Technology™, are perfect examples of the (current) pinnacle of what fixed-position scanning can achieve.
But even with that speed and accuracy, mobile scanning, now accurate to within 6 mm, can be a critical time saver, and its versatility makes it better at efficiently capturing hard-to-reach locations.
With the technology’s latest iteration, in fact, at normal walking speeds of 2 to 4 mph (3.2 to 6.4 kph), a SLAM mobile scanning system can capture data up to 10X faster than traditional TLS methods alone. (Mobile scanning can even capture quality data at speeds of 30 mph (48 kph), making data capture from a moving vehicle possible.)
This efficiency gain has a significant productivity multiplier effect. With SLAM-based mobile scanning, more locations can be scanned faster, with fewer personnel on-site, and with less risk of data gaps, resulting in reduced or eliminated repeat on-site visits. Enhanced throughput and project-to-project agility (thanks to speed, accuracy, ease-of-use, and portability) can mean important new business opportunities in both existing and yet-to-be-tapped markets.
The advancements in SLAM technology have made it the perfect complement to TLS, completing a continuum of static and mobile scanning solutions for a variety of customers and applications.
The Future of SLAM Technology
The future of SLAM technology lies in uniting the best of static laser scanning, which includes Hybrid Reality Capture™, powered by Flash Technology™, with mobile scanning solutions.
That means using stationary scanning for highly granular work, while retaining the ability to quickly map and measure the natural and as-built environment with a level of accuracy that meets the needs of most projects and satisfies project stakeholders with far better end results.
While it’s near impossible to predict what the future will bring, if the past is prologue one thing is certain: SLAM technology will continue to advance, speed and accuracy will increase, and its use applications will continue to broaden.
All of which is to say that SLAM technology is fast becoming a “slam dunk” when it comes to mapping and measuring the physical world, indoors and out.