1) From the 2D LIDAR point cloud, use algorithms such as the Hough Transform to fit line segments to the wall returns and generate floor maps (a minimal sketch is included after this list).
2) From the 3D LIDAR point cloud, construct a 3D map of the surroundings using Structure from Motion techniques:
- Use detectors such as SIFT, SURF, ORB, or Harris to find features such as corners, edges, and gradients.
- Use descriptors such as HOG to encode these features.
- Use matchers such as FLANN to match features across images.
- Use 3D triangulation to reconstruct a sparse 3D point cloud (a two-view sketch combining these steps follows the list below).
3) The idea of SLAM-based indoor navigation can be used to deploy an autonomous mobile robot inside closed environments such as airports, warehouses, or industrial plants (a minimal path-planning sketch on a SLAM-built map is included below).
4) SLAM navigation can be combined with Solution #5 (Navigation Assist for Blind & Elderly) and Solution #4 (Gesture Cam) to create a full-fledged Elderly Assistance Gadget that can not only assist with navigation but also recognize surrounding objects and even gestures made by the elderly user. If deployed on an SoC with an onboard IMU, such as the Pico4ML, the gadget can also interpret its own motion as a gesture.
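A minimal sketch for idea #1, assuming the 2D LIDAR returns are already available as (x, y) points in metres: the points are rasterised into a binary occupancy image and OpenCV's probabilistic Hough Transform extracts straight wall segments for the floor map. The 0.05 m grid resolution and the Hough thresholds are illustrative values that would need tuning to the real sensor.

```python
import numpy as np
import cv2

def extract_wall_segments(scan_points, resolution=0.05):
    """Fit wall segments to a 2D LIDAR point cloud using the Hough Transform."""
    pts = np.asarray(scan_points, dtype=np.float32)
    origin = pts.min(axis=0)                      # shift so all pixels are >= 0
    pix = np.floor((pts - origin) / resolution).astype(int)

    # Rasterise the point cloud into a binary occupancy image.
    h, w = pix[:, 1].max() + 1, pix[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[pix[:, 1], pix[:, 0]] = 255

    # Probabilistic Hough Transform returns endpoints of detected line segments.
    segments = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=20, maxLineGap=5)
    if segments is None:
        return []
    # Convert endpoints back from pixels to metres in the LIDAR frame.
    return [(origin + np.array([x1, y1]) * resolution,
             origin + np.array([x2, y2]) * resolution)
            for x1, y1, x2, y2 in segments[:, 0]]

if __name__ == "__main__":
    # Synthetic L-shaped wall scan: one wall along x, one along y.
    wall_a = np.column_stack([np.linspace(0, 4, 200), np.zeros(200)])
    wall_b = np.column_stack([np.zeros(200), np.linspace(0, 3, 200)])
    for start, end in extract_wall_segments(np.vstack([wall_a, wall_b])):
        print("wall segment:", start, "->", end)
```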
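A minimal two-view Structure-from-Motion sketch for idea #2, assuming two overlapping grayscale frames (img1.png and img2.png are placeholder file names) and a known camera intrinsic matrix K (the example values are made up). ORB stands in for the detector/descriptor, a FLANN matcher with an LSH index handles the binary descriptors, and OpenCV's essential-matrix and triangulation routines recover a sparse 3D point cloud. A full pipeline would chain many such image pairs and refine the result with bundle adjustment.

```python
import numpy as np
import cv2

def two_view_reconstruction(img1, img2, K):
    # 1) Detect keypoints and compute binary descriptors (ORB).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2) Match descriptors with FLANN (LSH index suits binary descriptors).
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])                  # Lowe's ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 3) Estimate the relative camera motion from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 4) Triangulate the inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T               # N x 3 point cloud

if __name__ == "__main__":
    K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
    img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)
    print("reconstructed", len(two_view_reconstruction(img1, img2, K)), "points")
```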
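For idea #3, most 2D SLAM packages output an occupancy grid of the environment. The sketch below assumes such a grid is already available (0 = free, 1 = occupied) and runs a breadth-first search to plan a collision-free path from the robot's current cell to a goal cell; the grid, start, and goal values are illustrative.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, avoiding occupied cells."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                  # reconstruct the path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                           # no free path exists

if __name__ == "__main__":
    occupancy_grid = [[0, 0, 0, 1],
                      [1, 1, 0, 1],
                      [0, 0, 0, 0],
                      [0, 1, 1, 0]]
    print(plan_path(occupancy_grid, start=(0, 0), goal=(3, 3)))
```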