Abstract: Traditional single-frame pose estimation methods in simultaneous localization and mapping (SLAM) often suffer from cumulative pose estimation errors, map overlap, and drift caused by factors such as unreliable IMU data and sparse point cloud features. To address these issues, this study optimizes the front-end scan matching strategy and proposes an enhanced method based on multi-frame data fusion matching. The approach fuses multiple frames of LiDAR data by exploiting the pose transformation relationships between consecutive frames, and it predicts the pose with a weighted fusion of LiDAR and IMU pose transformations. During the scan matching phase, statistical filtering is introduced to remove point cloud noise, and the pose estimate is refined through a secondary matching step. Experimental results demonstrate that, compared to three traditional mainstream algorithms, the proposed method improves localization accuracy in real-world scenarios by 28.4%, 30.1%, and 65.3%, respectively, effectively reducing cumulative errors and enhancing trajectory estimation accuracy and mapping quality. This research provides a novel solution for inaccurate pose estimation and excessive cumulative errors in mobile robots during mapping and self-localization.
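As a rough illustration of the two front-end steps named in the abstract, the sketch below blends a LiDAR-derived and an IMU-derived pose increment with a fixed weight and then removes point cloud noise with a k-nearest-neighbor statistical filter. The helper names, the weight `w_lidar`, the neighbor count `k`, and the threshold `std_ratio` are illustrative assumptions, not the paper's actual weighting strategy or parameters.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation, Slerp

def fuse_pose_prediction(T_lidar, T_imu, w_lidar=0.7):
    """Weighted fusion of two 4x4 homogeneous pose increments
    (hypothetical fixed weighting; the paper's exact scheme may differ).
    Rotation is blended by slerp, translation by linear interpolation."""
    rots = Rotation.from_matrix([T_imu[:3, :3], T_lidar[:3, :3]])
    slerp = Slerp([0.0, 1.0], rots)
    R_fused = slerp(w_lidar).as_matrix()  # w_lidar=1 -> pure LiDAR rotation
    t_fused = w_lidar * T_lidar[:3, 3] + (1.0 - w_lidar) * T_imu[:3, 3]
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R_fused, t_fused
    return T

def statistical_filter(points, k=20, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the global mean by std_ratio standard deviations."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < thresh]

# Usage: predict the next pose from the two sensor estimates, then
# denoise the incoming scan before scan matching.
T_pred = fuse_pose_prediction(np.eye(4), np.eye(4))
clean_scan = statistical_filter(np.random.rand(1000, 3))
```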