Lidar-Monocular Camera Mutually Fused Odometry Guided by Visual Feature
Affiliation: 1. College of Mechanical and Vehicle Engineering; 2. National Elite Institute of Engineering, Chongqing University

CLC Number: TP242

    Abstract:

    The effective fusion of a monocular camera and LiDAR can enhance the robustness and accuracy of Simultaneous Localization and Mapping (SLAM). However, most existing camera-LiDAR fusion SLAM methods are unidirectional and do not fully exploit the rich textural information in camera images to guide point cloud registration. To address this issue, this paper proposes a bidirectional monocular camera-LiDAR fusion odometry guided by visual feature points. The odometry improves the robustness of visual feature point depth estimation with a dual-plane method, predicts the projection positions of current-frame LiDAR points in adjacent frames from the matching relationships of visual feature points, and constructs a potential LiDAR point cloud matching set for the current frame by exploiting co-visible keyframes, thereby improving both the speed and accuracy of point cloud registration. Extensive experiments on the KITTI dataset and a self-collected dataset show that the proposed bidirectional fusion odometry not only achieves more accurate and robust pose estimation than existing methods but also produces more realistic 3D reconstruction results.
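
    The abstract only sketches the pipeline and the page carries no code. As a rough illustration of one step described above (predicting where current-frame LiDAR points project in an adjacent frame by borrowing the image-space displacement of nearby matched visual features), the following Python sketch may help. Every function name and the nearest-feature heuristic are assumptions made for illustration, not the authors' implementation.

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_lidar, K):
        # Project Nx3 LiDAR points into the image plane (pixel coordinates).
        # T_cam_lidar: 4x4 LiDAR-to-camera extrinsic; K: 3x3 camera intrinsics.
        # (Hypothetical helper; the paper does not publish its interfaces.)
        pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0.1          # keep points in front of the camera
        uv = (K @ pts_cam.T).T
        return uv[:, :2] / uv[:, 2:3], in_front

    def predict_projections(uv_lidar, feat_uv_cur, feat_uv_adj):
        # Predict adjacent-frame pixel positions of projected LiDAR points.
        # feat_uv_cur[i] <-> feat_uv_adj[i] are matched visual feature pixels;
        # each LiDAR pixel borrows the displacement of its nearest feature
        # (a simplifying assumption standing in for the paper's guidance scheme).
        flow = feat_uv_adj - feat_uv_cur                     # per-feature motion
        d2 = ((uv_lidar[:, None, :] - feat_uv_cur[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)                          # closest feature index
        return uv_lidar + flow[nearest]

    Predicted positions of this kind would then seed the search for point cloud correspondences, which is how visual-feature guidance can shrink the matching search space during registration.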

History
  • Received: February 28, 2025
  • Revised: November 09, 2025
  • Accepted: December 03, 2025