LiDAR-Monocular Camera Mutually Fused Odometry Guided by Visual Features
Affiliation:

1. College of Mechanical and Vehicle Engineering, Chongqing University; 2. National Elite Institute of Engineering, Chongqing University


CLC number: TP242

Fund project: Shanxi Province Science and Technology Major Special Program, "Open Competition" ("Jiebang Guashuai") project (202301150401011)




    Abstract:

The effective fusion of a monocular camera and LiDAR can enhance the robustness and accuracy of Simultaneous Localization and Mapping (SLAM). However, most existing camera-LiDAR fusion SLAM methods are unidirectional and do not fully exploit the rich texture information in camera images to guide point cloud registration. To address this, this paper proposes a bidirectional monocular camera-LiDAR fusion odometry guided by visual feature points. The odometry improves the robustness of visual feature depth estimation with a dual-plane method, predicts the projection positions of current-frame LiDAR points in adjacent frames from the matching relationships of visual feature points, and constructs a potential matching set for the current-frame LiDAR point cloud by leveraging co-visible keyframes, thereby improving both the speed and accuracy of point cloud registration. Extensive experiments on the KITTI dataset and a self-collected dataset show that, compared with existing methods, the proposed bidirectional fusion odometry not only achieves better accuracy and robustness in pose estimation but also produces more faithful 3D reconstruction results.
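The two guiding mechanisms named in the abstract lend themselves to a short sketch. The following is a minimal illustration, not the paper's implementation: both function names are hypothetical, the single-plane ray intersection is a simplification of the paper's dual-plane depth estimation, and the second function shows only the standard pinhole projection that would predict where current-frame LiDAR points land in an adjacent frame once visual feature matches supply a relative pose.

```python
import numpy as np

# Illustrative sketch only: function names and the single-plane
# simplification are assumptions, not the paper's actual implementation.

def feature_depth_from_plane(p0, p1, p2, ray):
    """Estimate a visual feature's depth by intersecting its camera ray
    (unit vector from the optical center) with the plane fitted to three
    nearby LiDAR points. The paper's dual-plane method adds a second
    plane for robustness; this shows the single-plane core.

    Returns t such that the intersection point is t * ray, or None if
    the ray is (near-)parallel to the plane."""
    n = np.cross(p1 - p0, p2 - p0)   # plane normal from two edge vectors
    denom = n @ ray
    if abs(denom) < 1e-9:            # degenerate: ray parallel to plane
        return None
    return (n @ p0) / denom          # solve n.(t*ray) = n.p0 for t

def predict_lidar_projections(points_curr, T_adj_curr, K):
    """Transform current-frame LiDAR points (N, 3) into an adjacent frame
    using the relative pose T_adj_curr (4x4) implied by visual feature
    matches, then project them with intrinsics K (3x3). Assumes all
    points lie in front of the camera (z > 0)."""
    R, t = T_adj_curr[:3, :3], T_adj_curr[:3, 3]
    pts_adj = points_curr @ R.T + t          # rigid transform
    uv = (pts_adj / pts_adj[:, 2:3]) @ K.T   # pinhole projection
    return uv[:, :2]                         # predicted pixel positions
```

Seeding registration with these predicted positions is what lets the visual front end narrow each LiDAR point's candidate match set, in place of an unconstrained nearest-neighbor search over the whole adjacent cloud.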


History
  • Received: 2025-02-28
  • Revised: 2025-11-09
  • Accepted: 2025-12-03