多视角投影学习隐函数的点云重建方法
作者单位:

上海科技大学

基金项目:

国家自然科学基金项目(面上项目、重点项目、重大项目)


Learning implicit surfaces from point clouds by multi-view projection
Author:
Affiliation:

ShanghaiTech University

Fund Project:

The National Natural Science Foundation of China (General Program, Key Program, Major Research Plan)

    摘要:

    点云重建需要处理采集过程中存在的噪声、空洞以及不均匀采样等问题。传统方法无法利用大量已有的高质量数据来获取先验信息。本文提出结合多视角投影和隐函数的端到端点云重建方法。首先对点云进行多视角投影得到深度图,基于卷积操作的编码器将深度图转化到特征空间,并在这个过程中进行去噪和补全。基于多层感知机(MLP,Multi-layer Perceptron)实现的隐函数用来表达几何。在训练过程中,对高质量的几何进行点采样,通过点云和采样点的隐函数值来监督学习编码器和隐函数的参数。实验在多个数据集上进行,通过与传统方法以及现有数据驱动的方法进行对比,验证了新方法的有效性,在重建细节、噪声处理和空洞补全方面均有明显提高。预训练的模型直接适用于多相机深度采集系统,并能自动权衡去噪和细节保留的程度,在真实数据上重建出光滑的几何表面。

    Abstract:

    Point cloud reconstruction must deal with the noise, holes, and uneven sampling introduced during acquisition. Traditional methods cannot exploit large collections of existing high-quality geometric models to learn prior knowledge. This paper proposes an end-to-end point cloud reconstruction method that combines multi-view projection with an implicit function. First, depth maps are obtained by projecting the point cloud to multiple views. A convolution-based encoder transforms the depth maps into a feature space, denoising and completing the data in the process. An implicit function implemented as a multi-layer perceptron (MLP) represents the geometry. During training, points are sampled from high-quality meshes, and the encoder and the implicit function are supervised with pairs of point clouds and the implicit values of the sampled points. Experiments on several datasets compare the method with traditional and existing data-driven approaches and verify its effectiveness: it noticeably improves reconstruction detail, suppresses scanning noise, and fills holes in a more natural way. The pre-trained model applies directly to a multi-camera depth acquisition system, automatically balances denoising against detail preservation, and reconstructs smooth geometric surfaces from real data.
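
    The pipeline described in the abstract (multi-view depth projection, a convolutional encoder, and an MLP implicit function supervised with sampled occupancy values) can be sketched in a few dozen lines. The following is a minimal, illustrative sketch in PyTorch, not the authors' implementation: the orthographic projection, network sizes, view count, and occupancy-style supervision are assumptions made for the example.

    # Minimal sketch of the pipeline described in the abstract (not the authors' code).
    # Assumes PyTorch; all hyperparameters and the projection model are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def project_to_depth(points, res=128):
        """Orthographic front-view depth map from a point cloud normalized to [-1, 1]^3.
        points: (N, 3) tensor; returns a (1, res, res) depth image (0 = empty pixel).
        One such map would be rendered per view; only the front view is shown here."""
        depth = torch.zeros(res, res)
        u = ((points[:, 0] * 0.5 + 0.5) * (res - 1)).long().clamp(0, res - 1)
        v = ((points[:, 1] * 0.5 + 0.5) * (res - 1)).long().clamp(0, res - 1)
        z = points[:, 2] * 0.5 + 0.5                      # depth along the view direction
        for ui, vi, zi in zip(u, v, z):                   # keep the nearest point per pixel
            if depth[vi, ui] == 0 or zi < depth[vi, ui]:
                depth[vi, ui] = zi
        return depth.unsqueeze(0)

    class DepthEncoder(nn.Module):
        """Convolutional encoder: stacked multi-view depth maps -> one shape feature vector."""
        def __init__(self, n_views=6, feat_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(n_views, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.fc = nn.Linear(128, feat_dim)

        def forward(self, depth_maps):                    # (B, n_views, H, W)
            return self.fc(self.conv(depth_maps).flatten(1))

    class ImplicitMLP(nn.Module):
        """MLP implicit function: (query point, shape feature) -> occupancy in [0, 1]."""
        def __init__(self, feat_dim=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 3, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1))

        def forward(self, feat, pts):                     # feat: (B, F), pts: (B, M, 3)
            feat = feat.unsqueeze(1).expand(-1, pts.shape[1], -1)
            return torch.sigmoid(self.mlp(torch.cat([feat, pts], dim=-1))).squeeze(-1)

    # Supervision sketch: implicit values (here, inside/outside labels) of points
    # sampled around a high-quality mesh supervise both networks end to end.
    encoder, field = DepthEncoder(), ImplicitMLP()
    depth_maps = torch.rand(2, 6, 128, 128)               # stand-in for projected views
    query_pts = torch.rand(2, 2048, 3) * 2 - 1            # query points in [-1, 1]^3
    gt_occ = torch.randint(0, 2, (2, 2048)).float()       # ground-truth occupancy labels
    loss = F.binary_cross_entropy(field(encoder(depth_maps), query_pts), gt_occ)
    loss.backward()

    At test time, implicit fields of this kind are commonly converted to meshes by evaluating the network on a dense grid and running Marching Cubes; the abstract does not spell out the extraction step, so this is stated only as common practice for implicit-function methods.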

History
  • Received: 2021-03-14
  • Revised: 2021-06-08
  • Accepted: 2021-06-08