Shared knowledge guidance and modal consistency learning for visible-infrared person re-identification
DOI:
CSTR:
Author:
Affiliation:

Kunming University of Science and Technology

Author biography:

Corresponding author:

CLC number:

U448.213

Fund project:

Science and Technology Planning Project of Yunnan Science and Technology Department (General Project) (202101AT070136)




    Abstract:

    In visible-infrared cross-modality person re-identification (VI-ReID), extracting discriminative features that are unaffected by the modality discrepancy is critical to improving recognition performance. A common solution is to learn a shared feature representation of the two modalities with a dual-stream network; however, such methods do not mine further shared knowledge between the modalities, and the inter-modality discrepancy still exists. Therefore, Shared Knowledge guidance and Modal Consistency Learning (SKMCL) is proposed. The method consists of cross-modal shared knowledge guidance (SKG) and modal consistency learning (MCL). The former fully mines the shared knowledge between the modalities through a cross-modal attention mechanism and uses it as a guide to help the model extract discriminative features; the latter reduces the discrepancy between the two modalities through adversarial learning between a designed modal classifier and the dual-stream network. The two modules cooperate to strengthen feature learning. To further reduce the modality discrepancy, a feature mixing strategy is introduced to enhance the dual-stream network's ability to extract modality-consistent features. On the two public datasets SYSU-MM01 and RegDB, the proposed method clearly outperforms related works, reaching Rank1/mAP accuracies of 58.38%/56.10% and 87.41%/80.75%, respectively, which demonstrates its effectiveness. The source code has been released at https://github.com/lhf12278/SKMCL.
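The abstract does not detail how the SKG module's cross-modal attention is implemented. As a minimal, hedged sketch of what attention between visible-light and infrared feature vectors could look like (the function names, shapes, and scaled dot-product form are assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(feat_vis, feat_ir):
    """Attend from visible-light features (queries) to infrared
    features (keys/values): each visible feature aggregates the
    infrared features most similar to it, yielding a cross-modal
    'shared knowledge' representation that can guide training.

    feat_vis: (Nv, d) visible-modality feature vectors
    feat_ir:  (Ni, d) infrared-modality feature vectors
    returns:  (Nv, d) attended (shared-knowledge) features
    """
    d = feat_vis.shape[-1]
    scores = feat_vis @ feat_ir.T / np.sqrt(d)  # (Nv, Ni) similarities
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ feat_ir                    # (Nv, d)
```

In a full model these features would come from the two branches of the dual-stream network, and the attended output would serve as the guidance signal described in the abstract.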
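The "feature mixing strategy" is likewise only named, not specified. One plausible reading is a mixup-style convex combination of paired visible and infrared features of the same identity, producing intermediate samples that push the network toward modality-consistent representations; the sketch below assumes that interpretation (the Beta-distribution coefficient and all names are illustrative assumptions):

```python
import numpy as np

def mix_modal_features(feat_vis, feat_ir, alpha=2.0, rng=None):
    """Convexly mix paired visible/infrared features of the same
    identity. The mixed features lie 'between' the two modalities,
    so training on them encourages modality-consistent embeddings.

    feat_vis, feat_ir: (N, d) aligned feature batches (same identities)
    alpha: Beta(alpha, alpha) parameter controlling the mixing ratio
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * feat_vis + (1 - lam) * feat_ir
```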

History
  • Received: 2021-12-29
  • Revised: 2021-12-29
  • Accepted: 2022-03-02
  • Published online:
  • Publication date: