Abstract: In visible-infrared cross-modality person re-identification (VI-ReID), extracting discriminative features that are unaffected by the modality discrepancy is critical for improving recognition performance. The common solution is to learn a shared feature representation of the two modalities with a dual-stream network; however, these methods do not mine enough shared knowledge between the modalities, and the discrepancy between them still exists. Therefore, a Shared Knowledge guidance Modal Consistency Learning (SKMCL) method is proposed. It consists of cross-modal shared knowledge guidance (SKG) and modal consistency learning. The former fully explores the shared knowledge between modalities through a cross-modal attention mechanism and guides the model to extract discriminative features; the latter reduces the discrepancy between the two modalities through adversarial learning between the designed modal classifier and the dual-stream network. The two modules cooperate to strengthen feature learning. In addition, to further reduce the modality discrepancy, a feature mixing strategy is introduced to enhance the ability of the dual-stream network to extract modality-consistent features. The performance of the proposed method on the two public datasets SYSU-MM01 and RegDB is clearly superior to that of related works: the Rank-1/mAP accuracy reaches 58.38%/56.10% and 87.41%/80.75%, respectively, which demonstrates the effectiveness of the proposed method. The source code has been released at https://github.com/lhf12278/SKMCL.
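To make the three ingredients named in the abstract concrete, the PyTorch sketch below illustrates, under our own assumptions, one plausible form of (i) a cross-modal attention block for shared knowledge guidance, (ii) a modal classifier trained adversarially against the backbone via gradient reversal, and (iii) a simple feature mixing step. This is not the authors' released SKMCL implementation; all module names, shapes, and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One stream's features attend to the other stream's features, so that
    knowledge shared by both modalities guides discriminative feature learning."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x_query: torch.Tensor, x_ref: torch.Tensor) -> torch.Tensor:
        # x_query, x_ref: (batch, tokens, dim) features from the two streams
        q, k, v = self.q(x_query), self.k(x_ref), self.v(x_ref)
        attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return x_query + attn @ v  # residual: query features enriched with shared knowledge

class GradReverse(torch.autograd.Function):
    """Gradient reversal: the backbone learns to fool the modal classifier,
    a standard way to realize adversarial modality-invariant learning."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class ModalClassifier(nn.Module):
    """Predicts which modality a feature came from (visible vs. infrared);
    trained adversarially against the dual-stream network."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.fc(GradReverse.apply(feat))

def mix_features(f_vis: torch.Tensor, f_ir: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Convex mixing of visible and infrared features, one plausible form of a
    'feature mixing strategy' that pushes the network toward modality-consistent features."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * f_vis + (1.0 - lam) * f_ir
```

In such a setup, the attention-guided features would feed the identity losses, the mixed features would pass through the shared branch as additional training samples, and the modal classifier's cross-entropy loss would be minimized for the classifier while the reversed gradient discourages the backbone from encoding modality-specific cues.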