Voiceprint recognition based on knowledge distillation and ResNet

Authors: 荣玉军, 方昳凡, 田鹏, 程家伟
Affiliation:
Author biography:
Corresponding author:

CLC number: TP751

Funding: Ministry of Education-China Mobile Research Fund (MCM20180404); National Natural Science Foundation of China (52272388)

Abstract:

To address channel mismatch and the incomplete capture of voiceprint features under short-utterance or noisy conditions, a method combining a traditional approach with deep learning is proposed: an I-Vector model serves as the teacher and its knowledge is distilled into a ResNet student model. A metric-learning-based ResNet is constructed, and an attentive statistics pooling layer is introduced to capture and emphasize the important information in voiceprint features and improve their discriminability. A joint training loss is designed that combines the mean square error (MSE) with a metric-learning loss, reducing computational complexity and strengthening the model's learning ability. Finally, the trained model is tested on voiceprint recognition and compared with models built on several other deep learning methods; its equal error rate (EER) is at least 8% lower, reaching 3.229%, which indicates that the proposed model performs voiceprint recognition more effectively.
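The attentive statistics pooling mentioned in the abstract can be read as a learned per-frame attention weighting followed by weighted mean and standard-deviation pooling over time. The sketch below is a minimal PyTorch rendering of that idea, not the paper's implementation; the 1x1-convolution attention network, the layer sizes, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AttentiveStatsPooling(nn.Module):
    """Attentive statistics pooling: learn a per-frame attention weight,
    then pool a weighted mean and weighted standard deviation over time."""

    def __init__(self, feat_dim: int, attn_dim: int = 128):  # sizes are illustrative
        super().__init__()
        # Small attention network over frame-level features
        self.attention = nn.Sequential(
            nn.Conv1d(feat_dim, attn_dim, kernel_size=1),
            nn.Tanh(),
            nn.Conv1d(attn_dim, feat_dim, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim, time) frame-level features from the ResNet trunk
        w = torch.softmax(self.attention(x), dim=2)        # attention weight per frame
        mean = torch.sum(w * x, dim=2)                     # weighted mean over time
        var = torch.sum(w * x * x, dim=2) - mean ** 2      # weighted variance
        std = torch.sqrt(var.clamp(min=1e-8))              # weighted standard deviation
        return torch.cat([mean, std], dim=1)               # (batch, 2 * feat_dim) utterance embedding
```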

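The joint training loss is described only as MSE combined with a metric-learning loss. One plausible reading is sketched below: a triplet margin loss stands in for the unspecified metric term, the weight alpha is chosen arbitrarily, and the teacher I-Vector is assumed to have been projected to the student embedding dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical joint loss: an MSE distillation term pulls the student (ResNet)
# embedding toward the teacher I-Vector, and a metric-learning term separates
# speakers; both the triplet loss and alpha are placeholder assumptions.
triplet = nn.TripletMarginLoss(margin=1.0)


def joint_loss(anchor, positive, negative, teacher_ivector, alpha=0.5):
    # anchor / positive / negative: student embeddings for a training triplet
    # teacher_ivector: I-Vector of the anchor utterance (assumed to match the
    # student embedding dimension, e.g. after a linear projection)
    mse = F.mse_loss(anchor, teacher_ivector)       # knowledge-distillation term
    metric = triplet(anchor, positive, negative)    # metric-learning term
    return alpha * mse + (1.0 - alpha) * metric     # weighted combination


# Usage with random tensors standing in for real embeddings
a, p, n, t = (torch.randn(8, 256) for _ in range(4))
loss = joint_loss(a, p, n, t)
```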
Cite this article

荣玉军, 方昳凡, 田鹏, 程家伟. Voiceprint recognition based on knowledge distillation and ResNet[J]. 重庆大学学报 (Journal of Chongqing University), 2023, 46(1): 113-124.

History
  • Received: 2021-07-12
  • Published online: 2023-02-06