Joint supervision of center loss and softmax loss for face recognition
Abstract:
Deep learning has achieved remarkable results in face recognition. Most convolutional neural networks use the Softmax loss function to increase the distance between classes. However, adding samples from new classes reduces inter-class distances and degrades network performance. To improve the discriminative power of the learned features, a face recognition approach based on the joint supervision of center loss and Softmax loss is proposed. On the basis of Softmax, first, a class center is maintained in the feature space for each class of the training set. When a new sample enters training, the network constrains its distance to the corresponding class center, so that both intra-class compactness and inter-class separation are taken into account. Second, the concept of momentum is introduced: when a class center is updated, the previous update direction is retained and the gradient of the current batch fine-tunes the final update direction, which increases stability and improves the learning efficiency of the network. Finally, test experiments on the face recognition benchmark LFW (Labeled Faces in the Wild) show that the proposed joint supervision algorithm achieves 99.31% face recognition accuracy when trained on a small training set.
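The two ideas in the abstract — penalizing the distance of each feature to its class center, and updating the centers with a retained (momentum) direction — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the learning rate, momentum coefficient, and function names are assumptions for illustration only.

```python
import numpy as np

def center_loss(features, labels, centers):
    # Center loss: half the mean squared distance of each feature
    # to the center of its own class.
    diffs = features - centers[labels]              # (N, D)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def update_centers(features, labels, centers, velocity,
                   lr=0.5, momentum=0.9):
    # Momentum update of class centers from one mini-batch.
    # The per-class gradient is the offset of the center from the
    # mean of the batch features of that class; the previous update
    # direction (velocity) is retained and the current gradient
    # fine-tunes the final update direction.
    for c in np.unique(labels):
        mask = labels == c
        grad = centers[c] - features[mask].mean(axis=0)
        velocity[c] = momentum * velocity[c] - lr * grad
        centers[c] += velocity[c]
    return centers, velocity
```

Under joint supervision, the total objective would combine this term with the usual Softmax cross-entropy, e.g. `total = softmax_loss + lam * center_loss(...)`, where the weight `lam` balances intra-class compactness against inter-class separation.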