Abstract: Helmet-wearing detection is susceptible to interference from complex backgrounds. To address this challenge, the YOLOv4 algorithm is employed. However, YOLOv4 suffers from slow detection speed, high memory consumption, computational complexity, and demanding hardware requirements. This study integrates the MobileNet network into YOLOv4 to alleviate these problems. Additionally, cross-module feature fusion is introduced in the MobileNet network, enabling effective fusion of high-level and low-level semantic features. Despite these advancements, small targets in images pose problems such as low resolution, limited informative features, the coexistence of multiple scales, and the potential loss of feature information during successive convolutions. To mitigate these issues, this paper introduces neck optimization strategies, including an improved feature pyramid network (FPN) and the introduction and refinement of an attention mechanism. These strategies focus on target information and reduce interference from background information during helmet detection. Simulation results show that the improved YOLOv4 neck-optimized network achieves a detection speed of 34.28 FPS on a CPU platform, about 16 times faster than the original YOLOv4 network, while its detection accuracy is 4.21% higher than that of the YOLOv4 algorithm. The optimized algorithm thus strikes a balance between speed and accuracy.