A Multi-Scene Remote Sensing Image Segmentation Algorithm Based on Cross-Layer Multi-Scale Feature Attention-Enhanced Fusion
DOI:
CSTR:
Author:
Affiliation:

1. Southwest Computer Co., Ltd.; 2. School of Intelligent Technology and Engineering, Chongqing University of Science and Technology

Author biography:

Corresponding author:

CLC number:

TP391

Fund Project:

Key Project of the Science and Technology Research Program of the Chongqing Municipal Education Commission (KJZD-K202301505)



    Abstract:

    In complex military confrontation environments, where unmanned aerial vehicles (UAVs) conduct automatic reconnaissance and identification of multi-class ground targets using multi-scene remote sensing images, targets often exhibit characteristics such as color similarity with the background, blurred boundaries, and variable background environments. Traditional convolutional networks struggle with low accuracy and robustness in multi-class segmentation of multi-scene remote sensing images under these conditions. To address this issue, we propose a novel multi-scene remote sensing image segmentation algorithm. This algorithm integrates the Residual Cross-layer Multi-scale Channel and Spatial Attention Module (Rs-CMACM) and the Path Aggregation Network (FAN) to enhance key features, suppress irrelevant information, and improve the delineation of target boundaries. Additionally, dynamic data augmentation and an Image Restoration Sub-network (IRSN) are introduced to enhance segmentation accuracy in complex backgrounds. First, Rs-CMACM and FAN are integrated across layers of the backbone of a typical object segmentation network, enhancing the model's feature extraction capabilities and fusing multi-scale features at different depths, thereby reducing segmentation bias caused by noisy backgrounds. Second, the incorporation of dynamic data augmentation and the IRSN compels the model to focus on the intrinsic characteristics of images, enabling the extraction of more robust feature representations under various environmental conditions. These improvements significantly enhance the accuracy and robustness of the model in multi-class segmentation tasks for multi-scene remote sensing images, thereby increasing the precision of target segmentation.
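
    To make the cross-layer attention and fusion idea more concrete, the sketch below shows, under our own assumptions, a residual channel-and-spatial attention block and a simple cross-layer multi-scale fusion step in PyTorch. The class names (ResidualChannelSpatialAttention, CrossLayerFusion), channel sizes, and the CBAM-style pooling choices are illustrative only; this is not the paper's actual Rs-CMACM or FAN implementation.

```python
# Illustrative sketch only: a residual channel + spatial attention block and a
# simple cross-layer fusion step. Names, shapes, and pooling choices are
# assumptions for demonstration, not the paper's exact Rs-CMACM / FAN.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualChannelSpatialAttention(nn.Module):
    """Channel attention followed by spatial attention, wrapped in a residual add."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels (SE-style MLP).
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Spatial attention: 7x7 conv over pooled per-pixel channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel weights from average- and max-pooled global descriptors.
        avg = F.adaptive_avg_pool2d(x, 1)
        mx = F.adaptive_max_pool2d(x, 1)
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        out = x * ca
        # Spatial weights from the per-pixel channel mean and max.
        sa_in = torch.cat([out.mean(dim=1, keepdim=True),
                           out.max(dim=1, keepdim=True).values], dim=1)
        out = out * torch.sigmoid(self.spatial_conv(sa_in))
        # Residual connection preserves the original features when attention is weak.
        return x + out


class CrossLayerFusion(nn.Module):
    """Fuse a deep (low-resolution) feature map into a shallow (high-resolution) one."""

    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)
        self.attn = ResidualChannelSpatialAttention(out_ch)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deep map to the shallow map's resolution, concatenate along
        # channels, project, and re-weight the fused features with attention.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                                align_corners=False)
        fused = self.proj(torch.cat([shallow, deep_up], dim=1))
        return self.attn(fused)


if __name__ == "__main__":
    # Toy check with two backbone stages at different resolutions.
    c3 = torch.randn(1, 256, 64, 64)   # shallow, higher resolution
    c5 = torch.randn(1, 512, 16, 16)   # deep, lower resolution
    fused = CrossLayerFusion(256, 512, 256)(c3, c5)
    print(fused.shape)  # torch.Size([1, 256, 64, 64])
```

    Applying attention after concatenating an upsampled deep feature map with a shallower one is one common way to let semantic context re-weight high-resolution detail; the paper's actual insertion points in the backbone and its attention design may differ.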
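
    A minimal sketch of what dynamic data augmentation for segmentation could look like is given below: a random subset of paired geometric and photometric transforms is re-sampled for every training image, with geometric transforms applied identically to the image and its label mask. The specific transforms, probabilities, and strength parameter are assumptions for illustration; the paper's augmentation policy and the IRSN itself are not reproduced here.

```python
# Illustrative sketch only: per-sample random augmentation for a segmentation pair.
import random
import torch


def dynamic_augment(image: torch.Tensor, mask: torch.Tensor, strength: float = 0.5):
    """image: (C, H, W) float tensor in [0, 1]; mask: (H, W) integer label map."""
    # Geometric transforms are applied to image and mask together so that
    # pixel-level labels stay aligned.
    if random.random() < 0.5:
        image, mask = image.flip(-1), mask.flip(-1)      # horizontal flip
    if random.random() < 0.5:
        image, mask = image.flip(-2), mask.flip(-2)      # vertical flip
    # Photometric transforms perturb only the image, pushing the model toward
    # shape and texture cues instead of absolute color.
    if random.random() < 0.5:
        image = image * (1.0 + random.uniform(-0.3, 0.3) * strength)  # brightness jitter
    if random.random() < 0.5:
        image = image + torch.randn_like(image) * 0.05 * strength     # additive noise
    return image.clamp(0.0, 1.0), mask
```

    Called inside the data loader, such a function draws a different transform combination for every image and every epoch, which is the sense in which the augmentation is "dynamic".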

History
  • Received: 2024-05-27
  • Revised: 2024-06-14
  • Accepted: 2024-08-14
  • Published online:
  • Publication date: