Clarification of ideological security boundaries, risk monitoring, and regulatory approaches in generative AI applications: Reflections on DeepSeek, Manus, ChatGPT, and Sora
Authors:

Liu Cheng, Mo Shengye

Affiliations:

1. School of Marxism, Xinjiang Normal University, Urumqi 830017, P.R. China; 2. School of Marxism, Shihezi University, Shihezi 832003, P.R. China

Author bios:

Liu Cheng, Doctor of Laws, professor at the School of Marxism, Xinjiang Normal University. Email: liucheng2010yeah@sina.com
Mo Shengye (corresponding author), Doctor of Laws, associate professor at the School of Marxism, Shihezi University.

Corresponding author: Mo Shengye

CLC number:

B82-057; B036; TP18

Fund projects:

National Social Science Fund of China project "Research on Safeguarding Ideological Security in Xinjiang in the New Era" (22BKS192); Xinjiang Normal University Doctoral Research Start-up Fund project "Research on Safeguarding Grassroots Ideological Security in Xinjiang in the New Era" (XJBSRW20240002); Ministry of Education special research project for teachers of ideological and political theory courses, "Research on Integrating Education on Forging a Strong Sense of Community for the Chinese Nation into Ideological and Political Courses across Primary, Secondary, and Tertiary Schools in Xinjiang" (25JDSZK041); Xinjiang Uygur Autonomous Region Education Science Planning project "Research on Pathways for Cultivating Digital Literacy at Xinjiang Universities in the Context of Digital Empowerment" (HES2024012)




    Abstract:

    As an emerging technology in the development of artificial intelligence, the generative artificial intelligence revolution has driven a comprehensive societal transformation. Applications of generative artificial intelligence such as DeepSeek, Manus, ChatGPT, and Sora have permeated political, economic, cultural, and social domains, deeply integrating into diverse scenarios of digital life and subtly reshaping the public's daily habits, yet there remains little conscious awareness of their ideological implications. As a technological innovation, generative artificial intelligence carries inherent ideological risks in its applications; these constitute an unavoidable reality and have become a crucial factor affecting overall national security. Building upon this logical framework and on reflections on large generative AI models, this study first endeavors to clarify the ideological security boundaries of generative artificial intelligence applications across political, economic, cultural, and social dimensions, delineating the red lines and bottom lines of ideological security. Subsequently, based on the practical development of generative artificial intelligence, it identifies key monitoring points for ideological security risks, directly addressing threats arising from information distortion, value conflicts, social fragmentation, and the alienation of daily life. This analysis provides a solid foundation for enhanced supervision, management, and service of generative artificial intelligence, reinforcing ideological security defenses throughout its application.
    Considering both the security boundaries and the risk monitoring points, the study proposes solutions along four dimensions to mitigate ideological security risks: 1) at the educational level, returning to the essential understanding that technology should empower better lives; 2) at the technological level, advancing technical refinement to strengthen social security safeguards; 3) at the regulatory level, balancing governance and services to promote healthy, sustainable industry development; and 4) at the societal level, establishing multi-stakeholder governance to steer technology toward ethical applications. Ultimately, the research reaffirms the fundamental consensus that generative AI is a technological innovation. It aims to facilitate AI's enhanced empowerment of new quality productive forces, contribute to realizing people's aspirations for better lives, and transform this technology into a new positive increment for maintaining ideological security within the framework of overall national security.

Cite this article:

Liu Cheng, Mo Shengye. Clarification of ideological security boundaries, risk monitoring, and regulatory approaches in generative AI applications: Reflections on DeepSeek, Manus, ChatGPT, and Sora [J]. Journal of Chongqing University (Social Science Edition), 2025, 31(6): 236-249. DOI: 10.11835/j.issn.1008-5831.pj.2025.04.001


History
  • Online publication date: 2026-01-20