Clarification of ideological security boundaries, risk monitoring, and regulatory framework in generative AI applications—Reflections on DeepSeek, Manus, ChatGPT, and Sora
Author: Liu Cheng, Mo Shengye
Affiliation:

1. School of Marxism, Xinjiang Normal University, Urumqi 830017, P.R. China; 2. School of Marxism, Shihezi University, Shihezi 832003, P.R. China

CLC Number:

B82-057; B036; TP18

    Abstract:

    As an emerging technology in artificial intelligence, generative artificial intelligence has driven a comprehensive societal transformation. Applications such as DeepSeek, Manus, ChatGPT, and Sora have permeated the economic, political, cultural, and social domains, integrating deeply into diverse scenarios of digital life while subtly reshaping the public's daily practices. Yet conscious awareness of their ideological implications remains limited. As a technological innovation, generative artificial intelligence carries inherent ideological risks that constitute an unavoidable reality and have become a crucial factor affecting overall national security. Building upon this logical framework and reflections on generative artificial intelligence models, this study first endeavors to clarify the ideological security boundaries of generative artificial intelligence applications across the political, economic, cultural, and social dimensions, delineating the red lines and bottom lines of ideological safety. Subsequently, based on the practical development of generative artificial intelligence, it identifies key monitoring points for ideological security risks, directly addressing threats arising from information distortion, value conflicts, social fragmentation, and the alienation of daily life.
    This analysis provides a solid foundation for the enhanced supervision, management, and service of generative artificial intelligence, reinforcing ideological security defenses during its application. Considering both the security boundaries and the risk monitoring points, the study proposes a four-dimensional solution for mitigating ideological security risks: 1) at the educational level, returning to the essential understanding that technology should empower better lives; 2) at the technological level, advancing technical refinement to strengthen social security safeguards; 3) at the regulatory level, balancing governance and services to promote healthy industry development; and 4) at the societal level, establishing multi-stakeholder governance to steer technology toward ethical applications. Ultimately, the research reaffirms the fundamental consensus that generative AI represents a technological innovation. It aims to facilitate AI's enhanced empowerment of new quality productive forces, contribute to realizing people's aspirations for better lives, and ultimately transform this technology into a new positive increment for maintaining ideological security within the framework of overall national security.

Get Citation

Liu Cheng, Mo Shengye. Clarification of ideological security boundaries, risk monitoring, and regulatory framework in generative AI applications: Reflections on DeepSeek, Manus, ChatGPT, and Sora [J]. Journal of Chongqing University (Social Science Edition), 2025, 31(6): 236-249.

History
  • Online: January 20, 2026