From ChatGPT to DeepSeek: Legal Risks and Three-Dimensional Regulation of Generative Artificial Intelligence

Affiliation:

School of Law, Jiangxi University of Finance and Economics, Nanchang 330013, China

About the authors:

XIONG Jinguang, Doctor of Laws, professor and doctoral supervisor, School of Law, Jiangxi University of Finance and Economics.
ZHANG Zheng (corresponding author), doctoral candidate, School of Law, Jiangxi University of Finance and Economics, Email: 2234074912@qq.com.

CLC number:

D923; D922.17

Fund projects:

Key Project of the 14th Five-Year Plan Social Science Fund of Jiangxi Province, "Research on Legal Issues of Torts by Virtual Digital Humans in the Context of the Metaverse" (23FX01); Special Fund Project for Graduate Innovation of Jiangxi Province, "Research on the Allocation of Infringement Risks and Liability of Generative Artificial Intelligence" (YC2025-B134)




    Abstract:

    From ChatGPT to Sora and then to DeepSeek, generative artificial intelligence is constantly evolving and innovating, and its potential legal risks are growing correspondingly serious. An analysis of its three-stage "preparation-operation-generation" operating mechanism shows that each stage involves different core technologies and therefore presents different legal risks. Specifically, the preparation stage of generative artificial intelligence centers on massive data and machine learning; the operation stage mainly involves algorithmic techniques, manual annotation, and autonomous learning; and the generation stage relies on data decoding and sample generation. Correspondingly, the legal risks lie mainly in privacy and personal information protection in the preparation stage, data security and algorithmic bias in the operation stage, and copyright ownership, ideology, and social order in the generation stage. However, existing legislation fails to provide detailed guidance on core issues such as the legal status, regulatory standards, and sample attribution of generative artificial intelligence. On this basis, drawing on a comparative analysis of the governance paradigms and experience of the United States, the United Kingdom, and the European Union, and in light of China's national conditions and practice, the legal risks of generative artificial intelligence should be regulated along the three-dimensional path of "civil law protection-regulatory standards-industry norms" so as to safeguard citizens' legitimate rights and interests.
First, at the civil law level, the legal object status of weak generative artificial intelligence and the fictitious legal subject status of strong generative artificial intelligence should be clarified; personal information protection should be strengthened through explicit authorization for private information and data encryption technology; and intellectual property determination standards should be improved on the basis of the attributes and ownership of generated samples. Second, at the regulatory level, whole-process supervision covering the preparation, operation, and generation stages should be implemented for algorithms and related technologies; the transparency of these technologies should be enhanced by formulating technical transparency standards, introducing interpretable technologies, and establishing an "accountability-feedback" guarantee system; and a distinctive "government-society-enterprise" linked regulatory model should be formed, in which the government provides policy support for citizens and enterprises, citizens actively participate in regulatory governance and report infringement information to the government and enterprises, and enterprises, guided by government policies and citizens' demands, contribute to social development.
Finally, at the industry norms level, the liability principles applicable to the three forms of infringement by generative artificial intelligence should be clarified; the legal obligations of service providers and users should be implemented, with providers undertaking obligations such as content review and security assurance, and users fulfilling obligations such as reasonable use and operation as well as information feedback; and social consultation and feedback channels should be opened to curb infringement and improve the efficiency of resolving infringement disputes by educating citizens about their rights and obligations, standardizing citizens' usage practices, and strengthening the connection between enterprises and citizens.

Cite this article:

XIONG Jinguang, ZHANG Zheng. From ChatGPT to DeepSeek: legal risks and three-dimensional regulation of generative artificial intelligence[J]. Journal of Chongqing University (Social Science Edition), 2026, 32(1): 253-268. DOI: 10.11835/j.issn.1008-5831.fx.2025.09.001


Online publication date: 2026-04-02