生成式人工智能涉虚假信息犯罪的刑法归责
作者单位:

1.华东政法大学 刑事法学院,上海 200042;2.南京大学 法学院,江苏 南京 210093

作者简介:

姜涛,华东政法大学刑事法学院二级教授,博士研究生导师,Email:iangtao4010@163.com。

中图分类号:

D914

基金项目:

国家哲学社会科学基金重大项目“数字经济的刑事安全风险防范体系建构研究”(21&ZD210)


Criminal law imputation for crimes involving false information in generative artificial intelligence
Affiliation:

1. School of Criminal Law, East China University of Political Science and Law, Shanghai 200042, P. R. China; 2. School of Law, Nanjing University, Nanjing 210093, P. R. China

    摘要:

    随着生成式AI技术的快速发展,其在创造全新内容的同时,也因训练机制缺陷引发了复杂的虚假信息风险,为虚假信息犯罪的刑法归责带来多重挑战,构建科学合理的归责体系成为亟待解决的问题。生成式AI虚假信息风险的产生,源于训练过程中数据准备、人工介入、自我优化、人类反馈四大核心环节的隐患。数据准备阶段,训练数据来源复杂、真假参半,甚至存在AI生成虚假信息回流训练库的恶性循环;人工介入阶段,标注人员的主观偏差影响模型输出质量;自我优化阶段,技术局限导致AI易生成错误内容,算法黑箱又引发用户盲目信任;人类反馈阶段,用户与AI的双向互动可能相互误导,尤其在医疗、金融等关键领域,虚假信息传播会威胁重要法益。这些技术成因使虚假信息风险呈现必然性与无间断性、主体非唯一性与智能化、危害后果不确定性与扩散性、过错要素不可查性与欠缺性的规范特点。由此,生成式AI涉虚假信息犯罪的刑法归责面临五大难题:一是价值矛盾难题,涉及技术创新副作用与可避免风险、言论自由与违法传播、技术中立原则适用三方面冲突;二是责任主体难题,技术设计者、服务提供者、使用者、监管部门职责划分模糊,现有规定偏重服务提供者责任;三是因果关系难题,过失型和无过错型犯罪中,部分主体对危害结果的支配力弱,用户对AI的合理信赖可能阻断因果链;四是罪名适用难题,编造、故意传播虚假信息罪在行为对象、类型、保护法益上存在局限,以单一罪名规制力有不逮;五是手段选择难题,需平衡刑法预防性保护与谦抑性的关系,协调民行刑责任衔接。为解决上述难题,应当构建“以注意义务理论为框架,衔接多种责任类型”“以AI使用者为中心,合理分配其他主体责任”的归责方案。注意义务理论可明确各主体责任边界,优先通过民法、行政法防控风险,刑法仅作为最后防线。责任分配上,使用者承担更高注意义务,非法操纵AI或违反审查义务且未采取标识措施的,可能承担破坏数据安全、传播虚假信息和引起特定法益侵害结果的刑事责任;技术设计者主要履行数据清洗、模型优化等义务,故意污染训练数据可能承担破坏数据安全和引起关联法益侵害的刑事责任;服务提供者未履行风险提示、分级别介入等义务且拒不改正的,可能承担拒不履行信息网络安全管理义务罪的刑事责任;监管部门职责集中在程序层面,不涉及信息内容真实性审查,应审慎认定其监督过失责任。

    Abstract:

    With the rapid development of generative artificial intelligence (AI) technology, while it creates brand-new content, it also gives rise to complex false information risks due to defects in its training mechanism, posing multiple challenges to the criminal law imputation of crimes involving false information. Constructing a scientific and reasonable imputation system has thus become an urgent task. The false information risks of generative AI stem from hidden dangers in four core links of the training process: data preparation, human intervention, self-optimization, and human feedback. In the data preparation phase, the sources of training data are complex and mix true with false information, and there is even a vicious cycle in which false information generated by AI flows back into the training database. In the human intervention phase, the subjective biases of annotators affect the quality of model output. In the self-optimization phase, technical limitations make AI prone to generating incorrect content, and the algorithmic black box induces blind trust among users. In the human feedback phase, the two-way interaction between users and AI may mislead both sides. Particularly in key fields such as healthcare and finance, the spread of false information can threaten important legal interests. These technical causes give false information risks normative characteristics such as inevitability and continuity, non-uniqueness and intelligence of subjects, uncertainty and diffusibility of harmful consequences, and unverifiability and absence of fault elements. Consequently, the criminal law imputation for crimes involving false information in generative AI faces five major difficulties.
Firstly, the difficulty of value conflicts, which arise in three respects: between the side effects of technological innovation and avoidable risks, between freedom of speech and illegal dissemination, and over the application of the principle of technological neutrality. Secondly, the difficulty of identifying responsible subjects, as the division of responsibilities among technology designers, service providers, users, and regulatory authorities is vague, and existing regulations focus disproportionately on the responsibilities of service providers. Thirdly, the difficulty of establishing causal relationships: in negligent and no-fault crimes, some subjects have weak control over harmful consequences, and users' reasonable trust in AI may break the causal chain. Fourthly, the difficulty of applying charges: the crime of fabricating or intentionally spreading false information is limited in its object of conduct, types of conduct, and protected legal interests, so a single charge is inadequate to regulate such crimes. Fifthly, the difficulty of selecting means, which requires balancing the preventive protection of criminal law against its principle of modesty and coordinating the connection among civil, administrative, and criminal liabilities. To resolve these difficulties, an imputation scheme should be constructed that takes the theory of duty of care as its framework while connecting multiple types of liability, and that centers on AI users while reasonably distributing the responsibilities of other subjects. The theory of duty of care can clarify the boundaries of each subject's responsibility; priority should be given to preventing and controlling risks through civil and administrative law, with criminal law serving only as the last line of defense.
In terms of responsibility distribution, users bear a higher duty of care: those who illegally manipulate AI, or who violate review obligations without taking labeling measures, may bear criminal liability for endangering data security, spreading false information, and causing harm to specific legal interests. Technology designers mainly fulfill obligations such as data cleaning and model optimization; those who intentionally contaminate training data may bear criminal liability for endangering data security and causing harm to related legal interests. Service providers who fail to fulfill obligations such as risk warnings and tiered intervention, and who refuse to make corrections, may bear criminal liability for the crime of refusing to perform information network security management obligations. The responsibilities of regulatory authorities are confined to the procedural level and do not extend to reviewing the authenticity of information content; their supervisory negligence liability should therefore be determined with prudence.

引用本文

姜涛,郭欣怡.生成式人工智能涉虚假信息犯罪的刑法归责[J].重庆大学学报社会科学版,2025,31(6):183-197. DOI:10.11835/j.issn.1008-5831.fx.2025.11.003

历史
  • 在线发布日期: 2026-01-20