Criminal law imputation for crimes involving false information in generative artificial intelligence
Author: Jiang Tao, Guo Xinyi
Affiliation:

1. School of Criminal Law, East China University of Political Science and Law, Shanghai 200042, P.R. China; 2. School of Law, Nanjing University, Nanjing 210093, P.R. China

CLC Number: D914

Abstract:

With the rapid development of generative artificial intelligence (AI), the technology creates novel content but, owing to defects in its training mechanism, also gives rise to complex false-information risks, posing multiple challenges to the criminal law imputation of crimes involving false information. Constructing a scientific and reasonable imputation system has therefore become an urgent task. The false-information risks of generative AI stem from hidden dangers in four core stages of the training process: data preparation, human intervention, self-optimization, and human feedback. In the data preparation stage, training data come from complex sources that mix true and false information, and a vicious cycle can even arise in which false information generated by AI flows back into the training database. In the human intervention stage, the subjective biases of annotators affect the quality of model output. In the self-optimization stage, technical limitations make AI prone to generating incorrect content, and the algorithmic black box leads users to trust it blindly. In the human feedback stage, users and AI may mislead each other through two-way interaction. In key fields such as healthcare and finance in particular, the spread of false information can threaten important legal interests. These technical causes give false-information risks normative characteristics such as inevitability and continuity, the non-uniqueness and intelligence of the subjects involved, the uncertainty and diffusibility of harmful consequences, and the difficulty of ascertaining, or absence of, fault elements.
Consequently, the criminal law imputation of crimes involving false information in generative AI faces five major difficulties. First, value conflicts: whether harms are side effects of technological innovation or avoidable risks, whether generated content is protected speech or illegal dissemination, and how the principle of technological neutrality should apply. Second, identifying responsible subjects: the division of responsibility among technology designers, service providers, users, and regulatory authorities is vague, and existing regulations focus mainly on the responsibilities of service providers. Third, establishing causation: in negligent and non-fault crimes, some subjects have weak control over harmful consequences, and users' reasonable trust in AI may break the causal chain. Fourth, applying charges: the crime of fabricating or intentionally spreading false information is limited in its object, type of conduct, and protected legal interests, so a single charge cannot adequately regulate such crimes. Fifth, selecting means: preventive protection must be balanced against the modesty (restraint) principle of criminal law, and civil, administrative, and criminal liabilities must be coordinated.
To resolve these difficulties, an imputation scheme should be constructed that takes the theory of duty of care as its framework, connects multiple types of liability, places AI users at its core, and reasonably distributes the responsibilities of other subjects. The theory of duty of care can clarify the boundaries of each subject's responsibility; risks should be prevented and controlled first through civil and administrative law, with criminal law serving only as the last line of defense.
In terms of responsibility distribution, users bear a higher duty of care: those who illegally manipulate AI, violate review obligations, or fail to take identification measures may bear criminal liability for endangering data security, spreading false information, or harming specific legal interests. Technology designers mainly fulfill obligations such as data cleaning and model optimization; those who intentionally contaminate training data may bear criminal liability for endangering data security and harming related legal interests. Service providers who fail to fulfill obligations such as risk warnings and tiered intervention, and who refuse to make corrections, may bear criminal liability for refusing to perform information network security management obligations. Regulatory authorities' responsibilities are confined to the procedural level and do not extend to reviewing the authenticity of information content; their liability for supervisory negligence should be determined with prudence.

Citation:

Jiang Tao, Guo Xinyi. Criminal law imputation for crimes involving false information in generative artificial intelligence[J]. Journal of Chongqing University (Social Science Edition), 2025, 31(6): 183-197.
History
  • Online: January 20, 2026