生成式人工智能对个人信息保护的挑战及应对
作者:

朱荣荣
作者单位:

中国矿业大学 人文与艺术学院,江苏 徐州 221116

作者简介:

朱荣荣,中国矿业大学人文与艺术学院,Email:zhurongrong1305@163.com。

中图分类号:

D923

基金项目:

2024年度江苏省社会科学基金青年项目“数字风险社会预防性侵权责任研究”(24FXC010);2024年度江苏省高校哲学社会科学研究一般项目“解释论视野下个人信息动态平衡保护的体系构造研究”(2024SJYB0779);中央高校基本科研业务费项目“健康数据商业化利用的民法规则优化研究”(2024SK16)


The challenge and response to personal information protection for generative artificial intelligence
Author:
Affiliation:

School of Humanities and Arts, China University of Mining and Technology, Xuzhou 221116, P. R. China

    摘要:

    以ChatGPT、DeepSeek为代表的生成式人工智能是指能够根据用户指令生成文字、图片、视频等相应内容的人工智能。个人信息是生成式人工智能的基础,生成式人工智能在模型训练、模型生成,以及模型优化等各个阶段均需要处理大量的个人信息,同时也对传统的个人信息保护规则带来了一定的冲击。在信息收集阶段,生成式人工智能可能虚化知情同意规则,侵犯信息主体的隐私权。在信息利用阶段,生成式人工智能可能冲击目的限制原则、公开透明原则等基本的个人信息处理规则,提高个人信息泄露的风险。在信息生成阶段,生成式人工智能可能产生虚假信息以及歧视性信息。在生成式人工智能变革式发展背景下,亟须审视个人信息保护的基本理念,寻求其在生成式人工智能领域的应然价值取向。通过考察比较法以及我国个人信息保护理念的发展脉络可知,个人信息保护或个人信息利用的单极性思维难以适应数字社会的现实需要,而个人信息保护与个人信息利用的动态平衡则是平衡各方主体利益的理想路径。生成式人工智能可以作为基础模型被广泛应用于教育、金融、科技等诸多领域,鉴此,应当协调推进个人信息保护与生成式人工智能发展的平衡兼顾。个人信息与信息主体密切相关,个人信息一旦被泄露或滥用可能使信息主体面临较高的风险,因此需要构建事前风险预防与事后损害赔偿的协同救济机制,在实现个人信息全生命周期保护的基础上,促进生成式人工智能的良性发展。就风险预防机制而言,需要在风险识别基础上完善去识别化措施,并赋予信息主体限制处理权、算法解释权,全方位遏制潜在的风险。责任主体的确定是损害赔偿的基础,应由生成式人工智能服务提供者与使用者证明其与个人信息侵权损害不存在因果关系,否则需要承担连带赔偿责任。在归责原则方面,可以根据被侵害的对象为个人一般信息或个人敏感信息,分别适用过错推定责任原则或无过错责任原则。为了更好地救济信息主体所受的损害,除了采取财产损害赔偿与精神损害赔偿等传统的补偿性赔偿外,还应当引入惩罚性赔偿,最大程度保障信息主体的受损权益。

    Abstract:

    Generative artificial intelligence, represented by ChatGPT and DeepSeek, refers to artificial intelligence that can generate text, images, videos, and other content according to user instructions. Personal information is the foundation of generative artificial intelligence: large amounts of personal information must be processed at every stage of model training, content generation, and model optimization, which in turn challenges traditional personal information protection rules. At the information collection stage, generative artificial intelligence may hollow out the informed consent rule and infringe the privacy of information subjects. At the information utilization stage, it may undermine basic personal information processing rules such as the principles of purpose limitation and of openness and transparency, and increase the risk of personal information disclosure. At the information generation stage, it may produce false or discriminatory information. Against the background of the transformative development of generative artificial intelligence, it is urgent to re-examine the basic concepts of personal information protection and to identify the value orientation they should take in this field. A survey of comparative law and of the development of personal information protection concepts in China shows that a unipolar focus on either the protection or the utilization of personal information cannot meet the practical needs of the digital society, whereas a dynamic balance between the two is the ideal path for reconciling the interests of the various parties. Generative artificial intelligence can serve as a foundation model widely applied in education, finance, science, technology, and many other fields; accordingly, personal information protection and the development of generative artificial intelligence should be advanced in a coordinated and balanced manner. Personal information is closely tied to the information subject, and once it is disclosed or abused the subject may face considerable risk. It is therefore necessary to build a coordinated relief mechanism combining ex ante risk prevention with ex post damage compensation, promoting the sound development of generative artificial intelligence on the basis of whole-life-cycle protection of personal information. As for the risk prevention mechanism, de-identification measures should be improved on the basis of risk identification, and information subjects should be granted the right to restrict processing and the right to an explanation of algorithms, so as to curb potential risks comprehensively. Determining the liable parties is the basis of damage compensation: the provider and the user of generative artificial intelligence services should bear the burden of proving that no causal relationship exists between their conduct and the damage; otherwise, they bear joint and several liability for compensation. As to the basis of liability, presumed fault liability or no-fault liability can be applied depending on whether the infringed object is general personal information or sensitive personal information. To better remedy the damage suffered by information subjects, punitive damages should be introduced in addition to traditional compensatory remedies such as compensation for property damage and for mental distress, so as to protect the injured rights and interests of information subjects to the greatest extent.

引用本文

朱荣荣.生成式人工智能对个人信息保护的挑战及应对[J].重庆大学学报社会科学版,2025,31(4):222-235. DOI:10.11835/j.issn.1008-5831.fx.2023.09.001


历史
  • 在线发布日期: 2025-10-15