Abstract: When judging the negligence of the designer of an artificial intelligence product's algorithm, the modified old negligence theory conflicts with the black-box nature of algorithms, which rely on correlation rather than causation when making decisions; its logic of valuing only results while ignoring conduct may dampen algorithm designers' enthusiasm and impede algorithmic progress. Although the new negligence theory takes the obligation of result avoidance as the core standard of criminal negligence, it lacks a specific standard for the possibility of foreseeing and is often at a loss when judging that possibility. Therefore, neither standpoint offers a reasonable scheme for judging the criminal negligence of the algorithm designers of intelligent products. In contrast, the theory of fearing (hyper-new negligence theory) holds that the possibility of foreseeing results requires only that the actor have a sense of fear about the harmful result. Although this view is criticized by the mainstream, that criticism is open to question. First, it sees only the surface requirement of fear and overlooks the core view behind the theory of fearing: the correlation between the possibility of foreseeing and the obligation of result avoidance. Second, equating an individual author's judgment in particular cases with the theory of fearing itself is an overgeneralization. Compared with the modified old negligence theory and the new negligence theory, the core point of the theory of fearing, namely that the possibility of foreseeing a result correlates with the obligation of result avoidance, is a reasonable scheme for judging the criminal negligence of the algorithm designers of intelligent products.
Based on the core view of the theory of fearing, criminal negligence comprises the objective possibility of foreseeing results, the objective obligation of foreseeing results, and the objective obligation of avoiding results. The criterion for the objective possibility of foreseeing results by the algorithm designer of artificial intelligence products is that the algorithm system is likely to make an adverse decision once it encounters a special situation containing abnormal factors, which may lead to negative consequences. The objective obligation of foreseeing results requires the algorithm designer to foresee not only the normal situation without abnormal factors but also the special situation accompanied by abnormal factors, in which the designed algorithm system may make adverse decisions. The content of the objective obligation of avoiding results of the algorithm designer is as follows: when designing algorithms, avoid implanting values that the public generally opposes or disapproves of; check the quality of the data fed to the algorithm system at design time, so as to prevent, to the greatest extent possible, the risk of defective data entering the algorithm's machine-learning training as "garbage in"; and promptly inform the product producer that the algorithm may face abnormal conditions.