Abstract
Legislation is an important measure for governing the risks of artificial intelligence (AI) and promoting its sustainable development. AI legislation faces three major challenges: defining AI in legal terms, choosing the type and mode of legislation, and determining legal rights and responsibilities. Grounded in China's national conditions, and drawing on a comparison of AI legislation research and practice in the European Union and the United States, China should define AI in law by focusing on its essential nature and future development; build a legislative system that combines promotional and regulatory legislation and that integrates comprehensive, decentralized, and subsidiary legislation; and apportion AI tort liability reasonably by legislatively denying AI the status of a legal subject and establishing mechanisms such as reversal of the burden of proof.
Author
司伟攀
SI Weipan (Institute of Scientific and Technical Information of China, Beijing 100038)
Source
《全球科技经济瞭望》
2023, No. 7, pp. 6-14 (9 pages)
Global Science, Technology and Economy Outlook
Funding
Innovation Research Fund General Project of the Institute of Scientific and Technical Information of China, "Research on Artificial Intelligence Data Security Risks and Governance" (MS2023-10).