
An Adversarial Attack System for Face Recognition

Abstract: Deep neural networks (DNNs) are widely adopted in daily life, and their security problems have drawn attention from both academic researchers and industrial engineers. Many related works show that DNNs are vulnerable to adversarial examples, which are generated by adding subtle perturbations to original images in both the digital domain and the physical domain. As one of the most common applications of DNNs, face recognition systems may cause serious consequences if they are attacked by adversarial examples. In this paper, we implement an adversarial attack system for face recognition in both the digital domain, where it generates adversarial face images to fool the recognition system, and the physical domain, where it generates customized glasses that fool the system when a person wears them. Experiments show that our system attacks face recognition systems effectively. Furthermore, our system can misguide the recognition system into identifying a person wearing the customized glasses as a chosen target. We hope this research helps raise attention to artificial intelligence security and promotes the building of robust recognition systems.
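The page gives only the abstract, not the paper's attack algorithm. As a point of reference, the sketch below shows the fast gradient sign method (FGSM, Goodfellow et al., 2015), one common way to generate the kind of subtle perturbation the abstract describes; the targeted variant mirrors the abstract's goal of misguiding the recognizer toward a chosen identity. FaceNetStub, the 112x112 input size, and all labels are hypothetical placeholders, not the authors' system.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceNetStub(nn.Module):
    """Hypothetical stand-in for a face identity classifier (not the paper's model)."""
    def __init__(self, num_identities: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, num_identities))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 8 / 255, targeted: bool = False) -> torch.Tensor:
    """Perturb `image` within an L-infinity ball of radius `epsilon`.

    Untargeted: step up the loss gradient for the true label (cause any misclassification).
    Targeted: step down the loss gradient for the target label, pushing the
    prediction toward that identity (analogous to the glasses attack's goal).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    step = -epsilon if targeted else epsilon
    adv = image + step * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

if __name__ == "__main__":
    model = FaceNetStub()
    x = torch.rand(1, 3, 112, 112)   # dummy "face" image in [0, 1]
    y_true = torch.tensor([3])       # its true identity
    y_target = torch.tensor([7])     # identity the attacker wants to impersonate
    x_adv = fgsm_attack(model, x, y_true)                   # untargeted attack
    x_imp = fgsm_attack(model, x, y_target, targeted=True)  # targeted attack
    print(float((x_adv - x).abs().max()))  # perturbation magnitude <= epsilon

Here epsilon bounds the per-pixel change, which is why the perturbation can remain nearly imperceptible while still flipping the model's prediction; published attacks, including physical-domain ones like adversarial glasses, typically use stronger iterative optimization under similar constraints.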
Source: Journal on Artificial Intelligence, 2021, No. 1, pp. 1-8.
Funding: This work is supported in part by the National Natural Science Foundation of China under Grants 61902082 and U1636215, and by the Guangdong Province Key Research and Development Plan under Grant 2019B010136003.