Abstract
Convolutional neural networks are deep neural networks with powerful feature extraction capabilities and have been widely applied in many fields. However, research shows that convolutional neural networks are vulnerable to adversarial examples. Unlike traditional methods that generate adversarial perturbations through gradient iteration, this paper proposes a color-model-based method for generating semantic adversarial examples. Exploiting the shape preference that both human vision and convolutional models exhibit in object recognition, the method generates adversarial examples by perturbing channels under a color model transformation. The generation process requires no network parameters, loss function, or structural information of the target model; it relies only on the color model transformation and random perturbation of channel information, so the method constitutes a black-box attack.
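The abstract does not specify which color model or perturbation scheme the paper uses, so the following is only an illustrative sketch of the general idea: transform pixels from RGB into a perceptual color model (HSV is assumed here), randomly perturb a channel (hue) while leaving shape and brightness intact, and transform back. The function names `perturb_hue` and `perturb_image` are hypothetical, not from the paper.

```python
import colorsys
import random

def perturb_hue(pixel, max_shift=0.1, rng=None):
    """Shift the hue of an (r, g, b) pixel with components in [0, 1].

    Illustrative color-model perturbation: RGB -> HSV, random hue
    shift (wrapped into [0, 1)), HSV -> RGB. Saturation and value
    are untouched, so brightness and shape cues are preserved.
    """
    rng = rng or random.Random()
    r, g, b = pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.uniform(-max_shift, max_shift)) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)

def perturb_image(pixels, max_shift=0.1, seed=0):
    """Apply the hue perturbation to a flat list of RGB pixels."""
    rng = random.Random(seed)
    return [perturb_hue(p, max_shift, rng) for p in pixels]

# Toy "image" of three pure-color pixels.
img = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
adv = perturb_image(img)
```

Because the perturbation only queries the image itself, never the target model's gradients or parameters, it matches the black-box setting described above; candidate perturbed images would simply be submitted to the target classifier to test for misclassification.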
Authors
WANG Shuya; LIU Qiangchun; CHEN Yunfang; WANG Fujun
(College of Tongda, Nanjing University of Posts and Telecommunications, Yangzhou, Jiangsu 225127, China; College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Source
Computer Engineering and Applications (《计算机工程与应用》)
Indexed in: CSCD; Peking University Core Journals
2021, Issue 15, pp. 163-170 (8 pages)
Keywords
adversarial examples
convolutional neural network
semantic feature
black-box attack