
Targeted adversarial attack in modulation recognition

Cited by: 1
Abstract: Deep learning algorithms offer strong feature-expression ability, automatic feature extraction, and end-to-end learning, so a growing number of researchers have applied them to communication-signal recognition. However, the discovery of adversarial examples exposes deep learning models to substantial risk and seriously affects current modulation recognition tasks. From the attacker's perspective, this paper adds adversarial perturbations to the transmitted communication signal to verify and evaluate the attack performance of targeted adversarial examples against modulation recognition models. Experiments show that the targeted attacks considered can effectively reduce the model's recognition accuracy, and the proposed logit indicator provides a finer-grained measure of how targeted an attack is.
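The abstract names the general technique (targeted adversarial examples against a signal classifier) and a logit-based indicator of targetedness, but not the exact attack or model. As a minimal sketch only, the snippet below runs a generic one-step targeted attack (targeted FGSM) against a toy linear softmax classifier standing in for a modulation-recognition network; `W`, `b`, the signal `x`, the class count, and `eps` are all hypothetical stand-ins, and the "logit gap" at the end is an illustrative indicator in the spirit of the abstract, not the paper's exact metric.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_fgsm(x, W, b, target, eps):
    """One signed-gradient step that lowers the cross-entropy loss
    toward the attacker-chosen target class (targeted FGSM)."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad = W.T @ (p - onehot)       # d(loss)/dx for the target label
    return x - eps * np.sign(grad)  # descend the loss, pushing toward target

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 128))       # 4 hypothetical modulation classes
b = np.zeros(4)
x = rng.normal(size=128)            # hypothetical 128-sample signal

target = 2
x_adv = targeted_fgsm(x, W, b, target, eps=0.1)

p_clean = softmax(W @ x + b)
p_adv = softmax(W @ x_adv + b)

# An illustrative logit-style targetedness indicator: the gap between the
# target-class logit and the best competing logit after the attack.
z_adv = W @ x_adv + b
gap = z_adv[target] - np.max(np.delete(z_adv, target))
```

The single signed-gradient step keeps the perturbation bounded in the infinity norm by `eps`, which is the usual way such attacks limit how visible the distortion is on the transmitted waveform.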
Authors: ZHAO Haojun, LIN Yun, BAO Zhida, SHI Jibo, GE Bin (College of Information and Communication Engineering, Harbin Engineering University, Harbin, Heilongjiang 150001, China; Key Laboratory of Advanced Marine Communication and Information Technology, Harbin Engineering University, Harbin, Heilongjiang 150001, China; College of Mathematical Sciences, Harbin Engineering University, Harbin, Heilongjiang 150001, China)
Source: Journal of Terahertz Science and Electronic Information Technology (《太赫兹科学与电子信息学报》), 2022, No. 8, pp. 836-842 (7 pages)
Funding: National Natural Science Foundation of China General Program (61771154); Fundamental Research Funds for the Central Universities (3072020CF0813)
Keywords: Convolutional Neural Network (CNN); modulation recognition; adversarial examples; wireless security