Journal Articles
4,618 articles found
An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
1
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper firstly focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs that are included in training the classifiers.
Keywords: malware classification; machine learning; adversarial examples; evasion attack; cybersecurity
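A minimal sketch of the similarity-based filtering idea described above, using the open-source py-tlsh bindings. The hash list, distance threshold, and function name are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: flag a PE file whose TLSH digest is close to a known
# malware sample. KNOWN_MALWARE_HASHES and DIST_THRESHOLD are assumed
# placeholders; the paper's defense model and tuning may differ.
import tlsh  # pip install py-tlsh

KNOWN_MALWARE_HASHES: list[str] = []  # TLSH digests of previously seen malware
DIST_THRESHOLD = 100                  # assumed cutoff; tune on validation data

def looks_like_perturbed_malware(pe_bytes: bytes) -> bool:
    digest = tlsh.hash(pe_bytes)
    if not digest or digest == "TNULL":  # input too small or too low-entropy
        return False
    return any(tlsh.diff(digest, known) <= DIST_THRESHOLD
               for known in KNOWN_MALWARE_HASHES)
```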
Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation
2
Authors: 程晓昱, 解晨雪, 刘宇伦, 白瑞雪, 肖南海, 任琰博, 张喜林, 马惠, 蒋崇云 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 3, pp. 112-117 (6 pages)
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is a lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. The DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual efforts. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits exploring novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Keywords: two-dimensional materials; deep learning; data augmentation; generative adversarial networks
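As a rough illustration of the augmentation recipe above (not the authors' code), one might mix GAN-synthesized image/mask pairs into a DeepLabv3+ training set. The torchvision model choice, stand-in tensors, and hyperparameters below are assumptions:

```python
# Hedged sketch: train a segmentation net on real + synthetic flake images.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in datasets of (image, mask) pairs; replace with real microscope
# images and StyleGAN3-generated images with masks.
real_ds = TensorDataset(torch.randn(8, 3, 256, 256),
                        torch.zeros(8, 256, 256, dtype=torch.long))
synthetic_ds = TensorDataset(torch.randn(8, 3, 256, 256),
                             torch.zeros(8, 256, 256, dtype=torch.long))
loader = DataLoader(ConcatDataset([real_ds, synthetic_ds]),
                    batch_size=4, shuffle=True)

model = deeplabv3_resnet50(weights=None, num_classes=2)  # flake vs. background
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

for images, masks in loader:
    opt.zero_grad()
    out = model(images)["out"]  # torchvision segmentation models return a dict
    criterion(out, masks).backward()
    opt.step()
```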
Quantum generative adversarial networks based on a readout error mitigation method with fault tolerant mechanism
3
Authors: 赵润盛, 马鸿洋, 程涛, 王爽, 范兴奎 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 4, pp. 285-295 (11 pages)
Readout errors caused by measurement noise are a significant source of errors in quantum circuits, which severely affect the output results and are an urgent problem to be solved in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate frequent readout errors in quantum generative adversarial networks (QGAN) for image generation, which simplifies the response matrix structure by averaging the qubits for each random bit-flip in advance, successfully addressing the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset under the BFA-based method; the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities, respectively. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
Keywords: readout errors; quantum generative adversarial networks; bit-flip averaging method; fault-tolerant mechanisms
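The KL-divergence evaluation mentioned above can be reproduced generically; a small sketch with an assumed 16-bin histogramming of pixel values (the paper's exact binning is not given):

```python
# Hedged sketch: KL(P || Q) between histograms of real and generated images.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    p = p / p.sum()  # normalize counts to probability distributions
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

real_hist = np.histogram(np.random.rand(1000), bins=16)[0].astype(float)
gen_hist = np.histogram(np.random.rand(1000), bins=16)[0].astype(float)
print(kl_divergence(real_hist, gen_hist))
```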
Sparse Adversarial Learning for FDIA Attack Sample Generation in Distributed Smart Grid
4
Authors: Fengyong Li, Weicheng Shen, Zhongqin Bi, Xiangjing Su 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 5, pp. 2095-2115 (21 pages)
False data injection attack (FDIA) is an attack that affects the stability of a grid cyber-physical system (GCPS) by evading the bad-data detection mechanism. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient sample generative adversarial model of FDIA attacks at the public-private network edge, which can effectively bypass the detection model to threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, which is used to learn the distribution characteristics between normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structure of attack data characteristics and normal data characteristics, which can connect and calculate FDIA data with normal characteristics. Finally, efficient FDIA attack samples can be sequentially generated through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the attack samples generated by the proposed model outperform those of state-of-the-art models in terms of attack strength, robustness, and covert capability.
Keywords: distributed smart grid; FDIA; adversarial learning; power public-private network edge
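A generic sketch of the adversarial training loop such a generator relies on; plain fully connected stand-ins replace the paper's ResNet+FCN design, and all sizes and optimizers are assumptions:

```python
# Hedged sketch: alternating generator/discriminator updates so generated
# attack samples come to resemble normal grid measurements.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 14))  # 14 bus measurements
D = nn.Sequential(nn.Linear(14, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 14)  # stand-in for normal measurement data
for _ in range(100):
    fake = G(torch.randn(128, 32))
    # discriminator step: separate real measurements from generated ones
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: make generated samples look like normal data
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```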
Boosting Adversarial Training with Learnable Distribution
5
Authors: Kai Chen, Jinwei Wang, James Msughter Adeke, Guangjie Liu, Yuewei Dai 《Computers, Materials & Continua》 SCIE EI 2024, No. 3, pp. 3247-3265 (19 pages)
In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods to defend against adversarial attacks. However, the difference in the feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training. This paper proposes a learnable distribution adversarial training method, aiming to construct the same distribution for training data utilizing the Gaussian mixture model. The distribution centroid is built to classify samples and constrain the distribution of the sample features. The natural and adversarial examples are pushed to the same distribution centroid to improve the accuracy and robustness of the model. The proposed method generates adversarial examples to close the distribution gap between the natural and adversarial examples through an attack algorithm explicitly designed for adversarial training. This algorithm gradually increases the accuracy and robustness of the model by scaling the perturbation. Finally, the proposed method outputs the predicted labels and the distance between the sample and the distribution centroid. The distribution characteristics of the samples can be utilized to detect adversarial examples that could potentially evade the model's defense. The effectiveness of the proposed method is demonstrated through comprehensive experiments.
Keywords: adversarial training; feature space; learnable distribution; distribution centroid
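A minimal sketch of the distribution-centroid idea above: a learnable per-class centroid that both natural and adversarial features are pulled toward. The loss form and dimensions are assumptions:

```python
# Hedged sketch: pull natural and adversarial features of a class toward a
# shared learnable centroid; the paper's Gaussian-mixture machinery is not
# reproduced here.
import torch
import torch.nn as nn

class CentroidLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # squared distance between each feature and its class centroid
        return ((feats - self.centroids[labels]) ** 2).sum(dim=1).mean()

loss_fn = CentroidLoss(num_classes=10, feat_dim=128)
feats_nat = torch.randn(32, 128)   # features of natural examples
feats_adv = torch.randn(32, 128)   # features of adversarial examples
labels = torch.randint(0, 10, (32,))
loss = loss_fn(feats_nat, labels) + loss_fn(feats_adv, labels)
```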
Conditional Generative Adversarial Network Enabled Localized Stress Recovery of Periodic Composites
6
Authors: Chengkan Xu, Xiaofei Wang, Yixuan Li, Guannan Wang, He Zhang 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 7, pp. 957-974 (18 pages)
Structural damage in heterogeneous materials typically originates from microstructures where stress concentration occurs. Therefore, evaluating the magnitude and location of localized stress distributions within microstructures under external loading is crucial. Repeating unit cells (RUCs) are commonly used to represent microstructural details and homogenize the effective response of composites. This work develops a machine learning-based micromechanics tool to accurately predict the stress distributions of extracted RUCs. The locally exact homogenization theory efficiently generates the microstructural stresses of RUCs with a wide range of parameters, including volume fraction, fiber/matrix property ratio, fiber shapes, and loading direction. Subsequently, the conditional generative adversarial network (cGAN) is employed and constructed as a surrogate model to establish the statistical correlation between these parameters and the corresponding localized stresses. The stresses predicted by cGAN are validated against the remaining true data not used for training, showing good agreement. This work demonstrates that the cGAN-based micromechanics tool effectively captures the local responses of composite RUCs. It can be used for predicting potential crack initiation starting from microstructures and evaluating the effective behavior of periodic composites.
Keywords: periodic composites; localized stress recovery; conditional generative adversarial network
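A loose sketch of conditioning a generator on the named microstructure parameters (volume fraction, property ratio, fiber shape, loading direction); the architecture and the four-parameter condition vector are assumptions:

```python
# Hedged sketch: a cGAN-style generator mapping (condition, noise) to a
# flattened local stress field over the RUC. Layer sizes are invented.
import torch
import torch.nn as nn

class ConditionalStressGenerator(nn.Module):
    def __init__(self, cond_dim=4, noise_dim=64, out_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, out_pixels),  # flattened stress field
        )

    def forward(self, cond, noise):
        return self.net(torch.cat([cond, noise], dim=1))

g = ConditionalStressGenerator()
# assumed condition: [volume fraction, stiffness ratio, fiber shape id, load direction]
cond = torch.tensor([[0.45, 10.0, 1.0, 0.0]])
stress_field = g(cond, torch.randn(1, 64)).reshape(64, 64)
```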
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
7
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said 《Computers, Materials & Continua》 SCIE EI 2023, No. 9, pp. 3859-3876 (18 pages)
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) structures. The detector model integrates the classification results of different models as its input and calculates its final output based on a machine-learning voting algorithm. By analyzing the discrepancy between predictions made by the model on original examples and denoised examples, AEs are detected effectively. This technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. The proposed approach demonstrates excellent detection performance against mainstream AE attacks. Experimental results show outstanding detection performance against well-known AE attacks, including Fast Gradient Sign Method (FGSM), Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while only reducing the accuracy of clean examples by 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
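A minimal sketch of the detection principle above: compare predictions on an input and a denoised copy and flag disagreement. A single Gaussian blur stands in for the paper's ensemble of denoisers and voting scheme:

```python
# Hedged sketch: prediction-discrepancy AE detection with one denoiser.
import torch
import torchvision.transforms.functional as TF

def flags_as_adversarial(model, image: torch.Tensor) -> bool:
    """image: (3, H, W) tensor in [0, 1]."""
    model.eval()
    with torch.no_grad():
        pred = model(image.unsqueeze(0)).argmax(dim=1)
        denoised = TF.gaussian_blur(image, kernel_size=5)
        pred_dn = model(denoised.unsqueeze(0)).argmax(dim=1)
    return bool((pred != pred_dn).item())  # disagreement suggests an AE

# usage with a stand-in classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
print(flags_as_adversarial(model, torch.rand(3, 32, 32)))
```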
Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks
8
Authors: Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin 《Computers, Materials & Continua》 SCIE EI 2023, No. 2, pp. 2541-2555 (15 pages)
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks by making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle be misclassified as an AK47. Three primary types of defense approaches exist which can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), that tackles the defense against adversarial attacks by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first is CDGAN's Sub-Resolution GAN, which takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image. The second is CDGAN's Super-Resolution GAN, which takes the output of CDGAN's Sub-Resolution GAN and undersamples it to generate a higher-resolution image that removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both of these GANs are trained independently. CDGAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples. Hence, this GAN downscales the image while removing adversarial attack noise. CDGAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as output images. Because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art, with a mean accuracy of 33.67, while using minimal compute resources in training.
Keywords: adversarial attacks; GAN-based adversarial defense; image classification models; adversarial defense
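A structural sketch of the chained, classifier-agnostic design described above; all three modules are stand-ins:

```python
# Hedged sketch: chain a downscaling "neutralizer" and an upscaling one in
# front of an unmodified classifier, mirroring the described modularity.
import torch
import torch.nn as nn

class CDGANDefense(nn.Module):
    def __init__(self, sub_res_gan: nn.Module, super_res_gan: nn.Module,
                 classifier: nn.Module):
        super().__init__()
        self.sub = sub_res_gan    # downsamples, removing perturbations
        self.sup = super_res_gan  # upsamples, removing residual noise
        self.clf = classifier     # existing model; no retraining needed

    def forward(self, x):
        return self.clf(self.sup(self.sub(x)))

# usage with identity stand-ins for the two trained generators
defense = CDGANDefense(nn.Identity(), nn.Identity(), nn.Identity())
out = defense(torch.rand(1, 3, 32, 32))
```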
Dual Attribute Adversarial Camouflage toward camouflaged object detection [Cited by 1]
9
Authors: Yang Wang, Zheng Fang, Yun-fei Zheng, Zhen Yang, Wen Tong, Tie-yong Cao 《Defence Technology(防务技术)》 SCIE EI CAS CSCD 2023, No. 4, pp. 166-175 (10 pages)
Object detectors can precisely detect camouflaged objects beyond human perception. Investigations reveal that CNN-based (Convolutional Neural Network) detectors are vulnerable to adversarial attacks. Some works can fool detectors by crafting adversarial camouflage attached to the object, leading to wrong predictions. It is hard for military operations to utilize the existing adversarial camouflage due to its conspicuous appearance. Motivated by this, this paper proposes the Dual Attribute Adversarial Camouflage (DAAC) for evading detection by both detectors and humans. Generating DAAC includes two steps: (1) extracting features from a specific type of scene to generate individual-soldier digital camouflage; (2) attaching an adversarial patch, constrained by the scene features, to the individual-soldier digital camouflage to generate the adversarial attribute of DAAC. The visual effects of the individual-soldier digital camouflage and the adversarial patch are improved after integrating the scene features. Experiment results show that objects camouflaged by DAAC are well integrated with the background and achieve visual concealment while remaining effective in fooling object detectors, thus evading detection by both detectors and humans in the digital domain. This work can serve as a reference for crafting adversarial camouflage in the physical world.
Keywords: adversarial camouflage; digital camouflage generation; visual concealment; object detection; adversarial patch
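The adversarial-patch half of DAAC follows the usual patch-optimization recipe; a toy sketch with a stand-in detector score (the paper's detector, patch placement, and scene-feature constraint are not reproduced):

```python
# Hedged sketch: optimize a patch to lower a detector's confidence score.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1))  # stand-in score
img = torch.rand(1, 3, 224, 224)          # stand-in scene image
patch = torch.rand(3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(50):
    x = img.clone()
    x[:, :, 100:150, 100:150] = patch.clamp(0, 1)  # paste patch onto the scene
    loss = detector(x).mean()                      # minimize detection confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```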
A generative adversarial network-based unified model integrating bias correction and downscaling for global SST
10
Authors: Shijin Yuan, Xin Feng, Bin Mu, Bo Qin, Xin Wang, Yuxuan Chen 《Atmospheric and Oceanic Science Letters》 CSCD 2024, No. 1, pp. 45-52 (8 pages)
This paper proposes a generative adversarial network-based unified model for bias correction and downscaling of global sea surface temperature (SST). The model's generator first corrects numerical-model predictions with a bias-correction module, then raises the resolution of the corrected data step by step with a reusable shared downscaling module. The discriminator judges the quality of the bias-corrected and downscaled results, and adversarial training proceeds against this standard; the adversarial loss function also contains a physics-guided dynamical penalty term to improve model performance. Based on SST predictions from the 1°-resolution GFDL SPEAR model, with Remote Sensing Systems observations as ground truth, validation experiments were conducted on monthly-scale ENSO and IOD events and daily-scale marine heatwave events: the model raises the resolution to 0.0625°×0.0625° while reducing prediction error by about 90.3%, exceeding the resolution limit of the observational data, and achieves a structural similarity to observations as high as 96.46%.
Keywords: bias correction; downscaling; sea surface temperature; generative adversarial network; physics-guided neural network
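A sketch of a physics-guided penalty added to the generator's adversarial loss, as the abstract describes; the gradient-matching term used here is an assumed stand-in for the paper's dynamical penalty:

```python
# Hedged sketch: adversarial loss plus a simple physics-flavored term that
# matches spatial gradients of the predicted and observed SST fields.
import torch

def generator_loss(d_fake, sst_pred, sst_obs, lam=0.1):
    adv = -torch.log(torch.sigmoid(d_fake) + 1e-8).mean()  # non-saturating GAN loss
    grad_pred = torch.diff(sst_pred, dim=-1)  # zonal SST gradient
    grad_obs = torch.diff(sst_obs, dim=-1)
    phys = ((grad_pred - grad_obs) ** 2).mean()
    return adv + lam * phys

loss = generator_loss(torch.randn(4, 1), torch.rand(4, 64, 64), torch.rand(4, 64, 64))
```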
Automated Video Generation of Moving Digits from Text Using Deep Deconvolutional Generative Adversarial Network
11
Authors: Anwar Ullah, Xinguo Yu, Muhammad Numan 《Computers, Materials & Continua》 SCIE EI 2023, No. 11, pp. 2359-2383 (25 pages)
Generating realistic and synthetic video from text is a highly challenging task due to the multitude of issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as a Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as a Discriminator (D) to ensure temporal coherence between adjacent frames. The proposed research involves several steps. First, the input text is fed into a Long Short-Term Memory (LSTM)-based text encoder and then smoothed using Conditioning Augmentation (CA) techniques to enhance the effectiveness of the Generator (G). Next, video frames are generated using a DDNN by incorporating the enhanced text and random noise, and a DCNN is modified to act as a Discriminator (D), effectively distinguishing between generated and real videos. This research evaluates the quality of the generated videos using standard metrics like Inception Score (IS), Fréchet Inception Distance (FID), Fréchet Inception Distance for video (FID2vid), and Generative Adversarial Metric (GAM), along with a human study based on realism, coherence, and relevance. By conducting experiments on Single-Digit Bouncing MNIST GIFs (SBMG), Two-Digit Bouncing MNIST GIFs (TBMG), and a custom dataset of essential mathematics videos with related text, this research demonstrates significant improvements in both metrics and human study results, confirming the effectiveness of DD-GAN. This research also took on the exciting challenge of generating preschool math videos from text, handling complex structures, digits, and symbols, and achieved successful results. The proposed research demonstrates promising results for generating coherent videos from textual input.
Keywords: Generative Adversarial Network (GAN); deconvolutional neural network; convolutional neural network; Inception Score (IS); temporal coherence; Fréchet Inception Distance (FID); Generative Adversarial Metric (GAM)
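A minimal sketch of the text-encoding and Conditioning Augmentation (CA) step: the caption is encoded by an LSTM and the condition vector is sampled from a Gaussian parameterized by that encoding. Sizes and vocabulary are assumptions:

```python
# Hedged sketch: LSTM text encoder with conditioning augmentation via the
# reparameterization trick.
import torch
import torch.nn as nn

class TextEncoderCA(nn.Module):
    def __init__(self, vocab=1000, emb=128, hidden=256, cond=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, cond)
        self.logvar = nn.Linear(hidden, cond)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        eps = torch.randn_like(mu)
        return mu + eps * (0.5 * logvar).exp()  # smoothed condition vector

enc = TextEncoderCA()
cond = enc(torch.randint(0, 1000, (4, 12)))  # 4 captions of 12 tokens each
```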
VeriFace: Defending against Adversarial Attacks in Face Verification Systems
12
Authors: Awny Sayed, Sohair Kinlany, Alaa Zaki, Ahmed Mahfouz 《Computers, Materials & Continua》 SCIE EI 2023, No. 9, pp. 3151-3166 (16 pages)
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image to ensure the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.
Keywords: adversarial attacks; face verification; adversarial detection; perturbation removal
Defending Adversarial Examples by a Clipped Residual U-Net Model
13
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 2, pp. 2237-2256 (20 pages)
Deep learning-based systems have succeeded in many computer vision tasks. However, the latest studies indicate that these systems are in danger in the presence of adversarial attacks. These attacks can quickly spoil deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. The adversarial examples are carefully designed by injecting a slight perturbation into the clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet Defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally proved that our approach is better than previous defensive techniques. Our proposed CRU-Net model maps the adversarial image examples into clean images by eliminating the adversarial perturbation. The proposed defensive approach is based on residual and U-Net learning. Many experiments are done on the datasets MNIST and CIFAR10 to prove that our proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We have also reported the similarity (SSIM and PSNR) between the original and restored clean image examples produced by the proposed CRU-Net defense model.
Keywords: adversarial examples; adversarial attacks; defense method; residual learning; U-Net; cGAN; CRU-Net model
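The SSIM/PSNR reporting mentioned above is straightforward to reproduce with scikit-image; the random arrays below stand in for a clean example and its restored counterpart:

```python
# Hedged sketch: similarity metrics between an original clean image and a
# defense-restored image.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

original = np.random.rand(28, 28)                     # stand-in clean image
restored = original + 0.01 * np.random.randn(28, 28)  # stand-in restored image

ssim = structural_similarity(original, restored, data_range=1.0)
psnr = peak_signal_noise_ratio(original, restored, data_range=1.0)
print(f"SSIM={ssim:.3f}, PSNR={psnr:.1f} dB")
```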
Adversarial Attack-Based Robustness Evaluation for Trustworthy AI
14
Authors: Eungyu Lee, Yongsoo Lee, Taejin Lee 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 11, pp. 1919-1935 (17 pages)
Artificial Intelligence (AI) technology has been extensively researched in various fields, including the field of malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model's robustness level cannot be evaluated by traditional evaluation indicators such as accuracy and recall. Additional evaluation indicators are necessary to evaluate the robustness of AI models against adversarial attacks. In this paper, a Sophisticated Adversarial Robustness Score (SARS) is proposed for AI model robustness evaluation. SARS uses various factors, in addition to the ratio of perturbed features and the size of the perturbation, to evaluate robustness accurately in the evaluation process. This evaluation indicator reflects aspects that are difficult to evaluate using traditional evaluation indicators. Moreover, the level of robustness can be evaluated by considering the difficulty of generating adversarial samples through adversarial attacks. This paper proposes using SARS, calculated based on adversarial attacks, to identify data groups with robustness vulnerabilities and improve robustness through adversarial training. Through SARS, it is possible to evaluate the level of robustness, which can help developers identify areas for improvement. To validate the proposed method, experiments were conducted using a malware dataset. Through adversarial training, it was confirmed that SARS increased by 70.59% and the recall reduction rate improved by 64.96%. Through SARS, it is possible to evaluate whether an AI model is vulnerable to adversarial attacks and to identify vulnerable data types. In addition, it is expected that improved models can be achieved by improving resistance to adversarial attacks via methods such as adversarial training.
Keywords: AI; robustness; adversarial attack; adversarial robustness; robustness indicator; trustworthy AI
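The abstract does not give the SARS formula, so the sketch below is not it; it merely illustrates how the two named factors (ratio of perturbed features, perturbation size) could be combined into a single score, with invented weighting and normalization:

```python
# Hedged sketch, NOT the paper's SARS: score robustness from the fraction of
# perturbed features and the relative perturbation magnitude needed to flip
# the model. Larger required perturbations => higher score.
import numpy as np

def toy_robustness_score(x: np.ndarray, x_adv: np.ndarray) -> float:
    delta = x_adv - x
    perturbed_ratio = float(np.mean(np.abs(delta) > 1e-6))
    perturbation_size = float(np.linalg.norm(delta) / (np.linalg.norm(x) + 1e-12))
    return 0.5 * perturbed_ratio + 0.5 * perturbation_size  # assumed equal weights

x = np.random.rand(100)
x_adv = x + 0.05 * (np.random.rand(100) > 0.7)
print(toy_robustness_score(x, x_adv))
```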
An Optimised Defensive Technique to Recognize Adversarial Iris Images Using Curvelet Transform
15
Authors: K. Meenakshi, G. Maragatham 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 1, pp. 627-643 (17 pages)
Deep learning is one of the most popular computer science techniques, with applications in natural language processing, image processing, pattern identification, and various other fields. Despite the success of these deep learning algorithms in multiple scenarios, such as spam detection, malware detection, object detection and tracking, face recognition, and automatic driving, these algorithms and their associated training data are rather vulnerable to numerous security threats. These threats ultimately result in significant performance degradation. Moreover, supervised learning models are affected by manipulated data known as adversarial examples, which are images with a particular level of noise that is invisible to humans. Adversarial inputs are introduced to purposefully confuse a neural network, restricting its use in sensitive application areas such as biometrics. In this paper, an optimized defending approach is proposed to recognize adversarial iris examples efficiently. The Curvelet Transform Denoising method is used in this defense strategy, which examines every sub-band of the adversarial images and reproduces the image that has been changed by the attacker. The salient iris features are retrieved from the reconstructed iris image using a pretrained Convolutional Neural Network model (VGG 16), followed by multiclass classification. The classification is performed by a Support Vector Machine tuned with the Particle Swarm Optimization method (PSO-SVM). The proposed system is tested when classifying adversarial iris images affected by various adversarial attacks such as FGSM, iGSM, and DeepFool. Experimental results on a benchmark iris dataset, namely IITD, produce excellent outcomes with the highest accuracy of 95.8% on average.
Keywords: adversarial attacks; biometrics; curvelet transform; CNN; particle swarm optimization; adversarial iris recognition
Instance Reweighting Adversarial Training Based on Confused Label
16
Authors: Zhicong Qiu, Xianmin Wang, Huawei Ma, Songcao Hou, Jing Li, Zuoyong Li 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 8, pp. 1243-1256 (14 pages)
Reweighting adversarial examples during training plays an essential role in improving the robustness of neural networks, which lies in the fact that examples closer to the decision boundaries are much more vulnerable to being attacked and should be given larger weights. The probability margin (PM) method is a promising approach to continuously and path-independently measuring such closeness between the example and the decision boundary. However, the performance of PM is limited because PM fails to effectively distinguish examples having only one misclassified category from ones with multiple misclassified categories, where the latter are closer to multi-classification decision boundaries and, in our observation, are more critical. To tackle this problem, this paper proposes an improved PM criterion, called confused-label-based PM (CL-PM), to measure the closeness mentioned above and reweight adversarial examples during training. Specifically, a confused label (CL) is defined as a label whose prediction probability is greater than that of the ground-truth label for a given adversarial example. Instead of considering the discrepancy between the probability of the true label and the probability of the most misclassified label as the PM method does, we evaluate the closeness by accumulating the probability differences of all the CLs and the ground-truth label. CL-PM shares a negative correlation with data vulnerability: data with larger/smaller CL-PM are safer/riskier and should have a smaller/larger weight. Experiments demonstrated that CL-PM is more reliable in indicating the closeness regarding multiple misclassified categories, and reweighting adversarial training based on CL-PM outperformed state-of-the-art counterparts.
Keywords: reweighting adversarial training; adversarial example; boundary closeness; confused label
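Following the definition in the abstract, CL-PM accumulates the probability differences between every confused label and the ground-truth label. A sketch under that reading; the sign convention (larger score = safer) is inferred from the stated negative correlation with vulnerability:

```python
# Hedged sketch of CL-PM: confused labels are classes whose predicted
# probability exceeds the ground-truth label's probability.
import torch

def cl_pm(probs: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """probs: (batch, classes) softmax outputs; y: (batch,) true labels."""
    p_true = probs.gather(1, y.unsqueeze(1))   # (batch, 1) true-label probability
    diff = probs - p_true                      # per-class gap to the true label
    confused = diff > 0                        # mask of confused labels
    return -(diff * confused).sum(dim=1)       # larger => safer (assumed sign)

probs = torch.softmax(torch.randn(4, 10), dim=1)
y = torch.randint(0, 10, (4,))
print(cl_pm(probs, y))
```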
Black Box Adversarial Defense Based on Image Denoising and Pix2Pix
17
Authors: Zhenyong Rui, Xiugang Gong 《Journal of Computer and Communications》 2023, No. 12, pp. 14-30 (17 pages)
Deep Neural Networks (DNN) are widely utilized due to their outstanding performance, but their susceptibility to adversarial attacks poses significant security risks, making adversarial defense research crucial in the field of AI security. Currently, robustness defense techniques for models often rely on adversarial training, a method that tends to only defend against specific types of attacks and lacks strong generalization. In response to this challenge, this paper proposes a black-box defense method based on Image Denoising and Pix2Pix (IDP) technology. This method does not require prior knowledge of the specific attack type and eliminates the need for cumbersome adversarial training. When making predictions on unknown samples, the IDP method first applies denoising processing, then inputs the processed image into a trained Pix2Pix model for image transformation. Finally, the image generated by Pix2Pix is input into the classification model for prediction. This versatile defense approach demonstrates excellent defensive performance against common attack methods such as FGSM, I-FGSM, DeepFool, and UPSET, showcasing high flexibility and transferability. In summary, the IDP method introduces new perspectives and possibilities for adversarial sample defense, alleviating the limitations of traditional adversarial training methods and enhancing the overall robustness of models.
Keywords: Deep Neural Networks (DNN); adversarial attack; adversarial training; Fourier transform; robust defense
Remaining Useful Life Prediction With Partial Sensor Malfunctions Using Deep Adversarial Networks [Cited by 1]
18
Authors: Xiang Li, Yixiao Xu, Naipeng Li, Bin Yang, Yaguo Lei 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2023, No. 1, pp. 121-134 (14 pages)
In recent years, intelligent data-driven prognostic methods have been successfully developed, and good machinery health assessment performance has been achieved through explorations of data from multiple sensors. However, existing data-fusion prognostic approaches generally rely on the data availability of all sensors and are vulnerable to potential sensor malfunctions, which are likely to occur in real industries, especially for machines in harsh operating environments. In this paper, a deep learning-based remaining useful life (RUL) prediction method is proposed to address the sensor malfunction problem. A global feature extraction scheme is adopted to fully exploit the information of different sensors. Adversarial learning is further introduced to extract generalized sensor-invariant features. Through explorations of both global and shared features, promising and robust RUL prediction performance can be achieved by the proposed method in testing scenarios with sensor malfunctions. The experimental results suggest the proposed approach is well suited for real industrial applications.
Keywords: adversarial training; data fusion; deep learning; remaining useful life (RUL) prediction; sensor malfunction
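One common way to realize sensor-invariant features via adversarial learning is a gradient-reversal layer trained against a sensor-ID discriminator; a minimal sketch of that mechanism (the paper's exact adversarial scheme may differ):

```python
# Hedged sketch: a gradient-reversal layer (GRL). During the backward pass
# the gradient flips sign, so the feature extractor learns features the
# sensor-ID discriminator cannot use.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # reversed gradient; no grad for lam

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

features = torch.randn(8, 32, requires_grad=True)
reversed_features = grad_reverse(features, lam=0.5)
reversed_features.sum().backward()
print(features.grad[0, 0])  # negated, scaled gradient
```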
Conditional Generative Adversarial Network Approach for Autism Prediction [Cited by 1]
19
Authors: K. Chola Raja, S. Kannimuthu 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 1, pp. 741-755 (15 pages)
Autism Spectrum Disorder (ASD) requires a precise diagnosis in order to be managed and rehabilitated. Non-invasive neuroimaging methods provide disease markers that can be used to help diagnose ASD. The majority of available techniques in the literature use functional magnetic resonance imaging (fMRI) to detect ASD with a small dataset, resulting in high accuracy but low generality. Traditional supervised machine learning classification algorithms, such as support vector machines, function well with unstructured and semi-structured data such as text, images, and videos, but their performance and robustness are restricted by the size of the accompanying training data. Deep learning, on the other hand, creates an artificial neural network that can learn and make intelligent judgments on its own by layering algorithms. It takes advantage of plentiful low-cost computing, and many approaches focused on very big datasets are concerned with creating far larger and more sophisticated neural networks. Generative modelling, also known as Generative Adversarial Networks (GANs), is an unsupervised deep learning task that entails automatically discovering and learning regularities or patterns in input data so that the model can generate or output new examples that could have been drawn from the original dataset. GANs are an exciting and rapidly changing field that delivers on the promise of generative models in their ability to generate realistic examples across a range of problem domains, most notably in image-to-image translation tasks, and they haven't been explored much for Autism Spectrum Disorder prediction in the past. In this paper, we present a novel conditional generative adversarial network, or cGAN for short, a form of GAN that uses a generator model to conditionally generate images. In terms of prediction and accuracy, it outperforms the standard GAN. The proposed model is 74% more accurate than traditional methods and takes only around 10 minutes for training, even with a huge dataset.
Keywords: autism; classification; attributes; imaging; adversarial; fMRI; functional graph neural networks
A Credit Card Fraud Detection Model Based on Multi-Feature Fusion and Generative Adversarial Network [Cited by 1]
20
Authors: Yalong Xie, Aiping Li, Biyin Hu, Liqun Gao, Hongkui Tu 《Computers, Materials & Continua》 SCIE EI 2023, No. 9, pp. 2707-2726 (20 pages)
Credit Card Fraud Detection (CCFD) is an essential technology for banking institutions to control fraud risks and safeguard their reputation. Class imbalance and insufficient representation of feature data relating to credit card transactions are two prevalent issues in the current study field of CCFD, which significantly impact classification models' performance. To address these issues, this research proposes a novel CCFD model based on Multi-feature Fusion and Generative Adversarial Networks (MFGAN). The MFGAN model consists of two modules: a multi-feature fusion module for integrating static and dynamic behavior data of cardholders into a unified high-dimensional feature space, and a balance module based on the generative adversarial network to decrease the class imbalance ratio. The effectiveness of the MFGAN model is validated on two actual credit card datasets. The impacts of different class balance ratios on the performance of four resampling models are analyzed, and the contribution of the two modules to the performance of the MFGAN model is investigated via ablation experiments. Experimental results demonstrate that the proposed model does better than state-of-the-art models in terms of recall, F1, and Area Under the Curve (AUC) metrics, which means that the MFGAN model can help banks find more fraudulent transactions and reduce fraud losses.
Keywords: credit card fraud detection; imbalanced classification; feature fusion; generative adversarial networks; anti-fraud systems