Funding: Supported by the Fundamental Research Funds for the Central Universities (328202204).
Abstract: Federated Learning (FL), a burgeoning technology, has received increasing attention due to its privacy-protection capability. However, its base algorithm, FedAvg, is vulnerable to so-called backdoor attacks. Earlier researchers proposed several robust aggregation methods; unfortunately, owing to the hidden nature of backdoor attacks, many of these aggregation methods cannot defend against them. Moreover, attackers have recently proposed hiding techniques that further improve the stealthiness of backdoor attacks, causing all existing robust aggregation methods to fail. To tackle this threat, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal malicious local model updates submitted by backdoor attackers. Since we observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates, unlike existing aggregation algorithms we focus on the Softmax layer's output, where backdoor attackers find it difficult to hide their malicious behavior. Specifically, like a medical X-ray examination, we probe the collected local model updates by feeding them a matrix as input and collecting their Softmax-layer outputs. We then exclude updates whose outputs are abnormal by clustering. Without requiring any training data on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, when other methods fail to defend against backdoor attacks with no more than 20% malicious clients, our method can tolerate 45% malicious clients in black-box mode and about 30% in Projected Gradient Descent (PGD) mode. Moreover, under adaptive attacks, the results demonstrate that XMAM can still complete the global-model training task even with 40% malicious clients. Finally, we analyze our method's screening complexity and compare its actual screening time with other methods; the results show that XMAM is about 10-10000 times faster than existing methods.
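To make the screening step above concrete, the following is a minimal sketch of the XMAM idea: load each submitted local update into a copy of the global model, feed it one fixed input matrix, collect the Softmax output, and keep only the updates that fall in the majority cluster. The all-ones probe matrix, the choice of two-cluster KMeans, and the helper names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal XMAM-style screening sketch (assumptions: PyTorch models, an all-ones
# probe matrix, and two-cluster KMeans as the clustering step).
import copy

import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def xmam_screen(global_model, local_updates, input_shape=(1, 1, 28, 28)):
    """Return the indices of local updates judged benign.

    global_model  : current global model (torch.nn.Module)
    local_updates : list of client state_dicts
    """
    probe = torch.ones(input_shape)                 # the fixed "X-ray" input matrix
    outputs = []
    for update in local_updates:
        model = copy.deepcopy(global_model)
        model.load_state_dict(update)
        model.eval()
        with torch.no_grad():
            logits = model(probe)
            outputs.append(F.softmax(logits, dim=1).flatten().numpy())

    # Cluster the Softmax outputs and keep the larger cluster as "benign".
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(outputs)
    majority = 1 if (labels == 1).sum() >= (labels == 0).sum() else 0
    return [i for i, lab in enumerate(labels) if lab == majority]
```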
Funding: Supported in part by the National Natural Science Foundation of China (61973219, U21A2019, 61873058) and the Hainan Province Science and Technology Special Fund (ZDYF2022SHFZ105).
Abstract: Secure platooning control plays an important role in enhancing the cooperative driving safety of automated vehicles subject to various security vulnerabilities. This paper focuses on the distributed secure control of automated vehicles affected by replay attacks. A proportional-integral observer (PIO) with predetermined forgetting parameters is first constructed to acquire the dynamical information of the vehicles. Then, a time-varying parameter and two positive scalars are employed to describe the temporal behavior of replay attacks. In light of this scheme and the common properties of Laplacian matrices, the closed-loop system with PIO-based controllers is transformed into a switched, time-delayed one. Furthermore, sufficient conditions are derived to achieve the desired platooning performance from the viewpoint of Lyapunov stability theory. The controller gains are determined analytically by solving certain matrix inequalities that depend only on the maximum and minimum eigenvalues of the communication topologies. Finally, a simulation example is provided to illustrate the effectiveness of the proposed control strategy.
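The abstract does not give the observer equations, so the following is only a generic illustration of a proportional-integral observer with a forgetting term for vehicle i; the symbols (gains L_P, L_I, forgetting factor lambda) are placeholders and not the paper's notation.

```latex
% Generic PIO with a forgetting factor (illustrative notation, not the paper's):
\dot{\hat{x}}_i = A \hat{x}_i + B u_i + L_P\,(y_i - C \hat{x}_i) + L_I\,\eta_i
\qquad
\dot{\eta}_i = -\lambda\,\eta_i + (y_i - C \hat{x}_i)
```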
Funding: Supported in part by the National Natural Science Foundation of China (No. 61701197), in part by the National Key Research and Development Program of China (No. 2021YFA1000500(4)), and in part by the 111 Project (No. B23008).
Abstract: In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, in which the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because vehicles differ in the amount of local data, computing capability, and location, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also suffer Byzantine attacks that corrupt its data. Based on deep reinforcement learning (DRL), however, we can consider these factors comprehensively to eliminate poorly performing vehicles as far as possible and to exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, during AFL aggregation we can focus on vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC that takes into account each vehicle's mobility, time-varying channel conditions, time-varying computational resources, amount of data, transmission channel status, and exposure to Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
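Purely as an illustration of what the selection step might look like, the sketch below scores candidate vehicles with a small policy network over the features listed above and keeps the top-k, masking out vehicles already suspected of Byzantine behavior; the feature layout, network size, and top-k rule are assumptions, and the DRL training loop is not shown.

```python
# Illustrative vehicle-selection step: score vehicles with a small policy network
# and keep the top-k; suspected Byzantine vehicles are masked out. The feature
# layout, network size, and top-k rule are assumptions (training loop omitted).
import torch
import torch.nn as nn

class VehicleScorer(nn.Module):
    def __init__(self, n_features=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def select_vehicles(scorer, features, suspected_byzantine, k=5):
    """features: [n_vehicles, n_features] tensor (e.g. mobility, channel quality,
    compute, local data amount, channel status); suspected_byzantine: bool tensor."""
    scores = scorer(features)
    scores = scores.masked_fill(suspected_byzantine, float("-inf"))
    return torch.topk(scores, k).indices.tolist()
```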
Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
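As an illustration of the simplest probe in the first category, an auto-completion attack feeds the model a context that preceded a piece of PII in the (suspected) training data and checks whether the model reproduces it. The sketch below uses the public gpt2 checkpoint purely as a stand-in for a fine-tuned model, and the prefix and target strings are placeholders; it does not reproduce the study's exact prompting or scoring protocol.

```python
# Illustrative auto-completion probe (gpt2 stands in for a fine-tuned model;
# the prefix and target PII string are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "Contact John Doe at "           # context assumed to precede PII in training data
target = "john.doe@example.com"           # the PII whose leakage we test for

inputs = tok(prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy completion
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print("leaked" if target in completion else "not leaked", "->", completion)
```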
Funding: Financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Abstract: This study investigates resilient platoon control for constrained intelligent and connected vehicles (ICVs) against F-local Byzantine attacks. We introduce a resilient distributed model-predictive platooning control framework for such ICVs. This framework seamlessly integrates a predesigned optimal control law with distributed model predictive control (DMPC) optimization and introduces a distributed attack detector to ensure the reliability of the information transmitted among vehicles. Notably, our strategy uses previously broadcast information and a specialized convex set, termed the "resilience set", to identify unreliable data. This approach significantly eases graph-robustness prerequisites, requiring only an (F+1)-robust graph, in contrast to established mean-sequence-reduced algorithms, which require at least a (2F+1)-robust graph. Additionally, we introduce a verification algorithm to restore trust in vehicles under minor attacks, further reducing the required robustness of the communication network. Our analysis demonstrates the recursive feasibility of the DMPC optimization. Furthermore, the proposed method achieves excellent control performance by minimizing the discrepancies between the DMPC control inputs and the predesigned platoon control inputs, while ensuring constraint compliance and cybersecurity. Simulation results verify the effectiveness of our theoretical findings.
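The abstract does not define the resilience set precisely; the sketch below only illustrates the general idea of rejecting a neighbor's newly broadcast prediction when it strays too far from that neighbor's previously broadcast information, approximating the convex set by a ball of radius delta (both the shape and the bound are assumptions).

```python
# Illustrative attack-detector check: approximate the convex "resilience set" by
# a ball of radius delta around the neighbor's previously broadcast prediction.
import numpy as np

def is_reliable(new_broadcast, previous_broadcast, delta):
    """Accept a neighbor's new predicted trajectory only if it lies within the
    (assumed ball-shaped) resilience set built from its previous broadcast."""
    return float(np.max(np.abs(new_broadcast - previous_broadcast))) <= delta
```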
Funding: Funded by the Comunidad de Madrid within the framework of the Multiannual Agreement with Universidad Politécnica de Madrid to encourage research by young doctors (PRINCE project).
Abstract: Future 6G communications are envisioned to enable a large catalogue of pioneering applications. These will range from networked Cyber-Physical Systems to edge computing devices, establishing real-time feedback control loops critical for managing Industry 5.0 deployments, digital agriculture systems, and essential infrastructures. The provision of extensive machine-type communications through 6G will render many of these innovative systems autonomous and unsupervised. While full automation will significantly enhance industrial efficiency, it also introduces new cyber risks and vulnerabilities. In particular, unattended systems are highly susceptible to trust issues: malicious nodes and false information can easily be introduced into control loops. Additionally, Denial-of-Service attacks can be executed by inundating the network with valueless noise. Current anomaly detection schemes require the entire control software to be transformed to integrate new steps and can only mitigate anomalies that conform to predefined mathematical models. Solutions based on exhaustive data collection to detect anomalies are precise but extremely slow. Standard models, with their limited understanding of mobile networks, achieve precision rates no higher than 75%. Therefore, more general and transversal protection mechanisms are needed to detect malicious behaviors transparently. This paper introduces a probabilistic trust model and control algorithm designed to address this gap. The model determines the probability that any node is trustworthy, and communication channels are pruned for nodes whose probability falls below a given threshold. The trust control algorithm comprises three primary phases, which feed the model with three different probabilities that are weighted and combined. Initially, anomalous nodes are identified using Gaussian mixture models and clustering technologies. Next, traffic patterns are studied using digital Bessel functions and the functional scalar product. Finally, the information coherence and content are analyzed: noise content and abnormal information sequences are detected using a Volterra filter and a bank of Finite Impulse Response filters. An experimental validation based on simulation tools and environments was carried out. Results show that the proposed solution can successfully detect up to 92% of malicious data injection attacks.
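As a rough sketch of the first phase only: fit a Gaussian mixture to per-node behavior features, turn each node's likelihood into a trust probability, and prune nodes that fall below the threshold. The feature set, component count, and the likelihood-to-probability mapping are assumptions, and the Bessel-function and Volterra-filter phases are not shown.

```python
# Illustrative first phase of the trust model: Gaussian-mixture likelihoods as
# trust probabilities, with threshold pruning. Features, component count, and the
# likelihood-to-probability mapping are assumptions; later phases are omitted.
import numpy as np
from sklearn.mixture import GaussianMixture

def trust_probabilities(node_features, n_components=3):
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(node_features)
    log_lik = gmm.score_samples(node_features)      # per-node log-likelihood
    return np.exp(log_lik - log_lik.max())          # map to (0, 1]; outliers score low

def prune_untrusted(node_ids, probabilities, threshold=0.1):
    return [n for n, p in zip(node_ids, probabilities) if p >= threshold]
```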
Funding: Supported by National Research Foundation of Korea (NRF) grants funded by the Korea government (MEST), Nos. 2015R1A3A2031159 and 2016R1A5A1008055.
Abstract: Existing web-based security applications have failed in many situations due to the great intelligence of attackers. Among web applications, Cross-Site Scripting (XSS) is one of the most dangerous assaults, experienced while modifying an organization's or user's information. To address these security challenges, this article proposes a novel, all-encompassing combination of machine learning (NB, SVM, k-NN) and deep learning (RNN, CNN, LSTM) frameworks for detecting and defending against XSS attacks with high accuracy and efficiency. Based on this representation, a novel idea for merging a stacking ensemble with web applications, termed "hybrid stacking", is proposed. To implement these methods, four distinct datasets, each containing both safe and unsafe content, are considered. The hybrid detection method can adaptively identify attacks from the URL, and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy, accelerate the training process, and effectively remove unsafe JScript/JavaScript keywords from the URL. The simulation results show that the proposed hybrid model is more efficient than existing detection methods: it produces more than 99.5% accurate XSS attack classification results (accuracy, precision, recall, F1-score, and Receiver Operating Characteristic (ROC)) and is highly resistant to XSS attacks. To ensure the security of the server's information, the proposed hybrid approach is demonstrated in a real-time environment.
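A minimal sketch of the classical-ML half of such a stacking ensemble is given below: character-level TF-IDF features of the URL feed NB, SVM, and k-NN base learners whose outputs are combined by a logistic-regression meta-learner. The feature extraction, meta-learner, and dataset loader are assumptions, and the RNN/CNN/LSTM branch of the hybrid is omitted.

```python
# Illustrative stacking ensemble over URL text (classical-ML half only; feature
# extraction, meta-learner, and the dataset loader are assumptions).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

stack = StackingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("svm", LinearSVC()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    final_estimator=LogisticRegression(max_iter=1000))

model = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), stack)

# urls, labels = load_url_dataset()   # hypothetical loader: URLs and safe/unsafe labels
# model.fit(urls, labels)
# model.predict(["http://example.com/?q=<script>alert(1)</script>"])
```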
Funding: Funded by the Deanship of Scientific Research at King Faisal University under Grant Number KFU 241085.
Abstract: Phishing, an Internet fraud in which individuals are deceived into revealing critical personal and account information, poses a significant risk to both consumers and web-based institutions. Data indicate a persistent rise in phishing attacks. Moreover, these fraudulent schemes are becoming progressively more intricate, making them harder to identify. It is therefore imperative to use sophisticated algorithms to address this issue. Machine learning is a highly effective approach for identifying and uncovering these harmful behaviors, since machine learning (ML) approaches can identify characteristics common to most phishing assaults. In this paper, we propose an ensemble approach and compare it with six machine learning techniques to determine the type of a website, i.e., whether it is normal or not, on two phishing datasets. We then apply a normalization technique to transform all features into the same range. The findings for all algorithms on the first dataset, reported as accuracy, precision, recall, and F1-score respectively, are: Decision Tree (DT) (0.964, 0.961, 0.976, 0.968), Random Forest (RF) (0.970, 0.964, 0.984, 0.974), Gradient Boosting (GB) (0.960, 0.959, 0.971, 0.965), XGBoost (XGB) (0.973, 0.976, 0.976, 0.976), AdaBoost (0.934, 0.934, 0.950, 0.942), Multi-Layer Perceptron (MLP) (0.970, 0.971, 0.976, 0.974), and Voting (0.978, 0.975, 0.987, 0.981). The Voting classifier therefore gave the best results. On the second dataset, all algorithms gave the same results on the four evaluation metrics, indicating that each can accomplish the prediction task effectively. This approach also outperforms previous work in detecting phishing websites, with high accuracy, a lower false negative rate, a shorter prediction time, and a lower false positive rate.
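A minimal sketch of the normalization plus voting setup described above is shown below; the base learners match those listed in the abstract, but the hyperparameters are library defaults, the soft-voting choice and the dataset loader are assumptions, and xgboost is an extra dependency.

```python
# Illustrative voting ensemble over min-max normalized phishing features
# (hyperparameters are defaults; the dataset loader is a placeholder).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import (VotingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

voting = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()),
                ("rf", RandomForestClassifier()),
                ("gb", GradientBoostingClassifier()),
                ("xgb", XGBClassifier()),
                ("ada", AdaBoostClassifier()),
                ("mlp", MLPClassifier(max_iter=500))],
    voting="soft")                                  # average predicted probabilities

model = make_pipeline(MinMaxScaler(), voting)       # scale every feature to [0, 1]

# X, y = load_phishing_dataset()                    # hypothetical feature-table loader
# model.fit(X, y)
```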
Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework uses latent variables to quantify the amount of belief between every pair of nodes in each causal model over time. We use this methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks: with regard to four different forms of data poisoning attack, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and reducing these stealthy threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, in which the practical consequences of using uncertainty as a way to spot cases of data poisoning are explored. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks on streaming data.
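The latent-variable construction itself is not spelled out in the abstract. The sketch below only illustrates the general monitoring idea, using plain pairwise mutual information as a stand-in belief score between every two variables, tracked across successive data batches so that a batch which shifts any pair's score abruptly can be flagged.

```python
# Illustrative poisoning monitor: track a pairwise "belief" score (mutual
# information here, as a stand-in for the paper's latent-variable belief) across
# data batches and flag pairs whose score drifts by more than a tolerance.
import itertools
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_beliefs(batch):
    """batch: 2-D array of discrete samples; columns are the network's variables."""
    n = batch.shape[1]
    return {(i, j): mutual_info_score(batch[:, i], batch[:, j])
            for i, j in itertools.combinations(range(n), 2)}

def flag_suspicious_batch(prev_beliefs, new_batch, tol=0.1):
    new_beliefs = pairwise_beliefs(new_batch)
    drifted = [pair for pair in new_beliefs
               if abs(new_beliefs[pair] - prev_beliefs[pair]) > tol]
    return drifted, new_beliefs
```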
Abstract: Cocoa farming faces numerous constraints that affect production levels. Among these constraints are termites, one of the biggest scourges in tropical agriculture and agroforestry. The aim of this study is to assess the level of damage caused by termites in cocoa plantations. To this end, 3 plantations were selected. In each of the 3 plantations, 18 plots containing an average of 47 ± 6 cocoa plants were delimited. Sampling was based on 25 cocoa plants per plot. The study consisted of sampling the termites observed on the plants and noting the type of damage caused by them, taking into account the density of the harvest veneers and, above all, the termites' progress through the anatomical structures of the plant, i.e., the bark, sapwood and heartwood. A total of 8 termite species were collected from cocoa plants. These species are responsible for four types of damage (D1, D2, D3 and D4), grouped into minor damage (D1 and D2) and major damage (D3 and D4). D1 damage ranged from 24.67% ± 5.64% to 39.55% ± 7.43%. D2 damage ranged from 6.88% ± 1.31% to 9.33% ± 2.79%. D3 damage ranged from 2.88% ± 1.55% to 6.44% ± 1.55%. D4 damage ranged from 1.11% ± 1% to 3.11% ± 1.37%. Among the termite species collected, Microcerotermes sp., C. sjostedti, A. crucifer and P. militaris were the most formidable on cocoa trees in our study locality. In view of the extensive damage caused by termites, biological control measures should be considered, using insecticidal plants.
Funding: This work was supported by the Major Scientific and Technological Special Project of Anhui Province (202103a13010004), the Major Scientific and Technological Special Project of Hefei City (2021DX007), the Key R&D Plan of Shandong Province (2020CXGC010105), and the China Postdoctoral Science Foundation (2021M700315).
Abstract: Quantum key distribution (QKD), rooted in quantum mechanics, offers information-theoretic security. However, practical systems are open to security threats due to device imperfections, notably bright-light blinding attacks targeting single-photon detectors. Here, we propose a concise, robust defense strategy for protecting single-photon detectors in QKD systems against blinding attacks. Our strategy uses a dual approach: monitoring the bias current of the avalanche photodiode (APD) to defend against continuous-wave blinding attacks, and monitoring the avalanche amplitude to protect against pulsed blinding attacks. By integrating these two branches, the proposed solution effectively identifies and mitigates a wide range of bright-light injection attempts, significantly enhancing the resilience of QKD systems against various bright-light blinding attacks. This method fortifies the safeguards of quantum communications and offers a crucial contribution to the field of quantum information security.
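As a schematic illustration of the dual monitoring logic only: a sustained rise in APD bias current indicates continuous-wave blinding, while avalanche pulses whose amplitude falls outside the expected single-photon range indicate pulsed blinding. The threshold values are placeholders, and a real countermeasure would live in analog or FPGA hardware rather than in software like this.

```python
# Schematic dual blinding check (threshold values are placeholders).
def blinding_alarm(bias_currents_mA, avalanche_amplitudes_mV,
                   bias_limit_mA=1.0, amp_low_mV=50.0, amp_high_mV=300.0):
    cw_blinding = any(i > bias_limit_mA for i in bias_currents_mA)          # CW attack
    pulsed_blinding = any(a < amp_low_mV or a > amp_high_mV
                          for a in avalanche_amplitudes_mV)                 # pulsed attack
    return cw_blinding or pulsed_blinding

# Normal operation vs. a bright-light attack that raises the bias current:
print(blinding_alarm([0.2, 0.3], [120.0, 180.0]))   # False
print(blinding_alarm([2.5, 2.6], [120.0, 180.0]))   # True
```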