Antivirus vendors and the research community employ Machine Learning (ML)- or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Moreover, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is effective only against AEs included in the classifiers' training data.
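The Overlay Append perturbation mentioned above exploits the fact that bytes appended after the end of a PE image are ignored by the Windows loader. A minimal sketch of that idea follows; it is not the paper's implementation, and it uses an in-memory stand-in rather than a real PE file:

```python
def overlay_append(pe_bytes, payload):
    # 'Overlay Append' sketch: bytes placed past the end of a PE image are
    # ignored by the Windows loader, so the file keeps running unchanged
    # while its byte-level (static) features shift.
    return pe_bytes + payload

# In-memory stand-in for a PE file (a real attack would read a binary).
original = b"MZ" + bytes(64)   # fake DOS-header prefix, not a valid PE
perturbed = overlay_append(original, b"\x00" * 128)

assert perturbed.startswith(original)   # original image left untouched
assert len(perturbed) == len(original) + 128
```

A real attack would, as the abstract notes, combine several such format-preserving perturbations and check the result against the target classifier.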
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead the model to incorrect results, and training the model on adversarial examples greatly improves its robustness. The existing method is to obtain a gradient by automatic differentiation or finite differences and use it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expectation value of a qubit in a series of quantum circuits. The method can be used to construct adversarial examples for a quantum variational circuit classifier. The implementation results prove the effectiveness of the proposed method. Compared with the existing method, our method requires fewer resources and is more efficient.
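The gradient-from-measurements idea resembles the parameter-shift rule, under which the exact gradient of a circuit expectation value is recovered from two shifted expectation measurements. Below is a classically simulated one-qubit sketch, assuming an RY(θ) rotation for which ⟨Z⟩ = cos θ; it illustrates the rule, not the paper's specific construction:

```python
import math

def expectation_z(theta):
    # One-qubit circuit RY(theta)|0>: the Z expectation value is cos(theta).
    # On hardware this number would come from repeated measurements.
    return math.cos(theta)

def parameter_shift_grad(theta):
    # The gradient of the expectation value is recovered exactly from two
    # expectation measurements at shifted parameter values.
    s = math.pi / 2
    return (expectation_z(theta + s) - expectation_z(theta - s)) / 2

theta = 0.7
grad = parameter_shift_grad(theta)
assert abs(grad + math.sin(theta)) < 1e-12  # matches d/dtheta cos(theta)
```

Replacing the analytic expectation with measurement statistics from hardware yields the gradient that can then drive adversarial-example construction, e.g. by stepping the input encoding along its sign.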
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations: the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
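The detection principle, comparing a model's predictions on an input and on its denoised copies and voting on the disagreements, can be sketched with simple stand-ins. The hypothetical mean-threshold "classifier", the moving-average "denoiser", and all thresholds below are illustrative, not from the paper:

```python
def toy_classifier(signal):
    # Stand-in for a CNN: classify by mean intensity (hypothetical rule).
    return 1 if sum(signal) / len(signal) > 0.5 else 0

def denoise(signal, k=3):
    # Moving-average filter standing in for the paper's traditional
    # image-denoising algorithms.
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def is_adversarial(signal, denoisers):
    # Majority vote: flag the input as an AE when most denoised copies
    # change the classifier's prediction.
    base = toy_classifier(signal)
    flips = sum(toy_classifier(d(signal)) != base for d in denoisers)
    return flips > len(denoisers) / 2

adv = [0.4] * 9 + [1.5]     # spike pushes the mean just over the boundary
clean = [0.4] * 10
detectors = [denoise, lambda s: denoise(s, 5)]
assert is_adversarial(adv, detectors)
assert not is_adversarial(clean, detectors)
```

Denoising flattens the spike, so the prediction flips on the perturbed input but stays stable on the clean one, which is exactly the discrepancy signal the detector exploits.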
Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are endangered by adversarial attacks, which can quickly spoil deep learning models, e.g., the convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally shown that our approach outperforms these previous defensive techniques. CRU-Net maps adversarial image examples to clean images by eliminating the adversarial perturbation; the defensive approach is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR10 datasets show that CRU-Net prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and restored clean image examples produced by CRU-Net.
Speech is easily leaked imperceptibly. When people use their phones, the personal voice assistant is constantly listening, waiting to be activated, and private content in speech may be maliciously extracted through automatic speech recognition (ASR) by some applications on the device. To ensure that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail, and this vulnerability can be used to protect the privacy of speech. In this work, we propose an adversarial method to degrade speech enhancement systems and thereby prevent the malicious extraction of private information from speech. Experimental results show that, after enhancement, the generated adversarial examples have most of the content of the target speech removed or replaced with target speech content. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example can reach 89.0%, while the WER of the targeted attack, between the enhanced adversarial example and the target example, is as low as 33.75%. The adversarial perturbation in the adversarial example brings about much more change than the perturbation itself: the ratio of the difference between the two enhanced examples to the adversarial perturbation can exceed 1.4430. Meanwhile, the transferability between different speech enhancement models is also investigated; the low transferability of the method ensures that the content in the adversarial example is not damaged, so the useful information can still be extracted by a friendly ASR. This work can prevent the malicious extraction of speech.
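The word error rate quoted above is the word-level Levenshtein distance between two transcripts, normalized by the number of reference words. A standard self-contained implementation (the sample sentences are invented for illustration):

```python
def wer(reference, hypothesis):
    # Word error rate: word-level edit distance between the reference
    # transcript and the recognizer output, over the reference length.
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

assert wer("please call my office", "please call my office") == 0.0
assert wer("please call my office", "please call the office") == 0.25
```

A WER near 89% between the two enhanced transcripts, as reported above, means almost every reference word was deleted, inserted, or substituted.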
Introduction: Video examples with task demonstrations by experts, with the expert's eye movements superimposed on the task, are known as "eye movement modeling examples" (EMME). We performed this study to evaluate whether the performance of novice anesthesia trainees executing the epidural technique improved after an EMME of the epidural block procedure. Methods: We developed an EMME from eye-tracking recordings made by anesthesiologists with more than 20 years of experience. Forty-two PGY3 anesthesia trainees who had never previously performed an epidural block were randomized to receive (study group) or not receive (control group) the EMME video before their institutional training. All the trainees were evaluated every 10 epidural blocks until the end of the rotation period by an independent, blinded observer using the Global Rating Scale for Epidural Anesthesia (GRS). Results: Trainees who received the EMME training exhibited more respect for the patient's tissues (P …). Discussion: This is the first study to use an EMME for a practical, clinical teaching purpose on real patients and as an aid in teaching epidural anesthesia. We demonstrated that inexperienced trainees who received the EMME training improved their proficiency at epidural blocks compared to those without EMME training beforehand. Given this result, we welcome further studies investigating the impact and role of EMME in clinical teaching in the field of anesthesia.
In order to narrow the semantic gap in content-based image retrieval (CBIR), a novel retrieval technology called auto-extended multi-query examples (AMQE) is proposed. It expands the single query image used in traditional image retrieval into multiple query examples so as to include more image features related to semantics. By retrieving images for each of the query examples and integrating the retrieval results, more relevant images can be obtained. The property of the recall-precision curve of a general retrieval algorithm and the K-means clustering method are used to realize the expansion, according to the distances between the image features of the initially retrieved images. The experimental results demonstrate that AMQE can greatly improve the recall and precision of the original algorithms.
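The expansion step can be sketched as: cluster the feature vectors of the initially retrieved images with K-means, then take the image nearest each cluster center as an additional query example. This is a toy reconstruction of that idea with 2-D stand-in features, not the paper's full algorithm (which also exploits the recall-precision curve):

```python
def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    # Minimal deterministic K-means (first k points seed the centers),
    # standing in for the clustering step of the expansion.
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers

def expand_queries(initial_hits, k=2):
    # Use the image nearest each cluster center as an extra query example.
    centers = kmeans(initial_hits, k)
    return [min(initial_hits, key=lambda p: dist2(p, c)) for c in centers]

hits = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (5.0, 5.4)]
expanded = expand_queries(hits, k=2)
assert all(q in hits for q in expanded)      # expansions are actual images
assert dist2(expanded[0], expanded[1]) > 10  # one query per semantic cluster
```

Each expanded query is then retrieved independently and the result lists are merged, which is how the method pulls in images that the single original query would miss.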
The role of authigenic clay growth in clay gouge is increasingly recognized as a key to understanding the mechanics of brittle faulting and fault zone processes, including creep and seismogenesis, and to providing new insights into the ongoing debate about the frictional strength of brittle faults (Haines and van der Pluijm, 2012). However, neither the conditions nor the processes which …
Adversarial examples are a hot topic in the field of security in deep learning. Their features, generation methods, and attack and defense methods are the focus of current research on adversarial examples. This article explains the key technologies and theories of adversarial examples, from the concept of adversarial examples and the occasions on which they occur to the attacking methods, and lists the possible reasons for their existence. It also analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Iterative Least-Likely Class method (LLC), etc. Furthermore, from the perspective of the attack methods and the causes of adversarial examples, the main defense techniques are listed: preprocessing, regularization and adversarial training, distillation, etc., and the application scenarios and deficiencies of the different defense measures are pointed out. The article further discusses the application of adversarial examples, which is currently mainly adversarial evaluation and adversarial training. Finally, the overall research direction of adversarial examples is surveyed with a view to completely solving the adversarial attack problem. Many practical and theoretical problems remain to be solved: finding out the characteristics of adversarial examples, giving a mathematical description of their practical application prospects, and exploring a universal generation method and the generation mechanism of adversarial examples are the main future research directions.
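Of the generation methods listed, FGSM is the simplest: perturb the input by ε in the direction of the sign of the loss gradient. A toy sketch with a linear loss L(x) = w·x, whose gradient with respect to x is just w (the weights and ε below are made up for illustration):

```python
def fgsm(x, grad, eps):
    # Fast Gradient Sign Method: move each input dimension by eps in the
    # direction that increases the loss (the sign of the loss gradient).
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear model: loss L(x) = w . x, so the input gradient is just w.
w = [0.5, -2.0, 1.0]
loss = lambda v: sum(wi * vi for wi, vi in zip(w, v))

x = [1.0, 1.0, 1.0]
x_adv = fgsm(x, w, eps=0.1)
assert loss(x_adv) > loss(x)                                    # loss went up
assert max(abs(a - b) for a, b in zip(x_adv, x)) <= 0.1 + 1e-9  # L-inf bound
```

BIM, also listed above, applies this same step iteratively with a smaller step size and clipping back into the ε-ball.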
Ma Zi Ren Wan (麻子仁丸), originally recorded in Treatise on Febrile Diseases (伤寒论), is composed of Ma Zi Ren (麻子仁, Fructus Cannabis), Bai Shao (白芍, Radix Paeoniae Alba), Zhi Shi (枳实, Fructus Aurantii Immaturus), Da Huang (大黄, Radix et Rhizoma Rhei), Hou Po (厚朴, Cortex Magnoliae Officinalis) and Xing Ren (杏仁, Semen Armeniacae Amarum). Good therapeutic results have been achieved by using Ma Zi Ren Wan in the treatment of febrile disease at the restoring stage, chronic consumptive diseases, hemorrhoids, disorders in women after delivery, chronic kidney disease, senile constipation, pulmonary heart disease, diabetes, coronary heart disease and hypertension. Some illustrative cases are introduced below.
We first put forward the idea of a positive extension matrix (PEM) in this paper. Then an algorithm, AE_11, was built with the aid of the PEM. Finally, we made comparisons of our experimental results, and the final result was fairly satisfying.
The Early Jurassic volcanic sequence of the Central Atlantic Magmatic Province (CAMP) of Morocco is classically subdivided into four stratigraphic units: the Lower, Middle, Upper and Recurrent Formations, separated …
The transitional span is a special environment for deposits. Taking peat, oil-gas and metallic deposits as examples, this paper discusses the spatio-temporal characteristics of mineralization in transitional regions, points out the importance of mineralization in transitional spans, and finally analyses their dynamics.
[Objective] Taking the flower diameter of Tagetes L. as an example, this study aimed to select example varieties for use in the DUS Test Guideline of Tagetes L. [Method] Measurements of the flower diameter of 25 varieties were collected over two consecutive years and analyzed with box plots to illustrate the uniformity and stability of the flower diameter of each variety. [Result] According to the variability, the distribution symmetry of the measurements, and the outliers of flower diameter shown by the box plots, varieties 16, 2 and 4 were selected as the example varieties for the three expression states, with respective flower diameters of 3.0-4.4, 6.0-7.4 and 9.0-10.4 cm. [Conclusion] The box plot is an efficient method for the general analysis of varieties, providing information covering the actual and possible expression range, the median and the outliers of the flower-diameter measurements of each variety. It also provides a reference for selecting example varieties for other quantitative characteristics and for evaluating the quality of varieties.
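The box-plot statistics used above to judge uniformity and stability (quartiles, interquartile range and Tukey outliers) can be computed directly. The sample diameters below are invented for illustration and are not the study's data:

```python
import statistics

def box_stats(values):
    # Five-number summary plus 1.5*IQR (Tukey) outliers: the information
    # a box plot conveys for one variety's measurements.
    q1, median, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {"min": min(values), "q1": q1, "median": median, "q3": q3,
            "max": max(values),
            "outliers": [v for v in values if v < lo or v > hi]}

# Hypothetical flower diameters (cm) for a single variety.
diameters = [6.1, 6.3, 6.4, 6.6, 6.8, 7.0, 9.2]
stats = box_stats(diameters)
assert stats["median"] == 6.6
assert stats["outliers"] == [9.2]  # the 9.2 cm flower is flagged
```

A variety with a narrow box, symmetric whiskers and no outliers would be a good example-variety candidate for its expression state.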
In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image and cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of the adversarial attacks. In addition, perturbations added to the blank areas of a fingerprint image are easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method for DFFD based on local adaptive gradient variance. The ridge-texture area within the fingerprint image is identified and designated as the region for perturbation generation. The images are then fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method using stochastic gradient ascent is proposed to explore parameter values during adversarial example generation, aiming to maximize attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive.
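As a loose, self-contained illustration of the smoothing intuition behind gradient-variance methods (not the paper's algorithm): averaging gradients sampled in a small neighbourhood of the current point stabilizes the update direction of an iterative sign-based attack. The objective f(x) = Σx² and all constants below are invented:

```python
def f(v):
    # Toy objective standing in for the target model's loss.
    return sum(xi * xi for xi in v)

def variance_tuned_ascent(x, steps=5, eps=0.1, radius=0.05):
    # Loose sketch: average the gradient of f (which is 2x) over a small
    # neighbourhood of the current point, then step along the sign of
    # that average. The averaging smooths the update direction, which is
    # the intuition behind gradient-variance attacks.
    offsets = (-radius, 0.0, radius)
    for _ in range(steps):
        avg = [sum(2 * (xi + o) for o in offsets) / len(offsets) for xi in x]
        x = [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, avg)]
    return x

x0 = [0.3, -0.4]
x_adv = variance_tuned_ascent(x0)
assert f(x_adv) > f(x0)  # the smoothed ascent still increases the loss
```

The paper's method additionally restricts where the perturbation may be placed (the ridge-texture region) and tunes the attack parameters adaptively; neither refinement is modeled here.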
Low-rank matrix decomposition with first-order total variation (TV) regularization exhibits excellent performance in the exploration of image structure. Taking advantage of its excellent performance in image denoising, we apply it to improve the robustness of deep neural networks. However, although TV regularization can improve the robustness of the model, it reduces the accuracy on normal samples due to over-smoothing. In this work, we develop a new low-rank matrix recovery model, called LRTGV, which incorporates total generalized variation (TGV) regularization into the reweighted low-rank matrix recovery model. In the proposed model, TGV is used to better reconstruct texture information without over-smoothing, while the reweighted nuclear norm and L1-norm enhance the global structure information. Thus, the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image. To solve the challenging optimization problem, we propose an algorithm based on the alternating direction method of multipliers. Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks, and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.
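The premise that TV regularization targets adversarial noise can be seen from the first-order TV term itself: small high-frequency perturbations barely move pixel values but inflate total variation. A 1-D toy check (the signal and perturbation are invented):

```python
def total_variation(x):
    # First-order (anisotropic) total variation of a 1-D signal.
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

smooth = [i / 10 for i in range(10)]   # gentle ramp, TV = 0.9
# Deterministic stand-in for adversarial noise: small alternating bumps.
noisy = [v + (0.15 if i % 2 else -0.15) for i, v in enumerate(smooth)]

assert max(abs(a - b) for a, b in zip(noisy, smooth)) <= 0.15 + 1e-12
assert total_variation(noisy) > 2 * total_variation(smooth)  # TV blows up
```

Penalizing TV therefore suppresses exactly this kind of structure; TGV adds a higher-order term so that genuine smooth gradients are not penalized as heavily, which is the over-smoothing fix the abstract describes.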
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government, Ministry of Science and ICT (MSIT) (No. 2017-0-00168, Automatic Deep Malware Analysis Technology for Cyber Threat Intelligence).
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 62076042 and 62102049), the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC0535), the Key Research and Development Project of Sichuan Province (Grant Nos. 2021YFSY0012 and 2021YFG0332), the Key Research and Development Project of Chengdu (Grant No. 2021-YF05-02424-GX), and the Innovation Team of Quantum Security Communication of Sichuan Province (Grant No. 17TD0009).
Funding: Supported in part by the Natural Science Foundation of Hunan Province under Grant Nos. 2023JJ30316 and 2022JJ2029, in part by a project supported by the Scientific Research Fund of Hunan Provincial Education Department under Grant No. 22A0686, in part by the National Natural Science Foundation of China under Grant No. 62172058, and by the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61300055), the Zhejiang Natural Science Foundation (Grant No. LY20F020010), the Ningbo Science and Technology Innovation Project (Grant No. 2022Z075), the Ningbo Natural Science Foundation (Grant No. 202003N4089), and the K.C. Wong Magna Fund at Ningbo University.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2002AA413420).
Abstract: In order to narrow the semantic gap in content-based image retrieval (CBIR), a novel retrieval technology called auto-extended multi-query examples (AMQE) is proposed. It expands the single query image used in traditional image retrieval into multiple query examples so as to include more image features related to semantics. By retrieving images for each of the multiple query examples and integrating the retrieval results, more relevant images can be obtained. The property of the recall-precision curve of a general retrieval algorithm and the K-means clustering method are used to realize the expansion according to the distances between the image features of the initially retrieved images. The experimental results demonstrate that the AMQE technology can greatly improve the recall and precision of the original algorithms.
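The expansion step can be illustrated with a simplified sketch: cluster the features of the initially retrieved images with K-means, treat the cluster centroids as additional query examples, and rank database images by their minimum distance to any example. The function names (`kmeans`, `amqe_retrieve`) and the minimum-distance merging rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means returning the k cluster centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def amqe_retrieve(query_feat, db_feats, k=3, top_init=10, top_final=5):
    # 1) initial retrieval with the single query image
    d0 = np.linalg.norm(db_feats - query_feat, axis=1)
    init_idx = np.argsort(d0)[:top_init]
    # 2) cluster the initially retrieved features;
    #    centroids serve as auto-extended query examples
    examples = np.vstack([query_feat, kmeans(db_feats[init_idx], k)])
    # 3) integrate: score each database image by its minimum
    #    distance to any query example
    d = np.linalg.norm(db_feats[None] - examples[:, None], axis=2).min(axis=0)
    return np.argsort(d)[:top_final]
```

In the paper, the number of expanded examples is guided by the recall-precision behavior of the base algorithm; here `k` is simply a fixed parameter.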
Funding: Financed by the National Youth Sciences Foundation of China (No. 41502044).
Abstract: The role of authigenic clay growth in clay gouge is increasingly recognized as a key to understanding the mechanics of brittle faulting and fault zone processes, including creep and seismogenesis, and to providing new insights into the ongoing debate about the frictional strength of brittle faults (Haines and van der Pluijm, 2012). However, neither the conditions nor the processes which
Funding: This work is supported by the NSFC [Grant Nos. 61772281, 61703212], the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET).
Abstract: Adversarial examples are a hot topic in the field of security in deep learning. The features, generation methods, and attack and defense methods of adversarial examples are the focus of current research. This article explains the key technologies and theories of adversarial examples, from the concept of adversarial examples and their occurrence to the attack methods, and lists the possible reasons for their existence. It also analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), the Iterative Least-likely Class method (LLC), etc. Furthermore, from the perspective of the attack methods and the causes of adversarial examples, the main defense techniques are listed: preprocessing, regularization and adversarial training, distillation, etc., and the application scenarios and deficiencies of the different defense measures are pointed out. The article further discusses the applications of adversarial examples, which currently lie mainly in adversarial evaluation and adversarial training. Finally, the overall research directions are surveyed; many practical and theoretical problems remain to be solved before the adversarial attack problem is completely addressed. Characterizing adversarial examples, giving a mathematical description of their practical application prospects, and exploring universal generation methods and the generation mechanism of adversarial examples are the main future research directions.
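Of the generation methods listed, FGSM is the simplest: it perturbs the input one step in the direction of the sign of the loss gradient, x' = x + ε · sign(∇ₓL). The sketch below uses a logistic-regression stand-in for the classifier so the input gradient is analytic; the model and all parameter values are illustrative, not taken from any cited work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a logistic-regression 'network':
    x_adv = clip(x + eps * sign(dL/dx)), L = binary cross-entropy."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # analytic input gradient of the BCE loss
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

BIM and LLC are iterative refinements of the same idea: BIM applies many small FGSM steps, and LLC steps toward the least-likely class instead of away from the true one.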
Abstract: Ma Zi Ren Wan (麻子仁丸), originally recorded in Treatise on Febrile Diseases (伤寒论), is composed of Ma Zi Ren (麻子仁 Fructus Cannabis), Bai Shao (白芍 Radix Paeoniae Alba), Zhi Shi (枳实 Fructus Aurantii Immaturus), Da Huang (大黄 Radix et Rhizoma Rhei), Hou Po (厚朴 Cortex Magnoliae Officinalis) and Xing Ren (杏仁 Semen Armeniacae Amarum). Good therapeutic results have been achieved by using Ma Zi Ren Wan in the treatment of febrile disease at the recovery stage, chronic consumptive diseases, hemorrhoids, disorders in women after delivery, chronic kidney disease, senile constipation, pulmonary heart disease, diabetes, coronary heart disease and hypertension. Some illustrative cases are introduced below.
Abstract: We first put forward the idea of a positive extension matrix (PEM) in this paper. Then, an algorithm, AE_11, was built with the aid of the PEM. Finally, we compared our experimental results, and the final result was fairly satisfying.
Abstract: The Early Jurassic volcanic sequence of the Central Atlantic Magmatic Province (CAMP) of Morocco is classically subdivided into four stratigraphic units: the Lower, Middle, Upper and Recurrent Formations, separated
Abstract: The transitional span is a special environment for deposits. Taking peat, oil and gas, and metallic deposits as examples, this paper discusses the spatial-temporal characteristics of mineralization in transitional regions, points out the importance of mineralization in transitional spans, and finally analyses their dynamics.
Funding: Supported by the Special Fund for Agro-scientific Research in the Public Interest (200903008-14) and the National "948" Project (2009-Z11).
Abstract: [Objective] Taking the flower diameter of Tagetes L. as an example, this study aimed to select example varieties to be used in the DUS Test Guideline of Tagetes L. [Method] Measurements of the flower diameter of 25 varieties, collected over two consecutive years, were analyzed using box plots to illustrate the uniformity and stability of the flower diameter of each variety. [Result] According to the information on variability, distribution symmetry and outliers of the flower diameter measurements provided by the box plots, varieties 16, 2 and 4 were selected as the example varieties for the three expression states, with respective flower diameters of 3.0-4.4, 6.0-7.4 and 9.0-10.4 cm. [Conclusion] The box plot is an efficient method for the general analysis of varieties, providing information covering the actual and possible expression range, the median and the outliers of the flower diameter measurements of each variety. It also provides a reference for selecting example varieties for other quantitative characteristics and for evaluating the quality of varieties.
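The box-plot screening described above reduces to a five-number summary plus Tukey's 1.5 × IQR fences for flagging outliers. A minimal sketch, using hypothetical measurement values (note that quartile conventions vary between statistics packages):

```python
import statistics

def box_stats(values):
    """Five-number summary plus Tukey outliers (1.5 * IQR fences)."""
    q1, med, q3 = statistics.quantiles(values, n=4)  # 'exclusive' method
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in values if v < lo or v > hi]
    return {"min": min(values), "q1": q1, "median": med, "q3": q3,
            "max": max(values), "outliers": outliers}
```

A variety with a narrow box (small IQR) and no outliers across both years would be a candidate example variety for its expression state.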
Funding: Supported by the National Natural Science Foundation of China under Grants 62102189, 62122032 and 61972205, the National Social Sciences Foundation of China under Grant 2022-SKJJ-C-082, the Natural Science Foundation of Jiangsu Province under Grant BK20200807, the NUDT Scientific Research Program under Grants JS21-4 and ZK21-43, and the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020041.
Abstract: In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which is prone to falling into local optima, resulting in poor transferability of adversarial attacks. In addition, perturbation added to the blank area of the fingerprint image is easily perceived by the human eye, leading to poor visual quality. In response to these challenges, this paper proposes a novel adversarial attack method for DFFD based on local adaptive gradient variance. The ridge texture area within the fingerprint image is identified and designated as the region for perturbation generation. The images are then fed into the targeted white-box model, and the gradient direction is optimized to compute the gradient variance. Additionally, an adaptive parameter search method based on stochastic gradient ascent is proposed to explore the parameter values during adversarial example generation, aiming to maximize adversarial attack performance. Experimental results on two publicly available fingerprint datasets show that our method achieves higher attack transferability and robustness than existing methods, and the perturbation is harder to perceive.
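The paper's attack is specific to fingerprint ridge regions, but the underlying variance-tuning idea, correcting each iteration's gradient with the average gradient of random samples in a neighbourhood, can be sketched on a toy differentiable model. Everything below (the logistic stand-in model, function names, constants) is illustrative, not the authors' method:

```python
import numpy as np

def grad_loss(x, w, b, y):
    """Input gradient of the BCE loss for a logistic-regression stand-in model."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

def variance_tuned_attack(x, y, w, b, eps=0.5, steps=10, n_samples=5,
                          beta=1.5, seed=0):
    """Iterative sign attack where each step's gradient is corrected by the
    variance of gradients sampled in a neighbourhood (variance tuning)."""
    rng = np.random.default_rng(seed)
    alpha = eps / steps
    x_adv, v = x.astype(float).copy(), np.zeros_like(x, dtype=float)
    for _ in range(steps):
        g = grad_loss(x_adv, w, b, y)
        # step along the variance-corrected gradient, staying in the eps-ball
        x_adv = np.clip(x_adv + alpha * np.sign(g + v), x - eps, x + eps)
        # variance term: mean neighbourhood gradient minus the current gradient
        nbr = [grad_loss(x_adv + rng.uniform(-beta * eps, beta * eps, x.shape),
                         w, b, y) for _ in range(n_samples)]
        v = np.mean(nbr, axis=0) - g
    return x_adv
```

Averaging gradients over a neighbourhood smooths out sharp local optima of a single model's loss surface, which is why variance tuning tends to improve transferability to other models.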
Funding: Project supported by the National Natural Science Foundation of China (No. 62072024), the Outstanding Youth Program of Beijing University of Civil Engineering and Architecture, China (No. JDJQ20220805), and the Shenzhen Stability Support General Project (Type A), China (No. 20200826104014001).
Abstract: Low-rank matrix decomposition with first-order total variation (TV) regularization exhibits excellent performance in the exploration of image structure. Taking advantage of its excellent performance in image denoising, we apply it to improve the robustness of deep neural networks. However, although TV regularization can improve the robustness of the model, it reduces the accuracy on normal samples due to its over-smoothing. In our work, we develop a new low-rank matrix recovery model, called LRTGV, which incorporates total generalized variation (TGV) regularization into the reweighted low-rank matrix recovery model. In the proposed model, TGV is used to better reconstruct texture information without over-smoothing. The reweighted nuclear norm and L1-norm can enhance the global structure information. Thus, the proposed LRTGV can destroy the structure of adversarial noise while re-enhancing the global structure and local texture of the image. To solve the challenging optimization model, we propose an algorithm based on the alternating direction method of multipliers (ADMM). Experimental results show that the proposed algorithm has a certain defense capability against black-box attacks, and outperforms state-of-the-art low-rank matrix recovery methods in image restoration.
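A core building block of ADMM solvers for nuclear-norm-regularized models like this one is singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch of that single step (the full LRTGV solver, with its TGV terms and reweighting, is considerably more involved):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value by tau,
    zeroing the small ones. This is prox_{tau * ||.||_*}(M)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return (U * s_thr) @ Vt
```

Because small singular values are set exactly to zero, each SVT step lowers the rank of the iterate, which is how the ADMM iterations drive the recovered matrix toward a low-rank solution.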