Artificial immune detection can be used to detect network intrusions in an adaptive way, and proper matching methods can improve the accuracy of immune detection. This paper proposes an artificial immune detection model for network intrusion data based on a quantitative matching method. The proposed model defines the detection process using network data, expresses features as decimal values, and simulates artificial immune mechanisms to define immune elements. A quantitative matching method is then proposed to improve the accuracy of similarity calculation. The model uses mathematical methods to train and evolve immune elements, increasing the diversity of immune recognition and allowing unknown intrusions to be detected successfully. The model's objective is to accurately identify known intrusions and extend identification to unknown intrusions through signature detection and immune detection, overcoming the disadvantages of traditional methods. Experimental results show that the proposed model detects intrusions effectively, with an average detection rate above 99.6% and a false alarm rate of 0.0264%, and that it outperforms existing immune intrusion detection methods in comprehensive detection performance.
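The quantitative matching step can be pictured as a normalized-distance threshold test between a decimal-encoded detector and a traffic feature vector. This is only an illustrative sketch, not the paper's actual matching function; the threshold value and the assumption that features are scaled to [0, 1] are both hypothetical.

```python
# Hypothetical sketch of distance-based ("quantitative") matching between a
# decimal-encoded immune detector and a network-traffic feature vector.
# Feature scaling to [0, 1] and the 0.9 threshold are illustrative assumptions.

def affinity(detector, antigen):
    """Normalized similarity in [0, 1]: 1.0 means identical feature vectors."""
    assert len(detector) == len(antigen)
    d = sum((a - b) ** 2 for a, b in zip(detector, antigen)) ** 0.5
    max_d = len(detector) ** 0.5          # largest distance if features are in [0, 1]
    return 1.0 - d / max_d

def matches(detector, antigen, threshold=0.9):
    """A detector flags an antigen when affinity reaches the threshold."""
    return affinity(detector, antigen) >= threshold
```

A detector population evolved with such an affinity measure can then cover nearby unseen patterns, which is what allows unknown intrusions to be recognized.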
An artificial neural network (ANN) method is introduced to predict drop size in two kinds of pulsed columns with small-scale data sets. After training, the deviations between calculated and experimental results are 3.8% and 9.3%, respectively. The ANN model captures the influence of interfacial tension and pulsation intensity on droplet diameter: droplet size gradually increases with increasing interfacial tension and decreases with increasing pulsation intensity. The accuracy of the ANN model in predicting droplet size outside the training-set range reaches the level of correlations fitted to experiments within that range. For the two columns, the drop-size prediction deviations of the ANN model are 9.6% and 18.5%, while the deviations of the correlations are 11% and 15%.
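The reported percentages read as mean relative deviations between predicted and measured drop sizes. A minimal sketch of that metric, assuming this is how the deviation is defined (the abstract does not state the formula explicitly):

```python
def mean_relative_deviation(predicted, measured):
    """Average of |d_pred - d_exp| / d_exp over all drops, as a fraction
    (multiply by 100 for the percentage deviations quoted in the abstract)."""
    assert len(predicted) == len(measured)
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)
```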
A novel variational wave function, defined as a Jastrow factor multiplying a backflow-transformed Slater determinant, was developed for A=3 nuclei. The Jastrow factor and backflow transformation were represented by artificial neural networks. With this newly developed wave function, variational Monte Carlo calculations were carried out for 3H and 3He nuclei starting from a nuclear Hamiltonian based on the leading-order pionless effective field theory. The obtained ground-state energies and charge radii were successfully benchmarked against the results of the highly accurate hyperspherical-harmonics method. The backflow transformation plays a crucial role in improving the nodal surface of the Slater determinant and thus in providing accurate ground-state energies.
A DC distribution network is an effective solution for increasing renewable energy utilization, with distinct benefits such as high efficiency and easy control. However, a sudden increase in current after a fault occurs may adversely affect network stability. This study proposes an artificial neural network (ANN)-based fault detection and protection method for DC distribution networks. The ANN is applied as a classifier for different faults on the DC line. A backpropagation neural network is used to predict the line current, and the fault detection threshold is obtained from the difference between the predicted and actual currents. The proposed method uses only local signals, with no requirement for a strict communication link. Simulation experiments are conducted for the proposed algorithm on a two-terminal DC distribution network modeled in PSCAD/EMTDC and developed on the MATLAB platform. The results confirm that the proposed method can accurately detect and classify line faults within a few milliseconds and is not affected by fault location, fault resistance, noise, or communication delay.
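The core detection rule, comparing the predicted line current against the measured current, can be sketched as a simple residual test. The threshold value and sample format below are illustrative, not from the paper:

```python
def is_fault(i_measured, i_predicted, threshold):
    """Flag a fault when the prediction residual exceeds the detection threshold."""
    return abs(i_measured - i_predicted) > threshold

def scan(measured, predicted, threshold):
    """Return indices of current samples flagged as faulty."""
    return [k for k, (m, p) in enumerate(zip(measured, predicted))
            if is_fault(m, p, threshold)]
```

In the paper the predicted current comes from the trained backpropagation network; here any per-sample prediction source can be plugged in.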
Atmospheric pressure plasma jet (APPJ) arrays have shown potential in a wide range of applications, from material processing to biomedicine. In these applications, targets with complex three-dimensional structures often degrade plasma uniformity, yet uniformity is crucial in areas such as biomedicine. In this work, collaborative modulation of the flow and electric fields is used to improve the uniformity of the downstream plasma. Taking a two-dimensional sloped metallic substrate with a 10° inclined angle as an example, the influence of both fields on the downstream distributions of electrons and typical active species is studied with a multi-field coupling model. The electric-field and flow-field modulations are first applied separately: the electric-field modulation clearly improves plasma uniformity, while the effect of the flow-field modulation is limited. A collaborative modulation of both fields is then applied and shows a much better effect on uniformity, from which a basic strategy for uniformity improvement is derived. To pursue further gains, an artificial neural network with reasonable accuracy is used to predict the correlation between plasma processing parameters and downstream uniformity, and an optional scheme exploiting the flexibility of APPJ arrays is developed for practical demands.
The fifth generation (5G) networks will support the rapid emergence of Internet of Things (IoT) devices operating in a heterogeneous network (HetNet) system. These 5G-enabled IoT devices will produce a surge in data traffic for Mobile Network Operators (MNOs) to handle. At the same time, MNOs are preparing for a paradigm shift that decouples the control and forwarding planes in a Software-Defined Networking (SDN) architecture. Artificial Intelligence-powered Self-Organising Networks (AI-SON) can fit into the SDN architecture by providing prediction and recommender systems that minimise the cost of supporting the MNO's infrastructure. This paper presents a review of AI-SON frameworks in 5G and SDN. The review considers the dynamic deployment and functions of AI-SON frameworks, especially for SDN support and applications. Each module in the frameworks is discussed to ascertain its relevance in the context of AI-SON and SDN integration. After examining each framework, the identified gaps are summarised as open issues for future work.
Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury: it can analyze and process data on peripheral nerve injury and repair, while findings from studies of nerve injury and repair provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific study of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature indexed in the Web of Science from 1994 to 2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) application of artificial intelligence to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed traditional recognition technologies. Benefiting from the rapid development of computer vision and artificial intelligence, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various other deep neural networks have emerged; Deep Neural Networks (DNNs) and their derived models are integral to modern recognition methods. In addition, technologies widely used in the early days have been integrated into specific hybrid models and customized recognition methods. Sign language data collection includes data gloves, data sensors (such as Kinect and Leap Motion), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research; suitable datasets and evaluation criteria are also worth pursuing.
Object segmentation and recognition is an important area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines their classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into regions and label them based on their characteristics. Two distinct kinds of features are then extracted from the segmented images to help identify the objects of interest, and an ANN recognizes the objects from these features. Experiments were carried out on three standard datasets widely used in object recognition research, MSRC, MS COCO, and Caltech 101, to measure the performance of the suggested approach. The findings support the system's validity: it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
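The k-means segmentation stage can be illustrated on grayscale intensities with a minimal one-dimensional k-means. The real system clusters RGB pixel features, so this scalar version is only a sketch of the idea:

```python
def kmeans_1d(values, k=2, iters=50):
    """Minimal 1D k-means: cluster scalar intensities into k groups.
    Returns (labels, centers). Initialization spreads centers evenly
    over the value range; an empty cluster keeps its previous center."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return labels, centers
```

Applied to pixel intensities, each resulting cluster becomes a labeled region from which features are extracted for the recognizer.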
AI development has brought great success to the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional hardware can barely supply. In the post-Moore era, the increase in computing power brought by shrinking CMOS feature sizes in very-large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing. To address this, approaches such as neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of biological neural networks, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials or devices. Although deploying a training process in a neuromorphic architecture such as a spiking neural network (SNN) is relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
The lethal brain tumor glioblastoma (GBM) has the propensity to grow over time. To improve patient outcomes, it is essential to classify GBM accurately and promptly in order to provide a focused and individualized treatment plan. Deep learning methods, particularly Convolutional Neural Networks (CNNs), have demonstrated high accuracy in a wide range of medical image analysis applications as a result of recent technical breakthroughs. The overall aim of this research is to investigate how CNNs can be used to classify GBM from medical imaging data, improving prognosis precision and effectiveness. The study presents a methodology that uses a CNN architecture trained on a database of MRI images of this tumor; the constructed model is assessed on its overall performance. Extensive experiments and comparisons with conventional machine learning techniques and existing classification methods are also made. The possibility of early and accurate prediction in a clinical workflow is emphasized, as it can strongly influence treatment planning and patient outcomes. The paramount objective is not only to address the classification challenge but also to outline a clear pathway toward enhancing prognosis precision and treatment effectiveness.
This study presents a neural network-based model for predicting the linear quadratic regulator (LQR) weighting matrices needed to achieve a target response reduction. Based on the predicted weighting matrices, the LQR algorithm determines the various responses of the structure, which are obtained by numerically solving the governing equation of motion using the state-space approach. Four input parameters are considered for training the neural network: the time history of the ground motion and the percentage reductions in lateral displacement, lateral velocity, and lateral acceleration; the output parameters are the LQR weighting matrices. To study the effectiveness of the LQR-based neural network (LQRNN), the actual percentage reductions in the responses obtained using LQRNN are compared with the target percentage reductions. Furthermore, to investigate the efficacy of an active control system using LQRNN, the controlled responses of a system are compared with the corresponding uncontrolled responses. The trained neural network effectively predicts weighting parameters that yield reductions in displacement, velocity, and acceleration close to the targets. Based on the simulation study, significant response reductions are observed in the actively controlled system using LQRNN; moreover, the LQRNN algorithm can replace conventional LQR design in an active control system.
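The role the weighting matrices play can be illustrated on a scalar discrete-time system x[k+1] = a·x[k] + b·u[k] with cost Σ(q·x² + r·u²), where q and r stand in for the Q and R matrices the network predicts. This is a textbook scalar sketch, not the paper's structural model:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Solve the scalar discrete algebraic Riccati equation by fixed-point
    iteration and return the state-feedback gain K (u = -K x) and P."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)
    return k, p
```

For a = b = q = r = 1 the Riccati solution is the golden ratio, and the closed loop a - b·K is stable; raising q relative to r produces a more aggressive gain, which is exactly the trade-off the network's predicted weights tune.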
To detect radioactive substances with low activity levels, an anticoincidence detector and a high-purity germanium (HPGe) detector are typically used together to suppress the Compton scattering background, yielding an extremely low detection limit and improving measurement accuracy. However, the complex and expensive hardware required does not facilitate the application or promotion of this method. This study therefore proposes a method that discriminates the digital waveforms of pulse signals output by an HPGe detector, suppressing the Compton scattering background and achieving a low minimum detectable activity (MDA) without an expensive and complex anticoincidence detector. The electric-field-strength and energy-deposition distributions of the detector are simulated to determine the relationship between pulse shape and energy-deposition location, as well as the characteristics of the energy-deposition distributions of full- and partial-energy-deposition events. This relationship is used to develop a pulse-shape-discrimination algorithm based on an artificial neural network for pulse-feature identification. To accurately capture the relationship between the energy deposited by gamma rays in the detector and the deposition location, four shape parameters are extracted from each output pulse and fed to the machine-learning model. The pulse signals are then identified and classified to discriminate between partial- and full-energy-deposition events, and some partial-energy-deposition events are removed to suppress Compton scattering. The proposed method effectively decreases the MDA of an HPGe gamma-energy dispersive spectrometer. Test results show that the Compton suppression factors for energy spectra measured on ^(152)Eu, ^(137)Cs, and ^(60)Co radioactive sources are 1.13 (344 keV), 1.11 (662 keV), and 1.08 (1332 keV), respectively, and that the corresponding MDAs are 1.4%, 5.3%, and 21.6% lower, respectively.
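The abstract does not name the four shape parameters, so the set below (amplitude, area, 10-90% rise time in samples, peak position) is purely illustrative of the feature-extraction step that precedes the neural-network classifier:

```python
def shape_parameters(pulse):
    """Extract four illustrative shape features from a digitized pulse
    (a list of non-negative samples with a single dominant peak)."""
    amp = max(pulse)
    area = sum(pulse)
    peak = pulse.index(amp)
    t10 = next(i for i, v in enumerate(pulse) if v >= 0.1 * amp)
    t90 = next(i for i, v in enumerate(pulse) if v >= 0.9 * amp)
    rise = t90 - t10          # leading-edge 10%-90% rise, in samples
    return amp, area, rise, peak
```

Each pulse is thus reduced to a short feature tuple, and the classifier learns to separate full- from partial-energy-deposition events in that feature space.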
Ore production at open-pit mines is usually affected by multiple inputs, yet the complex nonlinear relationships between these inputs and ore production remain unclear. This becomes even more challenging when the training data (e.g., truck haulage information and weather conditions) are massive. Among machine learning (ML) algorithms, deep neural networks (DNNs) are well suited to processing nonlinear and massive data by adjusting the number of neurons and hidden layers. This study adopted a DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data. Before the prediction models were built, principal component analysis (PCA) was employed to reduce the data dimensionality and eliminate multicollinearity among highly correlated input variables. To verify the superiority of the DNN, three ANNs containing only one hidden layer and six traditional ML models were established as benchmarks. The DNN model with multiple hidden layers performed better than the single-hidden-layer ANN models and outperformed the widely applied benchmark models in predicting ore production. This provides engineers and researchers with an accurate method for forecasting ore production, supporting sound budgetary decisions and mine planning at open-pit mines.
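The multicollinearity that PCA removes can be spotted beforehand with a plain Pearson correlation between input columns. A minimal sketch of that pre-screening (the 0.9 cut-off is an illustrative choice, not the paper's):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def highly_correlated(columns, cutoff=0.9):
    """Return index pairs of input columns whose |r| exceeds the cutoff."""
    return [(i, j)
            for i in range(len(columns))
            for j in range(i + 1, len(columns))
            if abs(pearson(columns[i], columns[j])) > cutoff]
```

Pairs flagged this way are exactly the ones whose shared variance PCA collapses into a single principal component before the DNN is trained.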
Distributed generation (DG) technology based on a variety of renewable energy technologies has developed rapidly. Large numbers of multi-type DG units are connected to the distribution network (DN), degrading the stability of DN operation, so a method for effectively connecting multi-energy DG to the DN is urgently needed. Photovoltaic (PV) generation, wind power generation (WPG), fuel cells (FC), and micro gas turbines (MGT) are considered in this paper. A multi-objective optimization model was established based on the life cycle cost (LCC) of DG, voltage quality, voltage fluctuation, system network loss, power deviation of the tie-line, a DG pollution emission index, and the meteorological index weight of the DN. A multi-objective artificial bee colony algorithm (MOABC) was used to determine the optimal location and capacity for the four kinds of DG connected to the DN, and it was compared with three other heuristic algorithms. Simulation tests on the IEEE 33-node and IEEE 69-node test systems show that in the IEEE 33-node system, the total voltage deviation, voltage fluctuation, and system network loss of the DN decreased by 49.67%, 7.47%, and 48.12%, respectively, compared with the case without DG. In the IEEE 69-node system, the corresponding decreases under the MOABC configuration scheme were 54.98%, 35.93%, and 75.17%, indicating that MOABC can reasonably plan the capacity and location of DG and achieve the best trade-off between DG economy and DN operation stability.
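Multi-objective algorithms such as MOABC rank candidate DG placements by Pareto dominance across the objectives (cost, voltage deviation, network loss, and so on). A minimal sketch of the dominance test for minimization objectives, with illustrative objective tuples:

```python
def dominates(a, b):
    """True if solution a is no worse than b in every objective and strictly
    better in at least one, with all objectives to be minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Non-dominated candidates form the Pareto front from which a final siting/sizing scheme is chosen.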
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions, and a detailed literature review was conducted for this purpose. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency, so proper security measures are needed to overcome them. Emerging trends such as machine learning, encryption, artificial intelligence, and real-time monitoring can help mitigate security issues and support a secure and safe future for cloud computing. It was concluded that the security implications of edge computing can be covered with the help of new technologies and techniques.
In this study, the mechanical properties of aluminum-5% magnesium doped with the rare earth metal neodymium were evaluated. Fuzzy logic (FL) and artificial neural network (ANN) models were used to model the mechanical properties of aluminum-5% magnesium with 0-0.9 wt% neodymium. The single input (SI) to both the fuzzy logic and artificial neural network models was the weight percentage of neodymium, while the multiple outputs (MO) were average grain size, ultimate tensile strength, yield strength, elongation, and hardness. The fuzzy logic-based model gave more accurate predictions than the artificial neural network-based model in terms of correlation coefficient (R) values.
Artificial neural networks (ANN), a sophisticated type of information processing system that imitates the neural system of the human brain, can be used to investigate the effects of flux solution concentration, liquid aluminium temperature, tool temperature, and pressure on the thickness of the intermetallic layer at the steel-aluminium interface under solid-liquid pressure bonding. The optimum thickness was determined according to the value of the optimum shearing strength.
The development of modern mathematics and computer science has made artificial neural networks useful tools in a wide range of fields. This paper describes modeling methods for artificial neural networks and discusses programming techniques using the MATLAB Neural Network Toolbox. Applications of neural networks in material hot working are also introduced.
Artificial neural networks (ANNs) are a core component of artificial intelligence and are frequently used in machine learning. In this report, we investigate the use of ANNs to recover saturated signals acquired in high-energy particle and nuclear physics experiments. The inherent properties of the detector and hardware imply that particles with relatively high energies often generate saturated signals. Usually, these saturated signals are discarded during data processing, and some useful information is lost; it is therefore worth restoring them to their normal form. The mapping from a saturated signal waveform to a normal signal waveform constitutes a regression problem. Because the scintillator and collection usually do not form a linear system, typical regression methods such as multi-parameter fitting are not immediately applicable; one important advantage of ANNs is their capability to handle nonlinear regression. To recover the saturated signals, three typical ANNs were tested: backpropagation (BP), simple recurrent (Elman), and generalized radial basis function (GRBF) neural networks, representing a basic network structure, a structure with feedback, and a structure with a kernel function, respectively. The saturated waveforms were produced mainly by environmental gamma rays in a liquid scintillation detector for the China Dark Matter Detection Experiment (CDEX). The training and test data sets consisted of 6000 and 3000 recordings of background radiation, respectively, in which saturation was simulated by truncating each waveform at 40% of the maximum signal. The results show that the GRBF-NN performed best, as measured by a chi-squared test comparing the original and reconstructed signals in the region in which saturation was simulated. This ANN demonstrates powerful efficacy in solving the saturation recovery problem. The proposed method outlines new ideas and possibilities for the recovery of saturated signals in high-energy particle and nuclear physics experiments, and illustrates an innovative application of machine learning to the analysis of experimental data in particle physics.
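The way the training pairs were generated, truncating each waveform at 40% of its maximum, can be sketched directly from the abstract:

```python
def saturate(waveform, frac=0.4):
    """Simulate readout saturation by clipping a waveform at
    frac * max(waveform), as described for the CDEX training set."""
    clip = frac * max(waveform)
    return [min(v, clip) for v in waveform]
```

The clipped waveform is the network input and the original waveform the regression target, turning every clean background recording into a (saturated, normal) training pair.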
Funding: This research was funded by the Scientific Research Project of Leshan Normal University (No. 2022SSDX002) and the Scientific Plan Project of Leshan (No. 22NZD012).
Funding: Supported by the National Natural Science Foundation of China (22278234, 21776151).
Funding: Supported by the National Key R&D Program of China (018YFA0404400) and the National Natural Science Foundation of China (12070131001, 11875075, 11935003, 11975031, 12141501).
Abstract: A novel variational wave function, defined as a Jastrow factor multiplying a backflow-transformed Slater determinant, was developed for A=3 nuclei. The Jastrow factor and backflow transformation were represented by artificial neural networks. With this newly developed wave function, variational Monte Carlo calculations were carried out for ^(3)H and ^(3)He nuclei starting from a nuclear Hamiltonian based on leading-order pionless effective field theory. The obtained ground-state energies and charge radii were successfully benchmarked against results of the highly accurate hyperspherical-harmonics method. The backflow transformation plays a crucial role in improving the nodal surface of the Slater determinant and thus in providing an accurate ground-state energy.
Funding: Supported by the Key Natural Science Research Projects of Colleges and Universities in Anhui Province (No. 2022AH051831).
Abstract: A DC distribution network is an effective solution for increasing renewable energy utilization, with distinct benefits such as high efficiency and easy control. However, a sudden increase in current after a fault may adversely affect network stability. This study proposes an artificial neural network (ANN)-based fault detection and protection method for DC distribution networks. The ANN serves as a classifier for the different fault types on the DC line. A backpropagation neural network predicts the line current, and the fault detection threshold is obtained from the difference between the predicted and actual currents. The proposed method uses only local signals, with no requirement for a strict communication link. Simulation experiments were conducted on a two-terminal DC distribution network modeled in PSCAD/EMTDC, with the algorithm developed on the MATLAB platform. The results confirm that the proposed method can accurately detect and classify line faults within a few milliseconds and is not affected by fault location, fault resistance, noise, or communication delay.
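The detection criterion described, flagging a fault when the predicted current diverges from the measured current, reduces to a few lines; the threshold value and the stand-in "predicted" series below are illustrative assumptions, not the paper's trained network:

```python
def detect_fault(predicted, actual, threshold):
    # Flag sample indices where |predicted - measured| current
    # exceeds the detection threshold, per the criterion described.
    return [i for i, (p, a) in enumerate(zip(predicted, actual))
            if abs(p - a) > threshold]

# A healthy line tracks the prediction; a fault at sample 4 makes the
# measured current jump while the prediction stays near nominal.
predicted = [1.00, 1.00, 1.00, 1.00, 1.00, 1.00]
actual    = [1.00, 1.01, 0.99, 1.00, 3.50, 3.60]
fault_samples = detect_fault(predicted, actual, threshold=0.5)
```

Because only the local measurement and the locally computed prediction are compared, no communication link is needed, matching the abstract's claim.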
Funding: Supported by the National Natural Science Foundation of China (Nos. 51577044 and 52022026).
Abstract: Atmospheric pressure plasma jet (APPJ) arrays have shown potential in a wide range of applications, from material processing to biomedicine. In these applications, targets with complex three-dimensional structures easily affect plasma uniformity, yet uniformity is crucial in areas such as biomedicine. In this work, collaborative modulation of the flow and electric fields is used to improve the uniformity of the downstream plasma. Taking a two-dimensional sloped metallic substrate with a 10° inclined angle as an example, the influences of the flow and electric fields on the downstream distributions of electrons and typical active species are studied with a multi-field coupling model. The electric-field and flow-field modulations are first applied separately: electric-field modulation clearly improves plasma uniformity, whereas the effect of flow-field modulation is limited. Collaborative modulation of both fields is then applied and shows a much better effect on uniformity, from which a basic strategy for uniformity improvement is derived. Finally, an artificial neural network of reasonable accuracy is used to predict the correlation between plasma processing parameters and downstream uniformity for further improvement, and an optional scheme exploiting the flexibility of APPJ arrays is developed for practical demands.
Abstract: Fifth-generation (5G) networks will support the rapid emergence of Internet of Things (IoT) devices operating in a heterogeneous network (HetNet) system. These 5G-enabled IoT devices will produce a surge in data traffic for Mobile Network Operators (MNOs) to handle. At the same time, MNOs are preparing for a paradigm shift that decouples the control and forwarding planes in a Software-Defined Networking (SDN) architecture. Artificial-intelligence-powered Self-Organising Networks (AI-SON) can fit into the SDN architecture by providing prediction and recommender systems that minimise the cost of supporting the MNO's infrastructure. This paper reviews AI-SON frameworks in 5G and SDN, considering their dynamic deployment and functions, especially for SDN support and applications. Each module in the frameworks is discussed to ascertain its relevance in the context of AI-SON and SDN integration. After examining each framework, the identified gaps are summarised as open issues for future work.
Funding: Supported by the Capital's Funds for Health Improvement and Research, No. 2022-2-2072 (to YG).
Abstract: Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury: it can analyze and process data on peripheral nerve injury and repair, while findings from such studies provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific study of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature indexed in the Web of Science from 1994 to 2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, and research transparency). Future research should address data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Funding: Supported by the National Social Science Foundation Annual Project "Research on Evaluation and Improvement Paths of Integrated Development of Disabled Persons" (Grant No. 20BRK029); the National Language Commission's "14th Five-Year Plan" Scientific Research Plan 2023 Project "Domain Digital Language Service Resource Construction and Key Technology Research" (YB145-72); and the National Philosophy and Social Sciences Foundation (Grant No. 20BTQ065).
Abstract: Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the literature on Chinese Sign Language Recognition (CSLR) from the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various other deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies widely used in the early days have been integrated into specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect and Leap Motion), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest. Owing to the uniqueness and complexity of Chinese Sign Language, accuracy, robustness, real-time performance, and user independence remain significant challenges for future sign language recognition research; suitable datasets and evaluation criteria are also worth pursuing.
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (Number PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Najran University under the Research Group Funding Program (Grant Code NU/RG/SERC/12/6).
Abstract: Object segmentation and recognition is an imperative area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines their classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into different regions and label them based on their characteristics. Two distinct kinds of features are then extracted from the segmented images to help identify the objects of interest, and an ANN recognizes the objects based on these features. Experiments were carried out on three standard datasets extensively used in object recognition research, MSRC, MS COCO, and Caltech 101, to measure the effectiveness of the suggested approach. The findings support the system's validity: it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
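As a sketch of the segmentation stage, the toy k-means below clusters pixel feature vectors (e.g., RGB triples) into k regions; the deterministic farthest-point initialisation is an assumption added so the example is reproducible, not a detail from the paper:

```python
def dist2(p, q):
    # Squared Euclidean distance between two feature vectors.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialisation (an assumption for
    # reproducibility; the paper does not specify seeding).
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        # Assign every pixel to its nearest center, then move each
        # center to the mean of its assigned pixels.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        centers = [[sum(col) / len(m) for col in zip(*m)] if m else centers[c]
                   for c, m in enumerate(clusters)]
    return [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]

# Two well-separated groups of RGB-like pixels (dark vs bright region).
pixels = [(10, 12, 11), (12, 10, 13), (11, 11, 12),
          (200, 198, 205), (198, 202, 199), (201, 200, 203)]
labels = kmeans(pixels, k=2)
```

In the full pipeline each resulting region would then be described by the two feature types and passed to the ANN classifier.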
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400); the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011); the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903); the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics; and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success in advancing the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional hardware barely satisfies. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very-large-scale integrated circuits (VLSIC) struggles to meet the growing demand of AI workloads. To address this, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials or devices. Although deploying a training process in a neuromorphic architecture such as a spiking neural network (SNN) is relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which opens opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Abstract: The lethal brain tumor glioblastoma (GBM) has the propensity to grow over time. To improve patient outcomes, it is essential to classify GBM accurately and promptly in order to provide a focused and individualized treatment plan. Deep learning methods, particularly Convolutional Neural Networks (CNNs), have demonstrated high accuracy in a myriad of medical image analysis applications as a result of recent technical breakthroughs. The overall aim of this research is to investigate how CNNs can classify GBMs from medical imaging data, improving prognosis precision and effectiveness. The study demonstrates a methodology that uses a CNN architecture trained on a database of MRI images of this tumor, and the constructed model is assessed on its overall performance. Extensive experiments and comparisons with conventional machine learning techniques and existing classification methods are also made. Emphasizing the possibility of early and accurate prediction in a clinical workflow is crucial because it can strongly influence treatment planning and patient outcomes. The paramount objective is not only to address the classification challenge but also to outline a clear pathway toward enhancing prognosis precision and treatment effectiveness.
Funding: Supported by Dean Research & Consultancy under Grant No. Dean (R&C)/2020-21/1155.
Abstract: This study presents a neural network-based model for predicting linear quadratic regulator (LQR) weighting matrices that achieve a target response reduction. Based on the expected weighting matrices, the LQR algorithm determines the various responses of the structure, which are obtained by numerically solving the governing equation of motion with the state-space approach. Four input parameters are considered for training the neural network: the time history of the ground motion and the percentage reductions in lateral displacement, lateral velocity, and lateral acceleration; the output parameters are the LQR weighting matrices. To study the effectiveness of the LQR-based neural network (LQRNN), the actual percentage reductions in the responses obtained using LQRNN are compared with the target percentage reductions. Furthermore, to investigate the efficacy of an active control system using LQRNN, the controlled responses of a system are compared with the corresponding uncontrolled responses. The trained network effectively predicts weighting parameters that yield reductions in displacement, velocity, and acceleration close to the targets. Based on the simulation study, significant response reductions are observed in the actively controlled system using LQRNN; moreover, the LQRNN algorithm can replace conventional LQR design in an active control system.
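For given weighting matrices Q and R, the LQR gain that the structural responses depend on can be computed by iterating the discrete-time Riccati equation; the double-integrator system below is an illustrative assumption, not the structural model from the study:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    # Fixed-point iteration of the discrete-time algebraic Riccati
    # equation; returns the state-feedback gain K with u = -K x.
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical double-integrator structure, dt = 0.1 s.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weighting (what the LQRNN predicts)
R = np.array([[1.0]])  # control-effort weighting
K = dlqr_gain(A, B, Q, R)
closed_loop = A - B @ K
```

Varying Q and R trades response reduction against control effort, which is exactly the mapping the paper's network learns in reverse: from target reductions back to the weighting matrices.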
Funding: Supported by the National Key R&D Program of China (Nos. 2022YFF0709503, 2022YFB1902700, and 2017YFC0602101) and the Key Research and Development Program of Sichuan Province (Nos. 2023YFG0347 and 2020ZDZX0007).
Abstract: To detect radioactive substances with low activity levels, an anticoincidence detector and a high-purity germanium (HPGe) detector are typically used together to suppress the Compton scattering background, resulting in an extremely low detection limit and improved measurement accuracy. However, the complex and expensive hardware required hinders the application and promotion of this method. This study therefore proposes a method that discriminates the digital waveforms of pulse signals output by an HPGe detector, suppressing the Compton scattering background and achieving a low minimum detectable activity (MDA) without an expensive and complex anticoincidence detector. The electric-field-strength and energy-deposition distributions of the detector are simulated to determine the relationship between pulse shape and energy-deposition location, as well as the characteristics of the energy-deposition distributions for full- and partial-energy deposition events. This relationship is used to develop a pulse-shape-discrimination algorithm based on an artificial neural network for pulse-feature identification. To accurately capture the relationship between the energy deposited by gamma (γ) rays in the detector and the deposition location, four shape parameters are extracted from the pulse signals and used as inputs to the machine-learning model. The pulse signals are then identified and classified to discriminate between partial- and full-energy deposition events, and some partial-energy deposition events are removed to suppress Compton scattering. The proposed method effectively decreases the MDA of an HPGe γ-energy dispersive spectrometer. Test results show that the Compton suppression factors for energy spectra measured from ^(152)Eu, ^(137)Cs, and ^(60)Co radioactive sources are 1.13 (344 keV), 1.11 (662 keV), and 1.08 (1332 keV), respectively, and that the corresponding MDAs are 1.4%, 5.3%, and 21.6% lower, respectively.
Funding: Supported by the Pilot Seed Grant (Grant No. RES0049944) and the Collaborative Research Project (Grant No. RES0043251) from the University of Alberta.
Abstract: Ore production at open-pit mines is usually affected by multiple inputs, yet the complex nonlinear relationships between these inputs and ore production remain unclear. This becomes even more challenging when the training data (e.g., truck haulage information and weather conditions) are massive. Among machine learning (ML) algorithms, the deep neural network (DNN) is a superior method for processing nonlinear and massive data by adjusting the number of neurons and hidden layers. This study adopted a DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data. Before the prediction models were built, principal component analysis (PCA) was employed to reduce the data dimensionality and eliminate multicollinearity among highly correlated input variables. To verify the superiority of the DNN, three ANNs containing only one hidden layer and six traditional ML models were established as benchmarks. The DNN model with multiple hidden layers performed better than the single-hidden-layer ANN models and outperformed the extensively applied benchmark models in predicting ore production. This provides engineers and researchers with an accurate method to forecast ore production, supporting sound budgetary decisions and mine planning at open-pit mines.
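The PCA step used before the DNN can be sketched via an eigendecomposition of the covariance matrix; the toy data below, with one exactly collinear column, only demonstrates how PCA removes multicollinearity, and is not the mine's haulage data:

```python
import numpy as np

def pca_reduce(X, n_components):
    # Center the data, eigendecompose its covariance matrix, and
    # project onto the leading principal components.
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    explained = vals[order] / vals.sum()
    return Xc @ vecs[:, order], explained

# Toy inputs: column 2 is the sum of columns 0 and 1, a perfectly
# multicollinear feature that PCA folds into two components.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.column_stack([base[:, 0], base[:, 1], base[:, 0] + base[:, 1]])
Z, explained = pca_reduce(X, 2)
```

The reduced matrix Z, rather than the raw correlated inputs, would then feed the DNN, which is the role PCA plays in the study.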
Abstract: Distributed generation (DG) technology based on a variety of renewable energy sources has developed rapidly. A large number of DG units of multiple types are connected to the distribution network (DN), reducing the stability of DN operation, so a method that can effectively connect multi-energy DG to the DN is urgently needed. Photovoltaic (PV), wind power generation (WPG), fuel cell (FC), and micro gas turbine (MGT) units are considered in this paper. A multi-objective optimization model was established based on the life cycle cost (LCC) of DG, voltage quality, voltage fluctuation, system network loss, power deviation of the tie-line, the DG pollution emission index, and the meteorological index weight of the DN. A multi-objective artificial bee colony algorithm (MOABC) was used to determine the optimal locations and capacities for the four kinds of DG connected to the DN, and it was compared with three other heuristic algorithms. Simulation tests on the IEEE 33-node and IEEE 69-node test systems show that, in the IEEE 33-node system, the total voltage deviation, voltage fluctuation, and system network loss of the DN decreased by 49.67%, 7.47%, and 48.12%, respectively, compared with the case without DG. In the IEEE 69-node system, the MOABC configuration scheme decreased these quantities by 54.98%, 35.93%, and 75.17%, respectively, indicating that MOABC can reasonably plan the capacity and location of DG and achieve the best trade-off between DG economy and DN operation stability.
Abstract: Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions, and a detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Proper security measures are therefore needed to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and support a secure and safe future for cloud computing. It was concluded that the security implications of edge computing can be covered with the help of new technologies and techniques.
Abstract: In this study, the mechanical properties of aluminum-5% magnesium doped with the rare earth metal neodymium were evaluated. Fuzzy logic (FL) and an artificial neural network (ANN) were used to model the mechanical properties of aluminum-5% magnesium with 0-0.9 wt% neodymium. The single input (SI) to the fuzzy logic and artificial neural network models was the weight percentage of neodymium, while the multiple outputs (MO) were average grain size, ultimate tensile strength, yield strength, elongation, and hardness. The fuzzy logic-based model predicted more accurately than the artificial neural network-based model in terms of the correlation coefficient values (R).
Abstract: Artificial neural networks (ANN), a sophisticated type of information processing system that imitates the neural system of the human brain, can be used to investigate the effects of flux solution concentration, liquid aluminium temperature, tool temperature, and pressure on the thickness of the intermetallic layer at the steel-aluminium interface during solid-liquid pressure bonding of steel and aluminium. The optimum thickness was determined according to the value of the optimum shearing strength.
Abstract: The development of modern mathematics and computer science has made artificial neural networks among the most useful tools in a wide range of fields. This paper describes modeling methods for artificial neural networks and discusses programming techniques using the Matlab Neural Network Toolbox. The application of neural networks in material hot working is also introduced.
Funding: Supported by the "Detection of very low-flux background neutrons in China Jinping Underground Laboratory" project of the National Natural Science Foundation of China (No. 11275134).
Abstract: Artificial neural networks (ANNs) are a core component of artificial intelligence and are frequently used in machine learning. In this report, we investigate the use of ANNs to recover saturated signals acquired in high-energy particle and nuclear physics experiments. The inherent properties of the detector and hardware imply that particles with relatively high energies often generate saturated signals. Usually, these saturated signals are discarded during data processing, so some useful information is lost; it is therefore worth restoring saturated signals to their normal form. The mapping from a saturated signal waveform to a normal signal waveform constitutes a regression problem. Because the scintillator and collection electronics usually do not form a linear system, typical regression methods such as multi-parameter fitting are not immediately applicable; one important advantage of ANNs is their capability to handle nonlinear regression problems. To recover the saturated signals, three typical ANNs were tested: backpropagation (BP), simple recurrent (Elman), and generalized radial basis function (GRBF) neural networks (NNs), representing a basic network structure, a network structure with feedback, and a network structure with a kernel function, respectively. The saturated waveforms were produced mainly by environmental gamma rays in a liquid scintillation detector for the China Dark Matter Detection Experiment (CDEX). The training and test data sets consisted of 6000 and 3000 recordings of background radiation, respectively, in which saturation was simulated by truncating each waveform at 40% of the maximum signal. Comparing the original and reconstructed signals in the saturated region with a Chi-squared test shows that the GRBF-NN performed best, demonstrating a powerful efficacy in solving the saturation recovery problem. The proposed method outlines new ideas and possibilities for the recovery of saturated signals in high-energy particle and nuclear physics experiments, and illustrates an innovative application of machine learning to the analysis of experimental data in particle physics.
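The 40% truncation used to simulate saturation is easy to reproduce; the toy restoration below scales a known pulse template to the unclipped samples by least squares, which is only a linear stand-in for the GRBF network the paper actually trains, and the template itself is an assumption:

```python
def saturate(waveform, frac=0.4):
    # Clip the waveform at frac * its maximum, mimicking how the
    # study truncated each recording at 40% of the peak signal.
    cap = frac * max(waveform)
    return [min(v, cap) for v in waveform]

def restore(clipped, template):
    # Least-squares scale of a known pulse template to the samples
    # that escaped clipping, then rebuild the full waveform. The paper
    # instead learns this mapping with a nonlinear GRBF neural network.
    cap = max(clipped)
    keep = [i for i, v in enumerate(clipped) if v < cap]
    scale = (sum(clipped[i] * template[i] for i in keep)
             / sum(template[i] ** 2 for i in keep))
    return [scale * t for t in template]

# Assumed unit pulse shape; the "true" pulse is twice that template.
template = [0.1, 0.3, 0.7, 1.0, 0.7, 0.3, 0.1]
original = [2.0 * t for t in template]
clipped = saturate(original)          # peak region flattened at 0.8
restored = restore(clipped, template)
```

Real scintillator pulses vary in shape, which is why a nonlinear learned regressor is needed rather than this single fixed template.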