Metal-ion batteries (MIBs), including alkali metal-ion (Li⁺, Na⁺, and K⁺), multi-valent metal-ion (Zn²⁺, Mg²⁺, and Al³⁺), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is significantly influenced by numerous variables, resulting in multi-dimensional and long-term challenges in the field of battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the utilization of ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in various applications, which encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
Traditional linear statistical methods cannot provide effective prediction results due to the complexity of the human mind. In this paper, we apply machine learning to the field of funding allocation decision making and explore two questions: whether the personal characteristics of evaluators help predict the outcome of the evaluation decision, and how to improve the accuracy of machine learning methods on the imbalanced dataset of grant funding. Since funding data are characterized by an imbalanced class distribution, we propose a slacked weighted entropy decision tree (SWE-DT), which assigns a weight to each class with the help of a slacked factor. The experimental results show that the SWE decision tree performs well, with a sensitivity of 0.87, a specificity of 0.85, and an average accuracy of 0.75. It also provides satisfactory classification performance with an Area Under the Curve (AUC) of 0.87. This implies that the proposed method accurately classifies minority-class instances and is suitable for imbalanced datasets. By adding evaluator factors into the model, sensitivity is improved by over 9%, specificity by nearly 8%, and average accuracy by 7%, which demonstrates the feasibility of using evaluators' characteristics as predictors. By innovatively using machine learning to predict evaluation decisions from the personal characteristics of evaluators, this work enriches the literature in both decision making and machine learning.
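The abstract does not give the exact form of the slacked weighting, so the snippet below is only a minimal sketch of the underlying idea: a class-weighted entropy in which minority classes receive larger weights, so that splits mixing them in are penalized more. The weight values are illustrative assumptions, not the paper's.

```python
import numpy as np

def weighted_entropy(labels, class_weights):
    # Class-weighted entropy: each class's -p*log2(p) term is scaled by its
    # weight, so impurity involving up-weighted (minority) classes costs more.
    classes, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    w = np.array([class_weights[c] for c in classes])
    return float(-(w * p * np.log2(p)).sum())

# Imbalanced toy labels: class 1 is the 10% minority
labels = np.array([0] * 90 + [1] * 10)
plain = weighted_entropy(labels, {0: 1.0, 1: 1.0})    # ordinary entropy
slacked = weighted_entropy(labels, {0: 1.0, 1: 9.0})  # minority up-weighted
```

A tree learner that maximizes the drop in this weighted impurity will prefer splits that isolate minority-class instances, which is the effect the SWE-DT aims for.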
Recent studies on computer vision and deep learning-based post-earthquake inspections of RC structures mainly perform well on specific tasks, while the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes the semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which utilizes the extracted multi-level features from the backbone network as mutual guidance for its individual segmentation task. A synthetical loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are further conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning model gains a general improvement over all single-task and dual-task models.
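The abstract does not state the exact rule for the real-time adaptive coefficients, so the sketch below is only one plausible reading of "focus on the most unstably fluctuating loss": weight each task by the recent standard deviation of its loss. The history values are invented for illustration.

```python
import numpy as np

def adaptive_task_weights(loss_history, eps=1e-8):
    # Weight each task by the recent fluctuation (standard deviation) of its
    # loss, so the combined loss focuses on the most unstable task.
    hist = np.asarray(loss_history, dtype=float)  # shape: (steps, n_tasks)
    fluct = hist.std(axis=0) + eps
    return fluct / fluct.sum()                    # coefficients sum to 1

# Task 0 (components) oscillates; tasks 1 (damage) and 2 (deterioration) are stable
history = [[1.0, 0.50, 0.30],
           [0.2, 0.49, 0.31],
           [0.9, 0.50, 0.30],
           [0.1, 0.51, 0.29]]
w = adaptive_task_weights(history)
```

The total loss at each step would then be the dot product of these coefficients with the three per-task losses.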
With the amalgamation of wearable systems equipped with inertial sensors, such as a gyroscope, and machine learning, a therapy regimen can be objectively quantified, and the initial and final phases of a one-year therapy regimen can then be distinguished through machine learning. In the context of rehabilitation of a hemiplegic ankle, a longitudinal therapy regimen incorporating stretching followed by a series of repetitions of raising and lowering the foot of the hemiplegic ankle can be applied over the course of a year. Using a smartphone equipped with an application to function as a wearable and wireless gyroscope platform, mounted to the dorsum of the foot by an armband, the initial and final phases of a one-year longitudinally applied therapy regimen can be objectively quantified and recorded for subsequent machine learning. Considerable classification accuracy is attained in distinguishing between the initial and final phases by a support vector machine for a one-year longitudinally applied hemiplegic ankle therapy regimen, based on the gyroscope signal data obtained by a smartphone functioning as a wearable and wireless inertial sensor system.
With its high mountains, deep valleys, and complex geological formations, Jiuzhaigou County has the typical characteristics of a disaster-prone mountainous region in southwestern China. On August 8, 2017, a strong Ms 7.0 earthquake occurred in this region, causing some of the mountains in the area to become loose and cracked. A survey and evaluation of landslides in this area can therefore help to reveal hazards and support effective measures for subsequent disaster management. However, different evaluation models can yield different spatial distributions of landslide susceptibility, so selecting the appropriate model and performing the optimal combination of parameters is the most effective way to improve the susceptibility evaluation. To construct an evaluation indicator system suitable for Jiuzhaigou County, we extracted 12 factors affecting the occurrence of landslides, including slope, elevation, and slope surface, and constructed samples. At the core of the transformer model is a self-attention mechanism that enables any two of the features to be interlinked, after which feature extraction is performed via a feed-forward network (FFN). We exploited its encoding structure to transform it into a deep learning model better suited to landslide susceptibility evaluation. The results show that the transformer model has the highest accuracy (86.89%), followed by the random forest and support vector machine models (84.47% and 82.52%, respectively), while the logistic regression model achieves the lowest accuracy (79.61%). Accordingly, this deep learning model provides a new tool to achieve more accurate zonation of landslide susceptibility in Jiuzhaigou County.
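The core operation the abstract describes, letting any two conditioning factors interact through self-attention before a feed-forward network, can be sketched in a few lines. The embedding dimension and random weights below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: every row (factor embedding) attends
    # to every other row, so any two features can be interlinked.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
    return w @ V, w

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(12, d))  # 12 landslide conditioning factors, embedded
out, attn = self_attention(X, rng.normal(size=(d, d)),
                           rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

In the full model this output would pass through the FFN and a classification head that scores susceptibility.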
AIM: To evaluate the clinical application value of an artificial intelligence-assisted pathologic myopia (PM-AI) diagnosis model based on deep learning. METHODS: A total of 1156 readable color fundus photographs were collected and annotated based on the META-PM (meta-analysis of pathologic myopia) diagnostic criteria (2015). The PM-AI system and four eye doctors (retinal specialists 1 and 2, and ophthalmologists 1 and 2) independently evaluated the color fundus photographs to determine whether they were indicative of PM and whether myopic choroidal neovascularization (mCNV) was present. The performance in identifying PM and mCNV by the PM-AI system and the eye doctors was compared and evaluated via the relevant statistical analysis. RESULTS: For PM identification, the sensitivity of the PM-AI system was 98.17%, which was comparable to specialist 1 (P=0.307) but higher than specialist 2 and ophthalmologists 1 and 2 (P<0.001). The specificity of the PM-AI system was 93.06%, which was lower than specialists 1 and 2 but higher than ophthalmologists 1 and 2. The PM-AI system showed a Kappa value of 0.904, while the Kappa values of specialists 1 and 2 and ophthalmologists 1 and 2 were 0.968, 0.916, 0.772, and 0.730, respectively. For mCNV identification, the PM-AI system showed a sensitivity of 84.06%, which was comparable to specialists 1 and 2 and ophthalmologist 2 (P>0.05) and higher than ophthalmologist 1. The specificity of the PM-AI system was 95.31%, which was lower than specialists 1 and 2 but higher than ophthalmologists 1 and 2. The PM-AI system gave a Kappa value of 0.624, while the Kappa values of specialists 1 and 2 and ophthalmologists 1 and 2 were 0.864, 0.732, 0.304, and 0.238, respectively. CONCLUSION: In comparison to the senior ophthalmologists, the PM-AI system based on deep learning exhibits excellent performance in PM and mCNV identification and is an effective auxiliary diagnostic tool for the clinical screening of PM and mCNV.
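The agreement statistics used to compare the PM-AI system with the doctors can all be computed from a 2×2 confusion table; the counts below are made up for illustration, not taken from the study.

```python
def sensitivity_specificity(tp, fp, tn, fn):
    # Sensitivity: fraction of true positives found; specificity: fraction
    # of true negatives correctly rejected.
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fp, tn, fn):
    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = tp + fp + tn + fn
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (po - pe) / (1 - pe)

sens, spec = sensitivity_specificity(tp=40, fp=5, tn=45, fn=10)
kappa = cohens_kappa(tp=40, fp=5, tn=45, fn=10)
```

Kappa is the right headline number here because, unlike raw accuracy, it discounts the agreement two graders would reach by chance alone.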
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This review provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
In software testing, the quality of test cases is crucial, but manual generation is time-consuming. Various automatic test case generation methods exist, requiring careful selection based on program features. Current evaluation methods compare only a limited set of metrics, which neither supports a larger number of metrics nor considers the relative importance of each metric to the final assessment. To address this, we propose an evaluation tool, the Test Case Generation Evaluator (TCGE), based on the learning to rank (L2R) algorithm. Unlike previous approaches, our method comprehensively evaluates algorithms by considering multiple metrics, resulting in a more reasoned assessment. The main principle of the TCGE is the formation of feature vectors that are of concern to the tester. Through training, the feature vectors are sorted to generate a list, with the order of the methods on the list determined according to their effectiveness on the assembly under test. We implement the TCGE using three L2R algorithms: ListNet, LambdaMART, and RFLambdaMART. Evaluation employs a dataset with features of classical test case generation algorithms and three metrics: Normalized Discounted Cumulative Gain (NDCG), Mean Average Precision (MAP), and Mean Reciprocal Rank (MRR). The results demonstrate the TCGE's superior effectiveness in evaluating test case generation algorithms compared with other methods. Among the three L2R algorithms, RFLambdaMART proves the most effective, achieving an accuracy above 96.5%, surpassing LambdaMART by 2% and ListNet by 1.5%. Consequently, the TCGE framework exhibits significant application value in the evaluation of test case generation algorithms.
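Of the three ranking metrics the TCGE is scored with, NDCG is the least obvious; a minimal implementation is shown below, using the common exponential-gain form (an assumption, since the paper's exact variant is not stated in the abstract).

```python
import numpy as np

def dcg(rels):
    r = np.asarray(rels, dtype=float)
    # Exponential gain 2^rel - 1, discounted by log2(position + 1), 1-based
    return float(((2 ** r - 1) / np.log2(np.arange(2, r.size + 2))).sum())

def ndcg(ranked_rels):
    # Normalize by the DCG of the ideal (descending-relevance) ordering
    return dcg(ranked_rels) / dcg(sorted(ranked_rels, reverse=True))

perfect = ndcg([3, 2, 1, 0])   # items already in ideal order
swapped = ndcg([0, 2, 1, 3])   # best item demoted to last place
```

A ranking identical to the ideal one scores exactly 1, and any demotion of a relevant item pushes the score below 1, which is what makes NDCG suitable for comparing the ordered lists the TCGE produces.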
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression contributed to neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that this is correlated with lipid metabolism in poststroke cognitive dysfunction.
In recent years, as intelligent transportation systems (ITS) such as autonomous driving and advanced driver-assistance systems have become more popular, there has been a rise in the need for different sources of traffic situation data. The classification of the road surface type (RST) is among the most essential of these situational data and can be utilized across the entirety of the ITS domain. Recently, the benefits of deep learning (DL) approaches for sensor-based RST classification have been demonstrated through automatic feature extraction without manual methods. The ability to extract important features is vital to making RST classification more accurate. This work investigates the most recent advances in DL algorithms for sensor-based RST classification and explores appropriate feature extraction models. We used different convolutional neural networks to better understand the functional architecture, and we constructed an enhanced DL model called SE-ResNet, which uses residual connections and squeeze-and-excitation modules to improve the classification performance. Comparative experiments with a publicly available benchmark dataset, the passive vehicular sensors dataset, have shown that SE-ResNet outperforms other state-of-the-art models. The proposed model achieved the highest accuracy of 98.41% and the highest F1-score of 98.19% when classifying surfaces into segments of dirt, cobblestone, or asphalt roads. Moreover, the proposed model significantly outperforms baseline DL networks (CNN, LSTM, and CNN-LSTM). The proposed SE-ResNet achieved classification accuracies of 98.98% for asphalt roads, 97.02% for cobblestone roads, and 99.56% for dirt roads.
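The squeeze-and-excitation module that distinguishes SE-ResNet from a plain ResNet can be sketched in numpy. The channel count, reduction ratio, and random weights below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def se_block(fmap, W1, W2):
    # Squeeze: global-average-pool each channel down to one scalar.
    z = fmap.mean(axis=(1, 2))                       # (C,)
    # Excitation: bottleneck MLP + sigmoid yields per-channel gates in (0, 1).
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))
    # Recalibrate: rescale every channel by its learned gate.
    return fmap * s[:, None, None]

rng = np.random.default_rng(1)
C, H, W, r = 16, 8, 8, 4                             # r: reduction ratio
fmap = rng.normal(size=(C, H, W))
out = se_block(fmap, rng.normal(size=(C // r, C)), rng.normal(size=(C, C // r)))
```

In SE-ResNet this recalibrated map is then combined with the block input through the residual connection, letting the network emphasize the sensor channels most informative for the current surface.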
Recommending high-quality news to users is vital to improving user stickiness and news platforms' reputation. However, existing news quality evaluation methods, such as clickbait detection and popularity prediction, struggle to reflect news quality comprehensively and concisely. This paper defines news quality as the ability of news articles to elicit clicks and comments from users, which represents whether the news article can attract widespread attention and discussion. Based on the above definition, this paper first presents a straightforward method to measure news quality based on the comments and clicks of news and defines four news quality indicators; the dataset can then be labeled automatically by this method. Next, this paper proposes a deep learning model that integrates explicit and implicit news information for news quality evaluation (EINQ). The explicit information includes the headline, source, and publishing time of the news, which attract users to click. The implicit information refers to the news article's content, which attracts users to comment. The implicit and explicit information affect users' click and comment behavior differently. For modeling explicit information, a typical convolutional neural network (CNN) is used to obtain a semantic representation of the news headline. For modeling implicit information, a hierarchical attention network (HAN) is exploited to extract a semantic representation of the news content, while the latent Dirichlet allocation (LDA) model provides the topic distribution of the news as a semantic supplement. Considering the different roles of explicit and implicit information in quality evaluation, the EINQ exploits an attention layer to fuse them dynamically. The proposed model yields an accuracy of 82.31% and an F-score of 80.51% on a real-world dataset from Toutiao, which shows the effectiveness of dynamically fusing explicit and implicit information and demonstrates performance improvements over a variety of baseline models in news quality evaluation. This work provides empirical evidence for explicit and implicit factors in news quality evaluation and offers a new approach to the task.
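The dynamic fusion step can be sketched as attention over the two view representations. This is only a minimal illustration: in EINQ both the view encoders (CNN, HAN, LDA) and the attention parameters are learned, whereas here the vectors and the scoring vector `w` are fixed, invented values.

```python
import numpy as np

def attention_fuse(explicit_vec, implicit_vec, w):
    # Score each view with a shared attention vector, softmax the two scores,
    # and return the attention-weighted sum of the views.
    views = np.stack([explicit_vec, implicit_vec])  # (2, d)
    scores = views @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ views, alpha

d = 4
explicit = np.array([1.0, 0.0, 0.5, 0.2])   # stand-in headline/source/time encoding
implicit = np.array([0.3, 0.8, 0.1, 0.9])   # stand-in content (HAN + LDA) encoding
fused, alpha = attention_fuse(explicit, implicit, w=np.ones(d))
```

Because the weights `alpha` depend on the inputs, the model can lean on the headline view for click-driven articles and on the content view for comment-driven ones.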
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL's components: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
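The paper's hybrid selective score is not spelled out in the abstract, so as a stand-in the sketch below orders candidate source datasets by a crude transferability proxy: the distance between feature-distribution means. The real SA-ITL criterion is richer; this only illustrates the "select and order datasets, use the closest first" idea.

```python
import numpy as np

def order_sources(target_feats, source_feats_list):
    # Rank source datasets by closeness of their feature mean to the target's:
    # closer distributions are assumed to transfer better (and are used first),
    # which also helps avoid negative transfer from distant domains.
    t_mu = np.asarray(target_feats).mean(axis=0)
    dists = [np.linalg.norm(np.asarray(s).mean(axis=0) - t_mu)
             for s in source_feats_list]
    return sorted(range(len(dists)), key=dists.__getitem__)

target = np.zeros((10, 3))
sources = [np.full((10, 3), 5.0),   # far from the target domain
           np.full((10, 3), 0.1),   # very close
           np.full((10, 3), 1.0)]   # in between
order = order_sources(target, sources)
```

Incremental transfer would then fine-tune the target model on the sources in this order, nearest domain first.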
Fracture porosity is one of the key parameters for characterizing fractured reservoirs. However, fracture porosity is difficult to calculate from conventional logging data due to the severe anisotropy of these reservoirs. To deal with the problem, an equivalent macroscopic anisotropic formation model based on dual laterolog (DLL) data is adopted to cyclically assign parameters such as bedrock resistivity (RB), fluid resistivity in fractures (RFL), fracture dip angle (FDA), fracture thickness, and fracture spacing, and to produce massive data for formation modeling. A large number of training data obtained through three-dimensional finite element forward modeling, combined with the functional relationship between DLL responses and fracture parameters trained and summarized by a deep neural network, are used to establish a new fast forward model for calculating DLL responses in fractured formations. A new fracture porosity inversion model for fractured reservoirs, based on a gradient optimization inversion algorithm combined with a multi-initial-value inversion strategy, is then proposed. While running the model, the formation is divided into eight intervals according to bedrock resistivity, and the fracture dip angle from 0° to 90° is divided every 0.5° to improve the operation speed and efficiency. The results of numerical verification show that when bedrock resistivity is greater than 1000 Ω·m, the mean absolute error (MAE) of fracture porosity inversion is 0.001658% for horizontal fractures, 0.00413% for intermediate fractures, and 0.0027% for quasi-vertical fractures. When bedrock resistivity is between 100 Ω·m and 1000 Ω·m, the MAE of fracture porosity inversion is 0.003% for horizontal fractures, 0.0034% for intermediate fractures, and 0.00348% for quasi-vertical fractures. Fracture parameters determined by the fracture porosity inversion model with actual data are in good agreement with the results of micro-resistivity imaging logging.
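The inversion idea, fitting a forward model of the DLL response to an observed resistivity and reading off the porosity, can be illustrated with a toy forward model and a grid search standing in for the paper's gradient-based, multi-initial optimization. The functional form and constants are invented for illustration only.

```python
import numpy as np

def forward_dll(phi, rb=1000.0, k=50.0):
    # Toy forward model: apparent resistivity drops as conductive fluid-filled
    # fracture porosity phi grows (stand-in for the FEM/DNN-trained forward model).
    return rb / (1.0 + k * phi)

def invert_porosity(observed, grid=None):
    # Pick the porosity whose predicted response best matches the observation.
    # A dense grid search replaces the paper's gradient descent from multiple
    # initial values, but both minimize the same squared misfit.
    if grid is None:
        grid = np.linspace(0.0, 0.1, 10001)
    misfit = (forward_dll(grid) - observed) ** 2
    return float(grid[np.argmin(misfit)])

true_phi = 0.02
est = invert_porosity(forward_dll(true_phi))
```

Running many such synthetic cases and averaging the absolute error of `est` against `true_phi` is exactly how an MAE figure like those quoted above would be produced.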
Social media (SM)-based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for the early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreak detection and the role of ML and DL in enhancing their performance. Every year, SM generates a large amount of data related to epidemic outbreaks, particularly Twitter data. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, following their success in many other application domains, both ML and DL have become popular in SM analysis as well. This paper aims to provide an overview of epidemic outbreaks in SM and then presents a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review offers suggestions, ideas, and proposals, and highlights the ongoing challenges in the field of early outbreak detection that still need to be addressed.
The evaluation of disease severity through endoscopy is pivotal in managing patients with ulcerative colitis, a condition with significant clinical implications. However, endoscopic assessment is susceptible to inherent variations, both within and between observers, compromising the reliability of individual evaluations. This study addresses this challenge by harnessing deep learning to develop a robust model capable of discerning discrete levels of endoscopic disease severity. To initiate this endeavor, a multi-faceted approach is embarked upon. The dataset is meticulously preprocessed, enhancing the quality and discriminative features of the images through contrast limited adaptive histogram equalization (CLAHE). A diverse array of data augmentation techniques, encompassing various geometric transformations, is leveraged to fortify the dataset's diversity and facilitate effective feature extraction. A fundamental aspect of the approach involves the strategic incorporation of transfer learning principles, harnessing a modified ResNet-50 architecture. This augmentation, informed by domain expertise, contributed significantly to enhancing the model's classification performance. The research yielded a highly promising model, demonstrating an accuracy of 86.85%, coupled with a recall of 82.11% and a precision of 89.23%.
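CLAHE builds on plain histogram equalization by applying it per tile with a clip limit on the histogram; the global version below shows the core contrast-stretching step on an 8-bit grayscale image (it assumes a non-constant image).

```python
import numpy as np

def hist_equalize(img):
    # Map each gray level through the normalized cumulative histogram so the
    # output levels spread over the full 0-255 range. CLAHE additionally
    # tiles the image and clips the histogram before this remapping.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0),
                  0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(2)
low_contrast = rng.integers(100, 111, size=(32, 32), dtype=np.uint8)  # levels 100-110
stretched = hist_equalize(low_contrast)
```

The tiling and clip limit are what let CLAHE enhance local mucosal texture without over-amplifying noise, which is why it is preferred over this global form for endoscopic images.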
This paper is concerned with a novel integrated multi-step heuristic dynamic programming (MsHDP) algorithm for solving optimal control problems. It is shown that, initialized by the zero cost function, MsHDP can converge to the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. Then, the stability of the system is analyzed using control policies generated by MsHDP. Also, a general stability criterion is designed to determine the admissibility of the current control policy; that is, the criterion is applicable not only to traditional value iteration and policy iteration but also to MsHDP. Further, based on the convergence result and the stability criterion, the integrated MsHDP algorithm using immature control policies is developed to greatly accelerate learning efficiency. Besides, an actor-critic structure is utilized to implement the integrated MsHDP scheme, where neural networks serve as the parametric architecture to evaluate and improve the iterative policy. Finally, two simulation examples are given to demonstrate that the learning effectiveness of the integrated MsHDP scheme surpasses those of other fixed or integrated methods.
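The zero-initialized value iteration that MsHDP generalizes (by inserting multi-step policy evaluation between policy improvements) looks like this on a finite MDP; the two-state toy problem is invented for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-10):
    # P[a, s, s']: transition probabilities; R[s, a]: rewards.
    # Start from the zero value function, as the MsHDP convergence result does.
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum('asn,n->sa', P, V)  # one-step Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Two-state toy MDP: action 1 from state 0 pays reward 1 and moves to state 1;
# action 0 stays put; no other rewards.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # action 0: stay
              [[0.0, 1.0], [1.0, 0.0]]])   # action 1: switch state
R = np.array([[0.0, 1.0], [0.0, 0.0]])
V, policy = value_iteration(P, R)
```

Here the fixed point satisfies V(0) = 1 + γ²V(0), i.e. V(0) = 1/(1 - γ²), and the greedy policy cycles between the two states to collect the reward; MsHDP reaches such fixed points faster by evaluating each policy over multiple steps per iteration.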
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
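The aggregation step at the heart of most FL deployments, FedAvg, is a size-weighted average of client updates; this sketch shows one aggregation round for a model represented as a flat parameter vector, with made-up client values.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    # Server-side aggregation: average client parameter vectors weighted by
    # local dataset size. Only parameters travel; raw data stays on-device.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * np.asarray(p) for c, p in zip(coeffs, client_params))

# Two clients: the larger one (75 samples) dominates the average
w_global = fedavg([np.array([1.0, 2.0]), np.array([3.0, 6.0])], [25, 75])
```

A full round alternates this server step with local training: the server broadcasts `w_global`, each client takes a few gradient steps on its own data, and the updated vectors are aggregated again.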
基金supported by the National Natural Science Foundation of China(52203364,52188101,52020105010)the National Key R&D Program of China(2021YFB3800300,2022YFB3803400)+2 种基金the Strategic Priority Research Program of Chinese Academy of Science(XDA22010602)the China Postdoctoral Science Foundation(2022M713214)the China National Postdoctoral Program for Innovative Talents(BX2021321)。
文摘Metal-ion batteries (MIBs), including alkali metal-ion (Li^(+), Na^(+), and K^(+)), multi-valent metal-ion (Zn^(2+), Mg^(2+), and Al^(3+)), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is significantly influenced by numerous variables, resulting in multi-dimensional and long-term challenges in the field of battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the utilization of ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in various applications, which encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
基金This research project is supported by the Science Foundation of Beijing Language and Culture University(supported by the Fundamental Research Funds for the Central Universities)(21YBB35)the Hainan Provincial Natural Science Foundation of China(620RC562)+1 种基金the Program of Hainan Association for Science and Technology Plans to Youth R&D Innovation(Grant No.QCXM201910)the Postdoctoral Science Foundation of China(2021M690338).
文摘Traditional linear statistical methods cannot provide effective prediction results due to the complexity of the human mind. In this paper, we apply machine learning to the field of funding allocation decision making and explore two questions: do the personal characteristics of evaluators help predict the outcome of an evaluation decision, and how can the accuracy of machine learning methods be improved on the imbalanced dataset of grant funding? Since funding data is characterized by an imbalanced class distribution, we propose a slacked weighted entropy decision tree (SWE-DT), which assigns a weight to each class with the help of a slack factor. The experimental results show that the SWE decision tree performs well, with a sensitivity of 0.87, a specificity of 0.85, and an average accuracy of 0.75. It also provides satisfactory classification accuracy, with an Area Under the Curve (AUC) of 0.87. This implies that the proposed method accurately classifies minority-class instances and is suitable for imbalanced datasets. By adding evaluator factors into the model, sensitivity improves by over 9%, specificity by nearly 8%, and average accuracy by 7%, which confirms the feasibility of using evaluators' characteristics as predictors. By innovatively using machine learning to predict evaluation decisions based on the personal characteristics of evaluators, this work enriches the literature in both the decision-making and machine learning fields.
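The class-weighting idea behind the SWE-DT can be illustrated with a minimal sketch: class weights inflate the entropy contribution of the minority class so that splits separating it are favored. The inverse-frequency weight formula and the role of the slack exponent are assumptions for illustration only; the paper's exact slacked-factor definition is not given in the abstract.

```python
import math
from collections import Counter

def class_weights(labels, slack=1.0):
    """Inverse-frequency class weights; `slack` softens or sharpens the
    imbalance correction. (Illustrative assumption, not the paper's formula.)"""
    counts = Counter(labels)
    n = len(labels)
    return {c: (n / (len(counts) * k)) ** slack for c, k in counts.items()}

def weighted_entropy(labels, weights):
    """Entropy in which each class's probability mass is scaled by its weight,
    so rare classes contribute as much as frequent ones to split scoring."""
    counts = Counter(labels)
    total = sum(weights[c] * k for c, k in counts.items())
    ent = 0.0
    for c, k in counts.items():
        p = weights[c] * k / total
        ent -= p * math.log2(p)
    return ent

labels = [0] * 90 + [1] * 10          # imbalanced toy node: 90 vs 10
w = class_weights(labels, slack=1.0)  # minority class gets the larger weight
```

With these weights the 90/10 node scores a full bit of entropy (as if balanced), whereas unweighted entropy would under-value splitting off the minority class.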
基金National Key R&D Program of China under Grant No.2019YFC1511005the National Natural Science Foundation of China under Grant Nos.51921006,52192661 and 52008138+2 种基金the China Postdoctoral Science Foundation under Grant Nos.BX20190102 and 2019M661286the Heilongjiang Natural Science Foundation under Grant No.LH2022E070the Heilongjiang Province Postdoctoral Science Foundation under Grant Nos.LBH-TZ2016 and LBH-Z19064。
文摘Recent computer vision and deep learning-based approaches to post-earthquake inspection of RC structures mainly perform well on specific tasks, while the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes the semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which utilizes the extracted multi-level features from the backbone network as mutual guidance for its individual segmentation task. A synthetical loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are further conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning model gains a general improvement compared to all single-task and dual-task models.
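The adaptive-coefficient idea above, focusing on the most unstably fluctuating task loss, can be sketched as follows. The variance-over-a-sliding-window measure of fluctuation and the normalization are illustrative assumptions; the paper's exact coefficient formula is not stated in the abstract.

```python
from collections import deque

class AdaptiveMultiTaskLoss:
    """Weights each task loss by its recent fluctuation (variance over a
    sliding window), so the most unstable task receives the largest
    coefficient. Illustrative scheme, not the paper's exact formula."""
    def __init__(self, n_tasks, window=5):
        self.hist = [deque(maxlen=window) for _ in range(n_tasks)]

    def __call__(self, losses):
        for h, loss in zip(self.hist, losses):
            h.append(loss)
        fluct = []
        for h in self.hist:
            m = sum(h) / len(h)
            fluct.append(sum((x - m) ** 2 for x in h) / len(h) + 1e-8)
        z = sum(fluct)
        coeffs = [f / z for f in fluct]          # normalized, sum to 1
        total = sum(c * l for c, l in zip(coeffs, losses))
        return total, coeffs

mtl = AdaptiveMultiTaskLoss(n_tasks=3)
# task 0 fluctuates between steps; tasks 1 and 2 are stable
for step_losses in [(1.0, 0.5, 0.5), (0.2, 0.5, 0.5), (1.1, 0.5, 0.5)]:
    total, coeffs = mtl(step_losses)
```

After the three steps, almost all the coefficient mass sits on the fluctuating task 0, which is the intended "focus on the most unstable loss" behavior.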
文摘With the amalgamation of wearable systems equipped with inertial sensors, such as a gyroscope, and machine learning a therapy regimen can be objectively quantified, and then the initial phase and final phase of a one year therapy regimen can be distinguished through machine learning. In the context of rehabilitation of a hemiplegic ankle, a longitudinal therapy regimen incorporating stretching and then a series of repetitions for raising and lowering the foot of the hemiplegic ankle can be applied over the course of a year. Using a smartphone equipped with an application to function as a wearable and wireless gyroscope platform mounted to the dorsum of the foot by an armband, the initial phase and final phase of a one year longitudinally applied therapy regimen can be objectively quantified and recorded for subsequent machine learning. Considerable classification accuracy is attained to distinguish between the initial phase and final phase by a support vector machine for a one year longitudinally applied hemiplegic ankle therapy regimen based on the gyroscope signal data obtained by a smartphone functioning as a wearable and wireless inertial sensor system. .
基金funded by the National Natural Science Foundation of China(Grants No.41771444)Science and Technology Plan Project of Sichuan Province(Grants No.2021YJ0369).
文摘With its high mountains, deep valleys, and complex geological formations, Jiuzhaigou County has the typical characteristics of a disaster-prone mountainous region in southwestern China. On August 8, 2017, a strong Ms 7.0 earthquake occurred in this region, causing some of the mountains in the area to become loose and cracked. A survey and evaluation of landslides in this area can therefore help to reveal hazards and support effective measures for subsequent disaster management. However, different evaluation models can yield different spatial distributions of landslide susceptibility, so selecting the appropriate model and performing the optimal combination of parameters is the most effective way to improve the susceptibility evaluation. To construct an evaluation indicator system suitable for Jiuzhaigou County, we extracted 12 factors affecting the occurrence of landslides, including slope, elevation, and slope surface, and constructed training samples. At the core of the transformer model is a self-attention mechanism that enables any two of the features to be interlinked, after which feature extraction is performed via a feed-forward network (FFN). We exploited its encoding structure to transform it into a deep learning model better suited to landslide susceptibility evaluation. The results show that the transformer model has the highest accuracy (86.89%), followed by the random forest and support vector machine models (84.47% and 82.52%, respectively), while the logistic regression model achieves the lowest accuracy (79.61%). Accordingly, this deep learning model provides a new tool for more accurate zonation of landslide susceptibility in Jiuzhaigou County.
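The self-attention mechanism described above, which lets any two factor features interact, can be sketched in a few lines of scaled dot-product attention. The learned query/key/value projection matrices are omitted (identity projections are assumed), so this is a structural sketch rather than the paper's model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """Scaled dot-product self-attention with identity Q/K/V projections:
    every output row is a convex combination of all input rows, so any two
    feature vectors can be interlinked."""
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        w = softmax(scores)
        out.append([sum(wj * X[j][t] for j, wj in enumerate(w)) for t in range(d)])
    return out

# three hypothetical normalized landslide-factor vectors (e.g. slope, elevation)
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = self_attention(X)
```

Because the attention weights form a convex combination, every output value stays inside the range of the corresponding input column, which the assertions below check.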
文摘AIM: To evaluate the clinical application value of an artificial intelligence-assisted pathologic myopia (PM-AI) diagnosis model based on deep learning. METHODS: A total of 1156 readable color fundus photographs were collected and annotated based on the meta-analysis of pathologic myopia (Meta-PM) diagnostic criteria (2015). The PM-AI system and four eye doctors (retinal specialists 1 and 2, and ophthalmologists 1 and 2) independently evaluated the color fundus photographs to determine whether they were indicative of PM and whether myopic choroidal neovascularization (mCNV) was present. The performance of the PM-AI system and the eye doctors in identifying PM and mCNV was compared via the relevant statistical analyses. RESULTS: For PM identification, the sensitivity of the PM-AI system was 98.17%, which was comparable to specialist 1 (P=0.307) but higher than specialist 2 and ophthalmologists 1 and 2 (P<0.001). The specificity of the PM-AI system was 93.06%, which was lower than specialists 1 and 2 but higher than ophthalmologists 1 and 2. The PM-AI system showed a Kappa value of 0.904, while the Kappa values of specialists 1 and 2 and ophthalmologists 1 and 2 were 0.968, 0.916, 0.772, and 0.730, respectively. For mCNV identification, the PM-AI system showed a sensitivity of 84.06%, which was comparable to specialists 1 and 2 and ophthalmologist 2 (P>0.05) and higher than ophthalmologist 1. The specificity of the PM-AI system was 95.31%, which was lower than specialists 1 and 2 but higher than ophthalmologists 1 and 2. The PM-AI system gave a Kappa value of 0.624, while the Kappa values of specialists 1 and 2 and ophthalmologists 1 and 2 were 0.864, 0.732, 0.304, and 0.238, respectively. CONCLUSION: Compared with senior ophthalmologists, the PM-AI system based on deep learning exhibits excellent performance in PM and mCNV identification and can serve as an effective auxiliary diagnostic tool for the clinical screening of PM and mCNV.
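The sensitivity, specificity, and Cohen's Kappa statistics reported above can be reproduced for any binary screen from a confusion matrix; the toy labels below are illustrative, not the study's data.

```python
def confusion_stats(y_true, y_pred):
    """Sensitivity, specificity, and Cohen's kappa for a binary screen
    (1 = condition present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)                     # true positive rate
    spec = tn / (tn + fp)                     # true negative rate
    po = (tp + tn) / n                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# toy grader-vs-reference labels
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec, kappa = confusion_stats(y_true, y_pred)
```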
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and 
LT,marking a significant evolution in the field of prognostication.
文摘Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling 
disease.
文摘In software testing,the quality of test cases is crucial,but manual generation is time-consuming.Various automatic test case generation methods exist,requiring careful selection based on program features.Current evaluation methods compare a limited set of metrics,which does not support a larger number of metrics or consider the relative importance of each metric to the final assessment.To address this,we propose an evaluation tool,the Test Case Generation Evaluator(TCGE),based on the learning to rank(L2R)algorithm.Unlike previous approaches,our method comprehensively evaluates algorithms by considering multiple metrics,resulting in a more reasoned assessment.The main principle of the TCGE is the formation of feature vectors that are of concern by the tester.Through training,the feature vectors are sorted to generate a list,with the order of the methods on the list determined according to their effectiveness on the tested assembly.We implement TCGE using three L2R algorithms:Listnet,LambdaMART,and RFLambdaMART.Evaluation employs a dataset with features of classical test case generation algorithms and three metrics—Normalized Discounted Cumulative Gain(NDCG),Mean Average Precision(MAP),and Mean Reciprocal Rank(MRR).Results demonstrate the TCGE’s superior effectiveness in evaluating test case generation algorithms compared to other methods.Among the three L2R algorithms,RFLambdaMART proves the most effective,achieving an accuracy above 96.5%,surpassing LambdaMART by 2%and Listnet by 1.5%.Consequently,the TCGE framework exhibits significant application value in the evaluation of test case generation algorithms.
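Two of the ranking metrics the TCGE is evaluated with, NDCG and MRR, can be computed as below (MAP is analogous); the relevance lists are toy examples, not the paper's dataset.

```python
import math

def dcg(rels):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(ranked_rels):
    """Normalized DCG for one ranked list: DCG divided by the DCG of the
    ideal (descending-relevance) ordering."""
    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0

def mrr(ranked_lists):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant item
    in each ranked list."""
    total = 0.0
    for rels in ranked_lists:
        for i, r in enumerate(rels):
            if r > 0:
                total += 1.0 / (i + 1)
                break
    return total / len(ranked_lists)

perfect = [3, 2, 1, 0]   # already in ideal order -> NDCG = 1
```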
基金financially supported by the National Natural Science Foundation of China,No.81303115,81774042 (both to XC)the Pearl River S&T Nova Program of Guangzhou,No.201806010025 (to XC)+3 种基金the Specialty Program of Guangdong Province Hospital of Chinese Medicine of China,No.YN2018ZD07 (to XC)the Natural Science Foundatior of Guangdong Province of China,No.2023A1515012174 (to JL)the Science and Technology Program of Guangzhou of China,No.20210201 0268 (to XC),20210201 0339 (to JS)Guangdong Provincial Key Laboratory of Research on Emergency in TCM,Nos.2018-75,2019-140 (to JS)
文摘Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
基金funded by National Research Council of Thailand (NRCT):An Integrated Road Safety Innovations of Pedestrian Crossing for Mortality and Injuries Reduction Among All Groups of Road Users,Contract No.N33A650757supported by the Thailand Science Research and Innovation Fund+1 种基金the University of Phayao (Grant No.FF66-UoE001)King Mongkut’s University of Technology North Bangkok underContract No.KMUTNB-66-KNOW-05.
文摘In recent years, as intelligent transportation systems (ITS) such as autonomous driving and advanced driver-assistance systems have become more popular, there has been a rise in the need for different sources of traffic situation data. The classification of the road surface type (RST) is among the most essential of these situational data and can be utilized across the entirety of the ITS domain. Recently, the benefits of deep learning (DL) approaches for sensor-based RST classification have been demonstrated by automatic feature extraction without manual methods. The ability to extract important features is vital in making RST classification more accurate. This work investigates the most recent advances in DL algorithms for sensor-based RST classification and explores appropriate feature extraction models. We used different convolutional neural networks to better understand the functional architecture, and we constructed an enhanced DL model called SE-ResNet, which uses residual connections and squeeze-and-excitation modules to improve the classification performance. Comparative experiments with a publicly available benchmark dataset, the passive vehicular sensors dataset, have shown that SE-ResNet outperforms other state-of-the-art models. The proposed model achieved the highest accuracy of 98.41% and the highest F1-score of 98.19% when classifying surfaces into segments of dirt, cobblestone, or asphalt roads. Moreover, the proposed model significantly outperforms other DL networks (CNN, LSTM, and CNN-LSTM). The proposed SE-ResNet achieved classification accuracies of 98.98% for asphalt roads, 97.02% for cobblestone roads, and 99.56% for dirt roads.
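The squeeze-and-excitation module that SE-ResNet adds can be sketched in plain Python: global-average-pool each channel ("squeeze"), pass the channel summary through a small two-layer bottleneck with ReLU then sigmoid ("excite"), and rescale the channels. Real implementations operate on tensors with learned weights; the random weights and tiny feature maps here are placeholders.

```python
import math
import random

def se_block(feature_maps, w1, w2):
    """Squeeze-and-excitation over a list of HxW channel maps.
    Returns the reweighted maps and the per-channel gates."""
    # squeeze: one scalar per channel via global average pooling
    z = [sum(map(sum, fm)) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # excite: FC -> ReLU -> FC -> sigmoid (bottleneck of size C // r)
    h = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(wi * hi for wi, hi in zip(row, h))))
         for row in w2]
    # scale: multiply each channel map by its gate
    scaled = [[[s[c] * v for v in row] for row in fm]
              for c, fm in enumerate(feature_maps)]
    return scaled, s

random.seed(0)
C, H, W, r = 4, 2, 2, 2                    # channels, height, width, reduction
fmaps = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(C)]
w1 = [[random.uniform(-1, 1) for _ in range(C)] for _ in range(C // r)]
w2 = [[random.uniform(-1, 1) for _ in range(C // r)] for _ in range(C)]
out, gates = se_block(fmaps, w1, w2)
```

The sigmoid keeps every gate strictly inside (0, 1), so each channel is attenuated in proportion to its learned importance.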
基金supported by the Fundamental Research Funds for the Central Universities(CUC230B008).
文摘Recommending high-quality news to users is vital in improving user stickiness and news platforms’reputation.However,existing news quality evaluation methods,such as clickbait detection and popularity prediction,are challenging to reflect news quality comprehensively and concisely.This paper defines news quality as the ability of news articles to elicit clicks and comments from users,which represents whether the news article can attract widespread attention and discussion.Based on the above definition,this paper first presents a straightforward method to measure news quality based on the comments and clicks of news and defines four news quality indicators.Then,the dataset can be labeled automatically by the method.Next,this paper proposes a deep learning model that integrates explicit and implicit news information for news quality evaluation(EINQ).The explicit information includes the headline,source,and publishing time of the news,which attracts users to click.The implicit information refers to the news article’s content which attracts users to comment.The implicit and explicit information affect users’click and comment behavior differently.For modeling explicit information,the typical convolution neural network(CNN)is used to get news headline semantic representation.For modeling implicit information,a hierarchical attention network(HAN)is exploited to extract news content semantic representation while using the latent Dirichlet allocation(LDA)model to get the subject distribution of news as a semantic supplement.Considering the different roles of explicit and implicit information for quality evaluation,the EINQ exploits an attention layer to fuse them dynamically.The proposed model yields the Accuracy of 82.31%and the F-Score of 80.51%on the real-world dataset from Toutiao,which shows the effectiveness of explicit and implicit information dynamic fusion and demonstrates performance improvements over a variety of baseline models in news quality evaluation.This 
work provides empirical evidence for explicit and implicit factors in news quality evaluation and a new idea for news quality evaluation.
文摘The visions of Industry 4.0 and 5.0 have reinforced the industrial environment.They have also made artificial intelligence incorporated as a major facilitator.Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure,and thus timely maintenance can ensure safe operations.Transfer learning is a promising solution that can enhance the machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model,which typically involves two datasets.In response to the availability of multiple datasets,this paper proposes using selective and adaptive incremental transfer learning(SA-ITL),which fuses three algorithms,namely,the hybrid selective algorithm,the transferability enhancement algorithm,and the incremental transfer learning algorithm.It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer.The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time.The proposed algorithm is evaluated and analyzed using ten benchmark datasets.Compared with other algorithms from existing works,SA-ITL improves the accuracy of all datasets.Ablation studies present the accuracy enhancements of the SA-ITL,including the hybrid selective algorithm(1.22%-3.82%),transferability enhancement algorithm(1.91%-4.15%),and incremental transfer learning algorithm(0.605%-2.68%).These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
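The selection-and-ordering step of SA-ITL can be illustrated with a simple proxy: score each source dataset's transferability to the target, filter out low-scoring sources (to avoid negative transfer), and order the rest. The negative-mean-distance score and the threshold are illustrative assumptions; the paper's hybrid selective algorithm is more elaborate.

```python
import math

def transferability(src_feats, tgt_feats):
    """Proxy transferability score: negative Euclidean distance between the
    mean feature vectors of the source and target datasets."""
    def mean(fs):
        return [sum(col) / len(fs) for col in zip(*fs)]
    return -math.dist(mean(src_feats), mean(tgt_feats))

def order_sources(sources, target, min_score=-1.0):
    """Keep sources scoring above a threshold (negative-transfer guard) and
    order the survivors from most to least transferable."""
    scored = [(name, transferability(fs, target)) for name, fs in sources.items()]
    kept = [(n, s) for n, s in scored if s >= min_score]
    return [n for n, _ in sorted(kept, key=lambda t: -t[1])]

target = [[0.0, 0.0], [0.2, 0.2]]          # mean feature (0.1, 0.1)
sources = {
    "near": [[0.1, 0.1], [0.3, 0.3]],      # close to the target domain
    "mid":  [[0.5, 0.5], [0.7, 0.7]],      # further away
    "far":  [[3.0, 3.0], [3.0, 3.0]],      # very distant -> filtered out
}
plan = order_sources(sources, target)
```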
基金This work was financially supported by the National Natural Science Foundation of China(NSFC)Basic Research Program on Deep Petroleum Resource Accumulation and Key Engineering Technologies(Grant No.U19B6003-04-03-03)State Key Laboratory of Shale Oil and Gas Enrichment Mechanisms and Effective Development Projects(No.20-YYGZ-KF-GC-11)+1 种基金the Strategic Priority Research program of the Chinese Academy of Sciences(Grant No.XDA14010101)the National Science and Technology Major Project(Grant No.2017ZX05005005-005 and 2016ZX05014002-001).
文摘Fracture porosity is one of the key parameters for characterizing fractured reservoirs.However,fracture porosity calculation is difficult with conventional logging data due to severe anisotropy of the reservoirs.To deal with the problem,the equivalent macroscopic anisotropic formation model based on dual laterolog(DLL)data is adopted to cyclically assign such parameters as bedrock resistivity(RB),fluid resistivity in fractures(RFL),fracture dip angle(FDA)and fracture thickness as well as fracture spacing,and to produce massive data for formation modeling.A large number of training data obtained through three dimensional finite element forward modeling and the functional relationship between DLL responses and fracture parameters that are trained and summarized by deep neural network,are combined to establish a new fast forward model for calculating DLL responses in fractured formations.A new fracture porosity inversion model for fractured reservoirs based on gradient optimization inversion algorithm combined with multi-initial inversion strategy is then proposed.While running the model,formation is divided into eight intervals according to bedrock resistivity and fracture dip angle from 0°to 90°is divided every 0.5°to improve the operation speed and efficiency.The results of numerical verification show that when bedrock resistivity is greater than 1000Ωm,the mean absolute error(MAE)of fracture porosity inversion is 0.001658%for horizontal fractures,0.00413%for intermediate fractures and 0.0027%for quasi-vertical fractures.When bedrock resistivity is between 100Ωm and 1000Ωm,MAE of fracture porosity inversion is 0.003%for horizontal fractures,0.0034%for intermediate fractures and 0.00348%for quasi-vertical fractures.Fracture parameters determined by the fracture porosity inversion model with actual data are in good agreement with the results of micro resistivity imaging logging.
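The multi-initial gradient-optimization strategy above can be illustrated on a toy scalar problem: run gradient descent on the squared misfit from several starting porosities and keep the best fit. The cubic forward model, learning rate, and initial values here are placeholders; in the paper the forward model is a deep-network surrogate of 3D finite-element DLL responses.

```python
def forward(phi):
    """Toy nonlinear forward model mapping fracture porosity to a log
    response (a stand-in for the DLL forward model)."""
    return 10.0 * phi + 5.0 * phi ** 3

def invert(observed, inits, lr=1e-3, steps=2000):
    """Multi-initial gradient-descent inversion: descend the squared misfit
    from each starting point and return the porosity with the best fit."""
    best = None
    for phi in inits:
        for _ in range(steps):
            r = forward(phi) - observed
            grad = 2.0 * r * (10.0 + 15.0 * phi ** 2)   # chain rule on r**2
            phi -= lr * grad
        misfit = (forward(phi) - observed) ** 2
        if best is None or misfit < best[1]:
            best = (phi, misfit)
    return best[0]

true_phi = 0.3
obs = forward(true_phi)                     # synthetic noise-free observation
est = invert(obs, inits=[0.0, 0.5, 1.0])    # multi-initial strategy
```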
基金authors are thankful to the Deanship of Scientific Research at Najran University for funding this work,under the Research Groups Funding Program Grant Code(NU/RG/SERC/12/27).
文摘Social media (SM)-based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for the early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreak detection and the role of ML and DL in enhancing their performance. Every year, a large amount of data related to epidemic outbreaks, particularly Twitter data, is generated by SM. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with their success in many other application domains, both ML and DL have also become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review offers suggestions, ideas, and proposals, while highlighting the ongoing challenges in the field of early outbreak detection that still need to be addressed.
文摘The evaluation of disease severity through endoscopy is pivotal in managing patients with ulcerative colitis,a condition with significant clinical implications.However,endoscopic assessment is susceptible to inherent variations,both within and between observers,compromising the reliability of individual evaluations.This study addresses this challenge by harnessing deep learning to develop a robust model capable of discerning discrete levels of endoscopic disease severity.To initiate this endeavor,a multi-faceted approach is embarked upon.The dataset is meticulously preprocessed,enhancing the quality and discriminative features of the images through contrast limited adaptive histogram equalization(CLAHE).A diverse array of data augmentation techniques,encompassing various geometric transformations,is leveraged to fortify the dataset’s diversity and facilitate effective feature extraction.A fundamental aspect of the approach involves the strategic incorporation of transfer learning principles,harnessing a modified ResNet-50 architecture.This augmentation,informed by domain expertise,contributed significantly to enhancing the model’s classification performance.The outcome of this research endeavor yielded a highly promising model,demonstrating an accuracy rate of 86.85%,coupled with a recall rate of 82.11%and a precision rate of 89.23%.
基金the National Key Research and Development Program of China(2021ZD0112302)the National Natural Science Foundation of China(62222301,61890930-5,62021003)the Beijing Natural Science Foundation(JQ19013).
文摘This paper is concerned with a novel integrated multi-step heuristic dynamic programming(MsHDP)algorithm for solving optimal control problems.It is shown that,initialized by the zero cost function,MsHDP can converge to the optimal solution of the Hamilton-Jacobi-Bellman(HJB)equation.Then,the stability of the system is analyzed using control policies generated by MsHDP.Also,a general stability criterion is designed to determine the admissibility of the current control policy.That is,the criterion is applicable not only to traditional value iteration and policy iteration but also to MsHDP.Further,based on the convergence and the stability criterion,the integrated MsHDP algorithm using immature control policies is developed to accelerate learning efficiency greatly.Besides,actor-critic is utilized to implement the integrated MsHDP scheme,where neural networks are used to evaluate and improve the iterative policy as the parameter architecture.Finally,two simulation examples are given to demonstrate that the learning effectiveness of the integrated MsHDP scheme surpasses those of other fixed or integrated methods.
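The multi-step idea, applying the greedy Bellman backup several times per update starting from a zero value function, can be sketched on a small tabular MDP. The paper's MsHDP minimizes cost with actor-critic neural networks; this reward-maximizing tabular toy (an assumption for illustration) mirrors only the multi-step backup and the zero initialization.

```python
def multistep_value_iteration(P, R, gamma=0.9, n_step=3, sweeps=200):
    """Multi-step value iteration on a finite MDP: each sweep applies the
    greedy Bellman operator n_step times before committing the update.
    V starts at zero, as in the paper's zero-cost-function initialization."""
    n = len(R)
    V = [0.0] * n
    for _ in range(sweeps):
        W = V[:]
        for _ in range(n_step):
            # greedy backup: best action over (immediate reward + discounted value)
            W = [max(R[s][a] + gamma * W[P[s][a]] for a in range(len(R[s])))
                 for s in range(n)]
        V = W
    return V

# 2-state toy problem: action 0 stays in place, action 1 switches state
P = [[0, 1], [1, 0]]            # P[s][a] = next state
R = [[0.0, 1.0], [2.0, 0.0]]    # R[s][a] = reward
V = multistep_value_iteration(P, R)
```

The optimal values are V = [19, 20]: state 1 earns 2 forever (2 / (1 - 0.9) = 20), and state 0 switches once for 1 + 0.9 * 20 = 19; the backups contract to this fixed point regardless of n_step.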
基金the R&D&I,Spain grants PID2020-119478GB-I00 and,PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033.N.Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by“ESF Investing in your future”Spain.J.Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR.J.Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial(CDTI)through the AI4ES projectthe Department of Education of the Basque Government(consolidated research group MATHMODE,IT1456-22)。
文摘When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ML models to be trained on local devices without any need for centralized data transfer,thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties.This paradigm has gained momentum in the last few years,spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources.By virtue of FL,models can be learned from all such distributed data sources while preserving data privacy.The aim of this paper is to provide a practical tutorial on FL,including a short methodology and a systematic analysis of existing software frameworks.Furthermore,our tutorial provides exemplary cases of study from three complementary perspectives:i)Foundations of FL,describing the main components of FL,from key elements to FL categories;ii)Implementation guidelines and exemplary cases of study,by systematically examining the functionalities provided by existing software frameworks for FL deployment,devising a methodology to design a FL scenario,and providing exemplary cases of study with source code for different ML approaches;and iii)Trends,shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape.The ultimate purpose of this work is to establish itself as a referential work for researchers,developers,and data scientists willing to explore the capabilities of FL in practical applications.
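A minimal sketch of the central FL aggregation step helps make the paradigm concrete: with federated averaging (FedAvg, one of the canonical FL algorithms), the server combines locally trained parameter vectors weighted by each client's number of samples, so raw data never leaves the devices. The client models and sizes below are toy values.

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client parameter vectors into a global
    model, weighting each client by its local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# three clients whose models were trained locally on 10, 30, and 60 samples
clients = [[1.0, 2.0], [2.0, 4.0], [4.0, 8.0]]
sizes = [10, 30, 60]
global_model = fedavg(clients, sizes)
```

In a full FL round this aggregation alternates with local training: the server broadcasts `global_model`, clients update it on their private data, and only the updated parameters are sent back.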