Journal Articles
23,161 articles found
Explainable Heart Disease Prediction Using Ensemble-Quantum Machine Learning Approach
1
Authors: Ghada Abdulsalam, Souham Meshoul, Hadil Shaiba. Intelligent Automation & Soft Computing (SCIE), 2023, No. 4, pp. 761-779 (19 pages)
Nowadays, quantum machine learning is attracting great interest in a wide range of fields due to its potential superior performance and capabilities. The massive increase in computational capacity and speed of quantum computers can lead to a quantum leap in the healthcare field. Heart disease seriously threatens human health since it is the leading cause of death worldwide. Quantum machine learning methods can propose effective solutions to predict heart disease and aid in early diagnosis. In this study, an ensemble machine learning model based on quantum machine learning classifiers is proposed to predict the risk of heart disease. The proposed model is a bagging ensemble learning model where a quantum support vector classifier was used as a base classifier. Furthermore, in order to make the model's outcomes more explainable, the importance of every single feature in the prediction is computed and visualized using the SHapley Additive exPlanations (SHAP) framework. In the experimental study, other stand-alone quantum classifiers, namely, Quantum Support Vector Classifier (QSVC), Quantum Neural Network (QNN), and Variational Quantum Classifier (VQC), are applied and compared with classical machine learning classifiers such as Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results on the Cleveland dataset reveal the superiority of QSVC compared to the others, which explains its use in the proposed bagging model. The Bagging-QSVC model outperforms all aforementioned classifiers with an accuracy of 90.16% while showing great competitiveness compared to some state-of-the-art models using the same dataset. The results of the study indicate that quantum machine learning classifiers perform better than classical machine learning classifiers in predicting heart disease. In addition, the study reveals that the bagging ensemble learning technique is effective in improving the prediction accuracy of quantum classifiers.
Keywords: machine learning; ensemble learning; quantum machine learning; explainable machine learning; heart disease prediction
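The Bagging-QSVC idea above can be sketched classically. The quantum base classifier (QSVC) requires Qiskit, so a 1-nearest-neighbour rule stands in for it here, and "pasting"-style random subsets replace bootstrap draws so the sketch is deterministic; only the ensemble logic (resample, train one learner per resample, majority-vote) mirrors the paper.

```python
import random

# Sketch of a bagging-style ensemble: majority vote over base learners, each
# trained on a random subset of the data. The 1-NN base learner and the toy
# data below are hypothetical stand-ins, not the paper's QSVC or the
# Cleveland dataset.
def nn1_predict(train, x):
    # Label of the closest training point (squared Euclidean distance).
    nearest = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def bagging_fit(data, n_estimators=7, subset_size=4, seed=0):
    rng = random.Random(seed)
    # Pasting-style resampling: each learner sees a random subset of the data.
    return [rng.sample(data, subset_size) for _ in range(n_estimators)]

def bagging_predict(models, x):
    votes = [nn1_predict(m, x) for m in models]
    return max(set(votes), key=votes.count)  # majority vote

# Toy two-feature, two-class data.
data = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((0.2, 0.0), 0),
        ((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.1, 0.9), 1)]
models = bagging_fit(data)
print(bagging_predict(models, (0.05, 0.1)))  # 0
```

Swapping the base learner for a real QSVC would only change `nn1_predict`; the resampling and voting stay the same.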
Pancreatic Cancer Data Classification with Quantum Machine Learning
2
Authors: Amit Saxena, Smita Saxena. Journal of Quantum Computing, 2023, No. 1, pp. 1-13 (13 pages)
Quantum computing is a promising new approach to tackle complex real-world computational problems by harnessing the power of quantum mechanics principles. The inherent parallelism and exponential computational power of quantum systems hold the potential to outpace classical counterparts in solving complex optimization problems, which are pervasive in machine learning. The Quantum Support Vector Machine (QSVM) is a quantum machine learning algorithm inspired by the classical Support Vector Machine (SVM) that exploits quantum parallelism to efficiently classify data points in high-dimensional feature spaces. We provide a comprehensive overview of the underlying principles of QSVM, elucidating how different quantum feature maps and quantum kernels enable the manipulation of quantum states to perform classification tasks. Through a comparative analysis, we reveal the quantum advantage achieved by these algorithms in terms of speedup and solution quality. As a case study, we explored the potential of quantum paradigms in the context of a real-world problem: classifying pancreatic cancer biomarker data. The Support Vector Classifier (SVC) algorithm was employed for the classical approach, while the QSVM algorithm was executed on a quantum simulator provided by the Qiskit quantum computing framework. The classical approach as well as the quantum-based techniques reported similar accuracy. This uniformity suggests that these methods effectively captured similar underlying patterns in the dataset. Remarkably, quantum implementations exhibited substantially reduced execution times, demonstrating the potential of quantum approaches in enhancing classification efficiency. This affirms the growing significance of quantum computing as a transformative tool for augmenting machine learning paradigms and also underscores the potency of quantum execution for computational acceleration.
Keywords: quantum computing; quantum machine learning; quantum support vector machine; multiclass classification
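The fidelity-kernel mechanism behind QSVM can be simulated classically for a single qubit. This sketch assumes a simple RY angle-encoding feature map (an illustrative stand-in, not the Qiskit feature maps the paper uses) and computes k(x, y) = |⟨φ(x)|φ(y)⟩|², the Gram matrix a kernel SVM would consume.

```python
import math

# One-qubit fidelity ("quantum") kernel, simulated classically.
def feature_map(x):
    # RY(x) applied to |0> gives real amplitudes [cos(x/2), sin(x/2)].
    return [math.cos(x / 2), math.sin(x / 2)]

def fidelity_kernel(x, y):
    a, b = feature_map(x), feature_map(y)
    inner = sum(u * v for u, v in zip(a, b))
    return inner ** 2  # |<phi(x)|phi(y)>|^2

def gram_matrix(xs):
    # The kernel matrix a classical SVM could consume directly
    # (e.g. an SVC with a precomputed kernel).
    return [[fidelity_kernel(x, y) for y in xs] for x in xs]

K = gram_matrix([0.0, 1.0, 2.0])
print(round(K[0][1], 4))  # 0.7702, i.e. cos^2(0.5)
```

For this map the kernel collapses to cos²((x−y)/2); richer, entangling feature maps are what make the quantum kernel hard to evaluate classically.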
Diabetes Type 2: Poincaré Data Preprocessing for Quantum Machine Learning
3
Authors: Daniel Sierra-Sosa, Juan D. Arcila-Moreno, Begonya Garcia-Zapirain, Adel Elmaghraby. Computers, Materials & Continua (SCIE, EI), 2021, No. 5, pp. 1849-1861 (13 pages)
Quantum Machine Learning (QML) techniques have recently been attracting massive interest. However, reported applications usually employ synthetic or well-known datasets. One of these techniques, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development seems promising. Albeit being largely studied, VQC implementations for "real-world" datasets are still challenging on Noisy Intermediate Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances the prediction rates when applying VQC techniques, improving the feasibility of solving classification problems using NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved. Also, a representation based on the Stokes parameters in the Poincaré sphere is possible for visualizing the data. Our results show that by using the proposed techniques we improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. We used the implemented version of VQC available in IBM's framework Qiskit, and obtained an accuracy of 70% and 72% with two and three qubits, respectively.
Keywords: quantum machine learning; data preprocessing; Stokes parameters; Poincaré sphere
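The Stokes-parameter mapping above can be illustrated in a few lines: two scaled features are treated as the amplitudes of a polarization state and a third value as their relative phase, which places each sample at a point (S1, S2, S3) on the Poincaré sphere. This is a hedged sketch of the general idea; the paper's full pipeline (feature selection, geometric transformations) is richer.

```python
import math

# Map two features plus a phase onto the Poincaré sphere via Stokes parameters.
def to_poincare(f1, f2, phase):
    norm = math.hypot(f1, f2) or 1.0
    a, b = f1 / norm, f2 / norm            # normalized amplitudes
    s1 = a * a - b * b
    s2 = 2 * a * b * math.cos(phase)
    s3 = 2 * a * b * math.sin(phase)
    return (s1, s2, s3)

s = to_poincare(0.6, 0.8, math.pi / 4)
print(round(sum(c * c for c in s), 6))  # 1.0 — the point lies on the unit sphere
```

Because (a² − b²)² + 4a²b² = (a² + b²)² = 1, every mapped sample sits exactly on the sphere, which is what makes the representation natural for single-qubit state preparation and visualization.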
Learning Unitary Transformation by Quantum Machine Learning Model
4
Authors: Yi-Ming Huang, Xiao-Yu Li, Yi-Xuan Zhu, Hang Lei, Qing-Sheng Zhu, Shan Yang. Computers, Materials & Continua (SCIE, EI), 2021, No. 7, pp. 789-803 (15 pages)
Quantum machine learning (QML) is a rapidly rising research field that incorporates ideas from quantum computing and machine learning to develop emerging tools for scientific research and improving data processing. How to efficiently control or manipulate the quantum system is a fundamental and vexing problem in quantum computing. It can be described as learning or approximating a unitary operator. Following the success of the hybrid-based quantum machine learning models proposed in recent years, we investigate applying techniques from QML to tackle this problem. Based on the Choi–Jamiołkowski isomorphism in quantum computing, we transfer the original problem of learning a unitary operator to a min–max optimization problem, which can also be viewed as a quantum generative adversarial network. Besides, we select the spectral norm between the target and generated unitary operators as the regularization term in the loss function. Inspired by the hybrid quantum-classical framework widely used in quantum machine learning, we employ the variational quantum circuit and gradient-descent-based optimizers to solve the min–max optimization problem. In our numerical experiments, the results imply that our proposed method can successfully approximate the desired unitary operator and dramatically reduce the number of quantum gates relative to the traditional approach. The average fidelity between the states produced by applying the target and generated unitaries to random input states is around 0.997.
Keywords: machine learning; quantum computing; unitary transformation
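The evaluation metric quoted above (average state fidelity) can be reproduced in miniature. The sketch below compares a target single-qubit RY rotation with a slightly perturbed "generated" one on random real-amplitude input states; the angles are hypothetical, and the paper's generator is a trained variational circuit rather than a hand-picked rotation.

```python
import math
import random

# Average fidelity between states produced by a target and a "learned" unitary.
def ry(theta):
    # Single-qubit RY rotation as a real 2x2 matrix.
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(u, state):
    return [u[0][0] * state[0] + u[0][1] * state[1],
            u[1][0] * state[0] + u[1][1] * state[1]]

def fidelity(a, b):
    inner = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(inner) ** 2

def avg_fidelity(u_target, u_gen, n=200, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.uniform(0, math.pi)
        state = [complex(math.cos(t / 2)), complex(math.sin(t / 2))]
        total += fidelity(apply(u_target, state), apply(u_gen, state))
    return total / n

print(round(avg_fidelity(ry(1.00), ry(1.02)), 4))  # 0.9999
```

For RY rotations the fidelity is actually state-independent (cos²(Δθ/2)); with general multi-qubit unitaries the average over random input states is what matters.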
Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited by 1)
5
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, No. 1, pp. 164-188 (25 pages)
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models.
AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems.
METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws.
RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI.
CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: liver transplantation; machine learning models; prognostication; allograft allocation; artificial intelligence
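The AUROC values this review compares can be computed without any ML library: AUROC equals the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting half). The scores below are illustrative values, not data from any included study.

```python
# AUROC via the Mann-Whitney U statistic: fraction of positive/negative
# pairs ranked correctly by the model's scores.
def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(round(auroc(labels, scores), 3))  # 0.889 — 8 of 9 pairs ranked correctly
```

An AUROC of 0.5 corresponds to random scoring and 1.0 to perfect separation, which is why the review treats values from 0.6 up to 1 as satisfactory to excellent.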
Machine learning applications in stroke medicine: advancements, challenges, and future prospectives
6
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, No. 4, pp. 769-773 (5 pages)
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize all the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. At the same time, another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease; deep learning; machine learning; reinforcement learning; stroke; stroke therapy; supervised learning; unsupervised learning
Machine learning for predicting the outcome of terminal ballistics events
7
Authors: Shannon Ryan, Neeraj Mohan Sushma, Arun Kumar AV, Julian Berk, Tahrima Hashem, Santu Rana, Svetha Venkatesh. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 14-26 (13 pages)
Machine learning (ML) is well suited for the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis, such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable from any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
Keywords: machine learning; artificial intelligence; physics-informed machine learning; terminal ballistics; armour
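One of the strategies named above, reverting to a physics-based model when experimental data runs out, can be shown in a few lines. Both models below are deliberately crude stand-ins: a linear fit plays the data-driven model and an assumed v² penetration law plays the physics model; neither is the paper's XGBoost/GP machinery.

```python
# Hybrid predictor: data-driven fit inside the training envelope, physics-based
# law outside it. The 0.002*v^2 law and the velocities are hypothetical.
def physics_depth(v):
    return 0.002 * v ** 2  # assumed quadratic penetration scaling (mm)

def fit_linear(xs, ys):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return lambda v, a=my - b * mx, b=b: a + b * v

train_v = [300.0, 400.0, 500.0]
train_d = [physics_depth(v) for v in train_v]  # pretend these are experiments
ml_model = fit_linear(train_v, train_d)

def hybrid(v, lo=min(train_v), hi=max(train_v)):
    # Interpolate with the fitted model; extrapolate with the physics law.
    return ml_model(v) if lo <= v <= hi else physics_depth(v)

print(round(hybrid(450.0), 1))                 # 413.3 — data-driven, in range
print(hybrid(900.0) == physics_depth(900.0))   # True — physics fallback
```

The paper's enforced monotonicity and physics-informed loss are softer versions of the same idea: the further a query sits from the data, the more the physics model dominates.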
Advancements in machine learning for material design and process optimization in the field of additive manufacturing
8
Authors: Hao-ran Zhou, Hao Yang, Huai-qian Li, Ying-chun Ma, Sen Yu, Jian Shi, Jing-chang Cheng, Peng Gao, Bo Yu, Zhi-quan Miao, Yan-peng Wei. China Foundry (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 101-115 (15 pages)
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of the additive manufacturing process is constrained by issues like unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to deeply engage in the development of the additive manufacturing process, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. Firstly, it introduces the background and significance of machine learning-assisted design in the additive manufacturing process. It then further delves into the application of machine learning in additive manufacturing, focusing on model design and process guidance. Finally, it concludes by summarizing and forecasting the development trends of machine learning technology in the field of additive manufacturing.
Keywords: additive manufacturing; machine learning; material design; process optimization; intersection of disciplines; embedded machine learning
Prediction of lime utilization ratio of dephosphorization in BOF steelmaking based on online sequential extreme learning machine with forgetting mechanism
9
Authors: Runhao Zhang, Jian Yang, Han Sun, Wenkui Yang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 508-517 (10 pages)
The machine learning models of multiple linear regression (MLR), support vector regression (SVR), and extreme learning machine (ELM), together with the proposed ELM models of online sequential ELM (OS-ELM) and OS-ELM with forgetting mechanism (FOS-ELM), are applied in the prediction of the lime utilization ratio of dephosphorization in the basic oxygen furnace steelmaking process. The ELM model exhibits the best performance compared with the MLR and SVR models. OS-ELM and FOS-ELM are applied for sequential learning and model updating. The optimal number of samples in the validity term of the FOS-ELM model is determined to be 1500, with the smallest population mean absolute relative error (MARE) value of 0.058226 for the population. The variable importance analysis reveals lime weight, initial P content, and hot metal weight as the most important variables for the lime utilization ratio. The lime utilization ratio increases with the decrease in lime weight and the increases in the initial P content and hot metal weight. A prediction system based on FOS-ELM was applied in actual industrial production for one month. The hit ratios of the predicted lime utilization ratio in the error ranges of ±1%, ±3%, and ±5% are 61.16%, 90.63%, and 94.11%, respectively. The coefficient of determination, MARE, and root mean square error are 0.8670, 0.06823, and 1.4265, respectively. The system exhibits desirable performance for applications in actual industrial production.
Keywords: basic oxygen furnace steelmaking; machine learning; lime utilization ratio; dephosphorization; online sequential extreme learning machine; forgetting mechanism
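The "validity term" above acts as a forgetting window: only the most recent samples influence the fit, so drifted process conditions stop contaminating predictions. The real FOS-ELM updates a random-hidden-layer least-squares solution recursively; in this sketch a windowed one-feature linear fit stands in for it, which is enough to show stale samples being discarded.

```python
from collections import deque

# Least-squares fit of y = a + b*x on whatever is currently in the window.
def fit(window):
    xs, ys = zip(*window)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return my - b * mx, b

class ForgettingRegressor:
    """Sequential learner whose memory is capped at `validity` samples."""

    def __init__(self, validity=3):
        self.window = deque(maxlen=validity)  # old samples fall out automatically

    def update(self, x, y):
        self.window.append((x, y))

    def predict(self, x):
        a, b = fit(self.window)
        return a + b * x

model = ForgettingRegressor(validity=3)
# The process drifts after the third heat; only the last three samples remain.
for x, y in [(1, 10.0), (2, 20.0), (3, 30.0), (4, 8.0), (5, 10.0), (6, 12.0)]:
    model.update(x, y)
print(round(model.predict(7), 1))  # 14.0 — the fit tracks the new regime
```

With no forgetting (a window covering all six samples) the prediction would be dragged toward the obsolete early heats; the paper's tuning of the validity term to 1500 samples balances exactly this trade-off.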
Social Media-Based Surveillance Systems for Health Informatics Using Machine and Deep Learning Techniques:A Comprehensive Review and Open Challenges
10
Authors: Samina Amin, Muhammad Ali Zeb, Hani Alshahrani, Mohammed Hamdi, Mohammad Alsulami, Asadullah Shaikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1167-1202 (36 pages)
Social media (SM) based surveillance systems, combined with machine learning (ML) and deep learning (DL) techniques, have shown potential for early detection of epidemic outbreaks. This review discusses the current state of SM-based surveillance methods for early epidemic outbreaks and the role of ML and DL in enhancing their performance. Every year, a large amount of data related to epidemic outbreaks, particularly Twitter data, is generated by SM. This paper outlines the theme of SM analysis for tracking health-related issues and detecting epidemic outbreaks in SM, along with the ML and DL techniques that have been configured for the detection of epidemic outbreaks. DL has emerged as a promising ML technique that adapts multiple layers of representations or features of the data and yields state-of-the-art extrapolation results. In recent years, along with the success of ML and DL in many other application domains, both ML and DL have also become popular in SM analysis. This paper aims to provide an overview of epidemic outbreaks in SM and then outlines a comprehensive analysis of ML and DL approaches and their existing applications in SM analysis. Finally, this review serves the purpose of offering suggestions, ideas, and proposals, along with highlighting the ongoing challenges in the field of early outbreak detection that still need to be addressed.
Keywords: social media; epidemic; machine learning; deep learning; health informatics; pandemic
Selective and Adaptive Incremental Transfer Learning with Multiple Datasets for Machine Fault Diagnosis
11
Authors: Kwok Tai Chui, Brij B. Gupta, Varsha Arya, Miguel Torres-Ruiz. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1363-1379 (17 pages)
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment. They have also made artificial intelligence incorporated as a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, and thus timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance the machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes using selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms, namely, the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy of all datasets. Ablation studies present the accuracy enhancements of the SA-ITL, including the hybrid selective algorithm (1.22%-3.82%), transferability enhancement algorithm (1.91%-4.15%), and incremental transfer learning algorithm (0.605%-2.68%). These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
Keywords: deep learning; incremental learning; machine fault diagnosis; negative transfer; transfer learning
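The "selective" step above (ordering candidate source datasets and skipping those likely to cause negative transfer) can be sketched with a cheap transferability proxy. The feature-mean distance used here is an assumption for illustration, not the paper's actual scoring function, and the dataset names are made up.

```python
import math

# Rank candidate source datasets by similarity to the target domain and drop
# those below a threshold (a crude guard against negative transfer).
def feature_mean(dataset):
    dim = len(dataset[0])
    return [sum(row[i] for row in dataset) / len(dataset) for i in range(dim)]

def transferability(source, target):
    d = math.dist(feature_mean(source), feature_mean(target))
    return 1.0 / (1.0 + d)  # 1.0 for identical means, -> 0 as domains diverge

def select_and_order(sources, target, threshold=0.5):
    scored = [(name, transferability(data, target)) for name, data in sources.items()]
    kept = [(n, s) for n, s in scored if s >= threshold]
    return [n for n, _ in sorted(kept, key=lambda t: -t[1])]  # best first

target = [[0.0, 0.0], [0.2, 0.1]]
sources = {
    "rig_A": [[0.1, 0.1], [0.2, 0.0]],  # similar operating conditions
    "rig_B": [[5.0, 5.0], [6.0, 6.0]],  # far away: likely negative transfer
}
print(select_and_order(sources, target))  # ['rig_A']
```

The surviving, ordered list is what an incremental transfer loop would then consume, fine-tuning on each source in turn before the target data.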
Machine learning for membrane design and discovery
12
Authors: Haoyu Yin, Muzi Xu, Zhiyao Luo, Xiaotian Bi, Jiali Li, Sui Zhang, Xiaonan Wang. Green Energy & Environment (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 54-70 (17 pages)
Membrane technologies are becoming increasingly versatile and helpful today for sustainable development. Machine Learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development norm of new materials for energy and environment. This review provides an overview and perspectives on ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided with the current bottlenecks and potential solutions. Through an applications-based perspective of AI-aided membrane design and discovery, we further show how ML strategies are applied to the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction), in various membrane systems, ranging from gas, liquid, and fuel cell separation membranes. Furthermore, the best practices of integrating ML methods and specific application targets in membrane design and discovery are presented with an ideal paradigm proposed. The challenges to be addressed and prospects of AI applications in membrane discovery are also highlighted in the end.
Keywords: machine learning; membranes; AI for membrane; data-driven design
AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
13
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2137-2158 (22 pages)
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This glaring research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI using a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification, utilizing deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Keywords: automated machine learning; neural networks; deep learning; arrhythmias
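The weight-aware amalgamation step described above reduces, in its simplest form, to a weighted average of the models' class-probability vectors. All numbers below are made up for illustration; the paper derives its weights from each model's measured performance.

```python
# Weighted soft-voting ensemble: combine per-model class probabilities using
# per-model weights (e.g. validation accuracies), then pick the argmax class.
def weighted_ensemble(prob_vectors, weights):
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    combined = [
        sum(w * p[i] for w, p in zip(weights, prob_vectors)) / total
        for i in range(n_classes)
    ]
    return combined.index(max(combined)), combined

# Three hypothetical models scoring classes [normal, arrhythmia]:
probs = [[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]]
weights = [0.97, 0.90, 0.85]  # assumed per-model validation accuracies
label, combined = weighted_ensemble(probs, weights)
print(label)  # 1 — the weighted evidence favours "arrhythmia"
```

Soft voting like this preserves each model's confidence, so a strongly confident minority model can outvote two lukewarm ones, which plain majority voting cannot express.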
Enhanced prediction of anisotropic deformation behavior using machine learning with data augmentation
14
Authors: Sujeong Byun, Jinyeong Yu, Seho Cheon, Seong Ho Lee, Sung Hyuk Park, Taekyung Lee. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 186-196 (11 pages)
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
Keywords: plastic anisotropy; compression; annealing; machine learning; data augmentation
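The augmentation idea in miniature: synthesize additional flow curves from a measured one so a sequence model has more to train on. The paper trains a GAN for this; the scaling-plus-jitter scheme below is a loud simplification that only shows how the augmented training set is assembled, and the stress values are hypothetical.

```python
import random

# Generate synthetic flow curves from a base curve by global scaling
# (strength variation) plus pointwise Gaussian jitter. A GAN would replace
# augment_curve() while the surrounding bookkeeping stays the same.
def augment_curve(curve, n_new=3, seed=42):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        scale = rng.uniform(0.95, 1.05)  # global strength variation
        noisy = [s * scale + rng.gauss(0.0, 0.5) for s in curve]
        synthetic.append(noisy)
    return synthetic

base_stress = [120.0, 150.0, 170.0, 185.0, 195.0]  # hypothetical MPa values
dataset = [base_stress] + augment_curve(base_stress)
print(len(dataset))  # 4 curves: 1 measured + 3 synthetic
```

A GAN improves on this by learning which deviations are physically plausible for the alloy instead of injecting arbitrary noise, which is why it helps most in the limited-dataset and extrapolation scenarios the study evaluates.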
Machine learning in metal-ion battery research: Advancing material prediction, characterization, and status evaluation
15
Authors: Tong Yu, Chunyang Wang, Huicong Yang, Feng Li. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 191-204, I0006 (15 pages)
Metal-ion batteries (MIBs), including alkali metal-ion (Li^(+), Na^(+), and K^(+)), multi-valent metal-ion (Zn^(2+), Mg^(2+), and Al^(3+)), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is significantly influenced by numerous variables, resulting in multi-dimensional and long-term challenges in the field of battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the utilization of ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in various applications, which encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
Keywords: Metal-ion battery; Machine learning; Electrode materials; Characterization; Status evaluation
Download PDF
Machine learning model based on non-convex penalized huberized-SVM
16
Authors: Peng Wang, Ji Guo, Lin-Feng Li. 《Journal of Electronic Science and Technology》 EI CAS CSCD, 2024, No. 1, pp. 81-94 (14 pages)
The support vector machine (SVM) is a classical machine learning method. Both the hinge loss and the least absolute shrinkage and selection operator (LASSO) penalty are usually used in traditional SVMs. However, the hinge loss is not differentiable, and the LASSO penalty does not have the Oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that has the advantages of both computational simplicity and the Oracle property, yielding higher accuracy than traditional SVMs. It is experimentally demonstrated that the two non-convex huberized-SVM methods, the smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and the minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM in terms of prediction accuracy and classifier performance. They are also superior in terms of variable selection, especially when there is high linear correlation between variables. When applied to the prediction of financial distress in listed companies, the variables that can affect and predict distress are accurately filtered out. Among all the indicators, the per-share indicators have the greatest influence, while the solvency indicators have the weakest. Listed companies can assess their financial situation with the indicators screened by our algorithm and issue an early warning of possible financial distress with higher precision.
Keywords: Huberized loss; Machine learning; Non-convex penalties; Support vector machine (SVM)
Download PDF
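The two building blocks named in the abstract above, the huberized hinge loss and the SCAD penalty, can be written down directly. This is a sketch: the smoothing parameter δ and the pair (λ, a) are illustrative defaults (a = 3.7 is the value conventionally used for SCAD), not the paper's tuned settings.

```python
def huberized_hinge(t, delta=2.0):
    """Smoothed, everywhere-differentiable version of the hinge loss
    max(0, 1 - t), where t = y * f(x) is the classification margin."""
    if t > 1.0:
        return 0.0
    if t > 1.0 - delta:
        return (1.0 - t) ** 2 / (2.0 * delta)  # quadratic smoothing zone
    return 1.0 - t - delta / 2.0               # linear tail, like the hinge

def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty on one coefficient: behaves like the LASSO (lam*|beta|)
    near zero but levels off to a constant for large coefficients, which is
    what gives it the Oracle property the LASSO lacks."""
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2.0 * a * lam * b - b ** 2 - lam ** 2) / (2.0 * (a - 1.0))
    return (a + 1.0) * lam ** 2 / 2.0
```

MCP follows the same pattern with a single quadratic-to-flat transition; the HSVM objective in the paper is the sum of huberized losses over the sample plus one of these penalties over the coefficients.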
Assessment of compressive strength of jet grouting by machine learning
17
Authors: Esteban Diaz, Edgar Leonardo Salamanca-Medina, Roberto Tomas. 《Journal of Rock Mechanics and Geotechnical Engineering》 SCIE CSCD, 2024, No. 1, pp. 102-111 (10 pages)
Jet grouting is one of the most popular soil improvement techniques, but its design usually involves great uncertainties that can lead to economic cost overruns in construction projects. The high dispersion in the properties of the improved material leads designers to assume a conservative, arbitrary, and unjustified strength, which is sometimes even made conditional on the results of test fields. The present paper presents an approach for predicting the uniaxial compressive strength (UCS) of jet grouting columns based on the analysis of several machine learning algorithms on a database of 854 results, mainly collected from different research papers. The selected machine learning model (extremely randomized trees) relates the soil type and various parameters of the technique to the value of the compressive strength. Despite the complex mechanism that surrounds the jet grouting process, evidenced by the high dispersion and low correlation of the variables studied, the trained model allows optimal prediction of compressive strength values, with a significant improvement with respect to existing works. Consequently, this work proposes for the first time a reliable and easily applicable approach for estimating the compressive strength of jet grouting columns.
Keywords: Jet grouting; Ground improvement; Compressive strength; Machine learning
Download PDF
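The extremely randomized trees model selected above differs from a random forest in one key way: split thresholds are drawn at random rather than optimized, and the ensemble average smooths out the resulting noise. The following toy sketch shows that idea with depth-1 trees on a single made-up feature; the real model uses many features (soil type, jet parameters, etc.) and full-depth trees.

```python
import random

def fit_extra_stump(xs, ys, rng):
    """One 'extremely randomized' depth-1 tree: the split threshold is
    drawn uniformly at random instead of being chosen to minimize error."""
    cut = rng.uniform(min(xs), max(xs))
    left = [y for x, y in zip(xs, ys) if x <= cut]
    right = [y for x, y in zip(xs, ys) if x > cut]
    mean = lambda v: sum(v) / len(v) if v else sum(ys) / len(ys)
    return cut, mean(left), mean(right)

def fit_ensemble(xs, ys, n_trees=200, seed=0):
    rng = random.Random(seed)
    return [fit_extra_stump(xs, ys, rng) for _ in range(n_trees)]

def predict(ensemble, x):
    """Average the stump outputs, as in bagged tree ensembles."""
    preds = [lo if x <= cut else hi for cut, lo, hi in ensemble]
    return sum(preds) / len(preds)

# Made-up data: a single 'jet parameter' vs UCS with an increasing trend.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.0, 2.5, 3.0, 4.0, 5.0, 6.5, 7.0, 8.0]
model = fit_ensemble(xs, ys)
```

Because no split search is performed, fitting is cheap, and the randomness of the cuts acts as extra variance reduction when many trees are averaged, which suits the highly dispersed, weakly correlated data the abstract describes.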
Reconstruction of poloidal magnetic field profiles in field-reversed configurations with machine learning in laser-driven ion-beam trace probe
18
Authors: 徐栩涛, 徐田超 (+4 more authors), 肖池阶, 张祖煜, 何任川, 袁瑞鑫, 许平. 《Plasma Science and Technology》 SCIE EI CAS CSCD, 2024, No. 3, pp. 83-87 (5 pages)
The diagnostic of the poloidal magnetic field (B_(p)) in a field-reversed configuration (FRC), which is promising for achieving efficient plasma confinement due to its high β, is a huge challenge because B_(p) is small and reverses around the core region. The laser-driven ion-beam trace probe (LITP) has recently been proven to diagnose the B_(p) profile in FRCs, whereas the existing iterative reconstruction approach cannot handle measurement errors well. In this work, the machine learning approach, a fast-growing and powerful technology in automation and control, is applied to B_(p) reconstruction in FRCs based on LITP principles, and it performs better than the previous approach. The machine learning approach achieves a more accurate reconstruction of the B_(p) profile when 20% detector errors are considered, 15% B_(p) fluctuation is introduced, and the size of the detector is remarkably reduced. Therefore, machine learning could be a powerful support for LITP diagnosis of the magnetic field in magnetic confinement fusion devices.
Keywords: FRC; LITP; Poloidal magnetic field diagnostics; Machine learning
Download PDF
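The reconstruction scheme described above, learning a map from ion-trace detector readings back to the B_(p) profile, reduces in its simplest form to supervised regression on simulated (signal, field) pairs. The sketch below is a deliberate toy: a linear forward model stands in for the real LITP trace physics, Gaussian noise stands in for detector errors, and a one-parameter least-squares fit stands in for the trained network. Every function name and constant here is illustrative, not from the paper.

```python
import random

def forward_model(bp, gain=0.8, offset=0.1):
    """Stand-in for the trace physics: detector signal from a field value.
    In the real problem this is the simulated ion trajectory, not a line."""
    return gain * bp + offset

def train_inverse(n=500, noise=0.1, seed=1):
    """Fit signal -> bp by ordinary least squares on simulated pairs,
    mimicking supervised training on noisy simulated detector data."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        bp = rng.uniform(-1.0, 1.0)  # the field reverses sign in an FRC
        s = forward_model(bp) + rng.gauss(0.0, noise)
        pairs.append((s, bp))
    ms = sum(s for s, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((s - ms) * (b - mb) for s, b in pairs)
    var = sum((s - ms) ** 2 for s, _ in pairs)
    slope = cov / var
    return lambda s: mb + slope * (s - ms)  # learned inverse map

reconstruct = train_inverse()
```

The appeal of the learned inverse, which the abstract highlights, is that it is fitted with noise already present in the training data, so it degrades gracefully under detector errors where an exact iterative inversion does not.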
Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
19
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma (+2 more authors), Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 2, pp. 2441-2468 (28 pages)
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively diagnose cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques, where the fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models has been deployed using the curated dataset. The analysis outcome proves that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperform most of the existing models with impressive results.
Keywords: Autoencoder; Breast cancer; Deep neural network; Convolutional neural network; Image processing; Machine learning; Deep learning
Download PDF
Recent advances in protein conformation sampling by combining machine learning with molecular simulation
20
Authors: 唐一鸣, 杨中元 (+7 more authors), 姚逸飞, 周运, 谈圆, 王子超, 潘瞳, 熊瑞, 孙俊力, 韦广红. 《Chinese Physics B》 SCIE EI CAS CSCD, 2024, No. 3, pp. 80-87 (8 pages)
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Keywords: Machine learning; Molecular simulation; Protein conformational space; Enhanced sampling
Full-text delivery
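The bias-potential idea behind the enhanced-sampling methods reviewed above can be illustrated with a metadynamics-style sketch in one dimension: Gaussians are deposited at visited values of a reaction coordinate, so the effective energy surface fills in and barriers are crossed far sooner than in an unbiased walk. The double-well potential, temperature, and all deposit parameters below are illustrative choices, not from any specific method in the review.

```python
import math
import random

def potential(x):
    """Toy double-well free-energy surface along one reaction coordinate,
    with minima at x = -1 and x = +1 and a barrier of height 1 at x = 0."""
    return (x ** 2 - 1.0) ** 2

def biased_walk(steps=8000, height=0.05, width=0.3, seed=2):
    """Metropolis walk that periodically deposits a Gaussian bias at the
    current coordinate -- the metadynamics idea in one dimension."""
    rng = random.Random(seed)
    centers = []

    def bias(x):
        return sum(height * math.exp(-(x - c) ** 2 / (2 * width ** 2))
                   for c in centers)

    x, visited = -1.0, []
    for step in range(steps):
        x_new = x + rng.uniform(-0.2, 0.2)
        d_e = (potential(x_new) + bias(x_new)) - (potential(x) + bias(x))
        if d_e <= 0 or rng.random() < math.exp(-d_e / 0.1):  # kT = 0.1
            x = x_new
        if step % 40 == 0:
            centers.append(x)  # deposit bias where the walker currently is
        visited.append(x)
    return visited

visited = biased_walk()
```

At kT = 0.1 an unbiased walk starting at x = -1 almost never climbs the barrier of height 1; with the accumulating bias the left well fills within a few hundred deposits and the walker spills into the right well. The ML methods in the review replace the hand-picked coordinate x with a learned reaction coordinate and shape the bias with a neural network rather than fixed Gaussians.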