Journal Articles
347,391 articles found
1. Advancements in machine learning for material design and process optimization in the field of additive manufacturing
Authors: Hao-ran Zhou, Hao Yang, Huai-qian Li, Ying-chun Ma, Sen Yu, Jian Shi, Jing-chang Cheng, Peng Gao, Bo Yu, Zhi-quan Miao, Yan-peng Wei. China Foundry, SCIE/EI/CAS/CSCD, 2024, Issue 2, pp. 101-115 (15 pages)
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues such as unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to engage deeply in the development of additive manufacturing processes, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. It first introduces the background and significance of machine learning-assisted design in additive manufacturing processes. It then delves into the application of machine learning in additive manufacturing, focusing on model design and process guidance. Finally, it concludes by summarizing and forecasting the development trends of machine learning technology in the field of additive manufacturing.
Keywords: additive manufacturing; machine learning; material design; process optimization; intersection of disciplines; embedded machine learning
2. Machine learning for predicting the outcome of terminal ballistics events
Authors: Shannon Ryan, Neeraj Mohan Sushma, Arun Kumar AV, Julian Berk, Tahrima Hashem, Santu Rana, Svetha Venkatesh. Defence Technology, SCIE/EI/CAS/CSCD, 2024, Issue 1, pp. 14-26 (13 pages)
Machine learning (ML) is well suited for the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis, such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable from any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
Keywords: machine learning; artificial intelligence; physics-informed machine learning; terminal ballistics; armour
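One physics-informed strategy the abstract names is enforced monotonicity (for example, penetration depth should not decrease as impact velocity increases). As a standalone illustration of the constraint itself — not the authors' implementation — the pool-adjacent-violators algorithm below computes the least-squares non-decreasing fit to a sequence, which is the core operation behind isotonic/monotone regression:

```python
def pava(y, w=None):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [mean, total_weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while adjacent blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)  # expand each pooled block back to its points
    return fit
```

In practice the monotone constraint is usually imposed inside the learner itself (XGBoost, for instance, supports monotone constraints natively); this sketch only isolates what the constraint does to a fit.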
3. Machine learning applications in stroke medicine: advancements, challenges, and future prospectives
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. Neural Regeneration Research, SCIE/CAS/CSCD, 2024, Issue 4, pp. 769-773 (5 pages)
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize all the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. At the same time, another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease; deep learning; machine learning; reinforcement learning; stroke; stroke therapy; supervised learning; unsupervised learning
4. Comparative study of different machine learning models in landslide susceptibility assessment: A case study of Conghua District, Guangzhou, China
Authors: Ao Zhang, Xin-wen Zhao, Xing-yuezi Zhao, Xiao-zhan Zheng, Min Zeng, Xuan Huang, Pan Wu, Tuo Jiang, Shi-chang Wang, Jun He, Yi-yong Li. China Geology, CAS/CSCD, 2024, Issue 1, pp. 104-115 (12 pages)
Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction effects of different machine learning models, Conghua District, which is the most prone to landslide disasters in Guangzhou, was selected for landslide susceptibility evaluation. The evaluation factors were selected by using correlation analysis and the variance inflation factor method. Applying four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB), landslide models were constructed. Comparative analysis and evaluation of the models were conducted through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that an integrated (ensemble) algorithm has better predictive ability than a single classification algorithm in regional landslide classification problems.
Keywords: landslide susceptibility assessment; machine learning; Logistic Regression; Random Forest; Support Vector Machines; XGBoost; assessment model; geological disaster investigation and prevention engineering
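The AUC values quoted above are areas under ROC curves. AUC has a useful probabilistic reading — the chance that a randomly chosen positive (landslide) cell is scored higher than a randomly chosen negative one — which the following self-contained sketch computes directly by pairwise comparison (the scores here are hypothetical, not the study's data):

```python
def auc(pos_scores, neg_scores):
    """AUC as a pairwise win rate: P(pos > neg) + 0.5 * P(pos == neg)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, `auc([0.9, 0.8], [0.1, 0.8])` counts 3 wins and 1 tie over 4 pairs, giving 0.875. This O(n·m) form is for illustration; library implementations integrate the ROC curve instead.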
5. A hybrid machine learning optimization algorithm for multivariable pore pressure prediction
Authors: Song Deng, Hao-Yu Pan, Hai-Ge Wang, Shou-Kun Xu, Xiao-Peng Yan, Chao-Wei Li, Ming-Guo Peng, Hao-Ping Peng, Lin Shi, Meng Cui, Fei Zhao. Petroleum Science, SCIE/EI/CAS/CSCD, 2024, Issue 1, pp. 535-550 (16 pages)
Pore pressure is essential data in drilling design, and its accurate prediction is necessary to ensure drilling safety and improve drilling efficiency. Traditional methods for predicting pore pressure are limited when forming particular structures and lithology. In this paper, a machine learning algorithm and the effective stress theorem are used to establish the transformation model between rock physical parameters and pore pressure. This study collects data from three wells. Well 1 had 881 data sets for model training, and Wells 2 and 3 had 538 and 464 data sets, respectively, for model testing. In this paper, support vector machine (SVM), random forest (RF), extreme gradient boosting (XGB), and multilayer perceptron (MLP) are selected as the machine learning algorithms for pore pressure modeling. In addition, this paper uses the grey wolf optimization (GWO) algorithm, particle swarm optimization (PSO) algorithm, sparrow search algorithm (SSA), and bat algorithm (BA) to establish a hybrid machine learning optimization algorithm, and proposes an improved grey wolf optimization (IGWO) algorithm. The IGWO-MLP model obtained the minimum root mean square error (RMSE) by using the 5-fold cross-validation method on the training data. For the pore pressure data in Well 2 and Well 3, the coefficients of determination (R²) of SVM, RF, XGB, and MLP are 0.9930 and 0.9446, 0.9943 and 0.9472, 0.9945 and 0.9488, and 0.9949 and 0.9574, respectively. MLP achieves optimal performance on both training and test data, and the MLP model shows a high degree of generalization. This indicates that the IGWO-MLP is an excellent predictor of pore pressure and can be used to predict pore pressure.
Keywords: pore pressure; grey wolf optimization; multilayer perceptron; effective stress; machine learning
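The grey wolf optimizer underlying the paper's IGWO variant moves a population of candidate solutions toward the three best ("alpha, beta, delta") found so far. Below is a minimal, generic GWO loop with elitist leader retention, shown on a toy sphere objective; this is a textbook sketch under stated assumptions, not the authors' improved IGWO, and in the paper the objective would be MLP validation error rather than a test function:

```python
import random

def gwo_minimize(f, dim, n_wolves=12, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal grey wolf optimizer: the pack moves toward the 3 best wolves."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                 # wolves[0..2] are alpha, beta, delta
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)          # exploration factor decays 2 -> 0
        for i in range(3, n_wolves):       # keep leaders, move the rest
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = a * (2 * r1 - 1)   # signed step scale
                    C = 2 * r2             # random leader weighting
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3)))  # average of 3 pulls, clamped
            wolves[i] = new
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])
```

Because the three leaders are retained each iteration, the best solution found never gets worse; the paper's IGWO modifies this baseline in ways not reproduced here.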
6. Machine learning in metal-ion battery research: Advancing material prediction, characterization, and status evaluation
Authors: Tong Yu, Chunyang Wang, Huicong Yang, Feng Li. Journal of Energy Chemistry, SCIE/EI/CAS/CSCD, 2024, Issue 3, pp. 191-204, I0006 (15 pages)
Metal-ion batteries (MIBs), including alkali metal-ion (Li⁺, Na⁺, and K⁺), multi-valent metal-ion (Zn²⁺, Mg²⁺, and Al³⁺), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is significantly influenced by numerous variables, resulting in multi-dimensional and long-term challenges in the field of battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the utilization of ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in various applications, which encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
Keywords: metal-ion battery; machine learning; electrode materials; characterization; status evaluation
7. Machine learning with active pharmaceutical ingredient/polymer interaction mechanism: Prediction for complex phase behaviors of pharmaceuticals and formulations
Authors: Kai Ge, Yiping Huang, Yuanhui Ji. Chinese Journal of Chemical Engineering, SCIE/EI/CAS/CSCD, 2024, Issue 2, pp. 263-272 (10 pages)
The high-throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients (APIs) with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations. In this work, a developed machine-learning model efficiently predicts the solubility of APIs in polymers by learning the phase equilibrium principle and using a few molecular descriptors. Under the few-shot learning framework, thermodynamic theory (perturbed-chain statistical associating fluid theory) was used for data augmentation, and computational chemistry was applied for the screening of molecular descriptors. The results showed that the developed machine-learning model can predict the API-polymer phase diagram accurately, broaden the solubility data of APIs in polymers, and successfully reproduce the relationship between API solubility and the interaction mechanisms between API and polymer, which provides efficient guidance for the development of pharmaceutical formulations.
Keywords: multi-task machine learning; density functional theory; hydrogen bond interaction; miscibility; solubility
8. Machine learning for membrane design and discovery
Authors: Haoyu Yin, Muzi Xu, Zhiyao Luo, Xiaotian Bi, Jiali Li, Sui Zhang, Xiaonan Wang. Green Energy & Environment, SCIE/EI/CAS/CSCD, 2024, Issue 1, pp. 54-70 (17 pages)
Membrane technologies are becoming increasingly versatile and helpful today for sustainable development. Machine learning (ML), an essential branch of artificial intelligence (AI), has substantially impacted the research and development norm of new materials for energy and the environment. This review provides an overview and perspectives on ML methodologies and their applications in membrane design and discovery. A brief overview of membrane technologies is first provided, with the current bottlenecks and potential solutions. Through an applications-based perspective of AI-aided membrane design and discovery, we further show how ML strategies are applied to the membrane discovery cycle (including membrane material design, membrane application, membrane process design, and knowledge extraction) in various membrane systems, including gas, liquid, and fuel cell separation membranes. Furthermore, the best practices of integrating ML methods and specific application targets in membrane design and discovery are presented, with an ideal paradigm proposed. The challenges to be addressed and the prospects of AI applications in membrane discovery are also highlighted at the end.
Keywords: machine learning; membranes; AI for membrane; data-driven design
9. Robust Machine Learning Mapping of sEMG Signals to Future Actuator Commands in Biomechatronic Devices
Authors: Ali Nasr, Sydney Bell, Rachel L. Whittaker, Clark R. Dickerson, John McPhee. Journal of Bionic Engineering, SCIE/EI/CSCD, 2024, Issue 1, pp. 270-287 (18 pages)
A machine learning model for regression of interrupted surface electromyography (sEMG) signals to future control-oriented signals (e.g., a robot's joint angle and assistive torque) of an active biomechatronic device for high-level myoelectric-based hierarchical control is proposed. A recurrent neural network (RNN) was trained using output data initially obtained from offline optimization of the biomechatronic (human-robot) device and shifted by the prediction horizon. The input of the RNN consisted of interrupted sEMG signals (to mimic signal disconnections) and previous kinematic signals of the assistive system. The RNN with a 0.1-s prediction horizon could predict the control-oriented joint angle and assistive torque with 92% and 86.5% regression accuracy, respectively, for the test dataset. This proposed approach permits a fast, predictive, and direct estimation of control-oriented signals instead of an iterative process that optimizes assistive torque in the inverse dynamic simulation of a multibody human-robot system. Training with these interrupted input signals significantly improves the regression accuracy in the case of sEMG signal disconnection. This Robust Predictive Control-oriented Machine Learning (Robust-MuscleNET) model can support volitional high-level myoelectric-based control of biomechatronic devices, such as exoskeletons, prostheses, and assistive/resistive robots. Future work should study the application to prosthesis control as well as the repeatability of the high-level controller with electrode shift. The low-level hierarchical controller that manages the human-robot interaction, the assistance/resistance strategy, and the actuator coordination should also be studied.
Keywords: myoelectric-based control; surface electromyography; machine learning; multibody system dynamics; exoskeleton; bionic
10. Terrorism Attack Classification Using Machine Learning: The Effectiveness of Using Textual Features Extracted from GTD Dataset
Authors: Mohammed Abdalsalam, Chunlin Li, Abdelghani Dahou, Natalia Kryvinska. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, Issue 2, pp. 1427-1467 (41 pages)
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Keywords: artificial intelligence; machine learning; natural language processing; data analytics; DistilBERT; feature extraction; terrorism classification; GTD dataset
11. Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials, SCIE/EI/CAS/CSCD, 2024, Issue 2, pp. 337-350 (14 pages)
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the input, and the corrosion rate as the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors that affected the corrosion rate were filtered using the recursive feature elimination (RFE) as well as XGBoost methods. The established ML models exhibited better prediction performance and generalization ability via property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property transformation model could effectively help with analyzing the corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
Keywords: machine learning; low-alloy steel; atmospheric corrosion prediction; corrosion rate; feature fusion
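Recursive feature elimination, mentioned above for filtering dominant descriptors, repeatedly refits a model and drops the least important feature until the desired number remains. The generic sketch below uses synthetic data and a plain least-squares coefficient ranking standing in for the paper's XGBoost importances (all names and data here are illustrative assumptions):

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination: refit, then drop the feature with the
    smallest standardized coefficient magnitude, until n_keep remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        importance = np.abs(coef) * X[:, keep].std(axis=0)
        keep.pop(int(np.argmin(importance)))  # discard the weakest feature
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2]  # only features 0 and 2 actually matter
```

With this construction, `rfe(X, y, 2)` recovers the two informative columns, since the noise columns receive near-zero coefficients at every refit.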
12. Enhanced prediction of anisotropic deformation behavior using machine learning with data augmentation
Authors: Sujeong Byun, Jinyeong Yu, Seho Cheon, Seong Ho Lee, Sung Hyuk Park, Taekyung Lee. Journal of Magnesium and Alloys, SCIE/EI/CAS/CSCD, 2024, Issue 1, pp. 186-196 (11 pages)
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
Keywords: plastic anisotropy; compression; annealing; machine learning; data augmentation
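The GRU at the core of the GAN-aided model processes a flow curve point by point, with gates deciding how much of the past state to keep versus rewrite. A single-cell forward pass in NumPy makes the recurrence concrete; the dimensions and initialization below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def gru_cell(x, h, params):
    """One GRU step: update/reset gates blend the old state with a candidate."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h + bz)                   # update gate
    r = sig(Wr @ x + Ur @ h + br)                   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1 - z) * h + z * h_tilde                # interpolate old/new

def init_params(nx, nh, seed=0):
    """Small random weights for a GRU cell with input size nx, state size nh."""
    rng = np.random.default_rng(seed)
    m = lambda *s: rng.normal(0.0, 0.1, s)
    return (m(nh, nx), m(nh, nh), np.zeros(nh),
            m(nh, nx), m(nh, nh), np.zeros(nh),
            m(nh, nx), m(nh, nh), np.zeros(nh))
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, every component of the hidden state stays in (-1, 1), which is part of what makes GRUs stable over long sequences.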
13. Quantification of the concrete freeze-thaw environment across the Qinghai-Tibet Plateau based on machine learning algorithms
Authors: QIN Yanhui, MA Haoyuan, ZHANG Lele, YIN Jinshuai, ZHENG Xionghui, LI Shuo. Journal of Mountain Science, SCIE/CSCD, 2024, Issue 1, pp. 322-334 (13 pages)
The reasonable quantification of the concrete freezing environment on the Qinghai-Tibet Plateau (QTP) is the primary issue in frost-resistant concrete design, which is one of the challenges that QTP engineering managers should take into account. In this paper, we propose a more realistic method to calculate the number of concrete freeze-thaw cycles (NFTCs) on the QTP. The calculated results show that the NFTCs increase as the altitude of the meteorological station increases, with the average NFTCs being 208.7. Four machine learning methods, i.e., the random forest (RF) model, generalized boosting method (GBM), generalized linear model (GLM), and generalized additive model (GAM), are used to fit the NFTCs. The root mean square error (RMSE) values of the RF, GBM, GLM, and GAM are 32.3, 4.3, 247.9, and 161.3, respectively. The R² values of the RF, GBM, GLM, and GAM are 0.93, 0.99, 0.48, and 0.66, respectively. The GBM method performs best of the four methods, as shown by the RMSE and R² values. The quantitative results from the GBM method indicate that the lowest, medium, and highest NFTC values are distributed in the northern, central, and southern parts of the QTP, respectively. The annual NFTCs in the QTP region are mainly concentrated at 160 and above, and the average NFTCs is 200 across the QTP. Our results can provide scientific guidance and a theoretical basis for the freezing resistance design of concrete in various projects on the QTP.
Keywords: freeze-thaw cycles; quantification; machine learning algorithms; Qinghai-Tibet Plateau; concrete
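The model comparison above ranks the four learners by RMSE and R². Both metrics are straightforward to compute from predictions, as this self-contained sketch shows (the data in the example are hypothetical):

```python
def rmse(y_true, y_pred):
    """Root mean square error: typical size of a prediction error."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.99 (the GBM result) means the model explains 99% of the variance in the observed NFTCs; a constant prediction at the mean would score R² = 0.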
14. A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su'ud. Computers, Materials & Continua, SCIE/EI, 2024, Issue 3, pp. 2967-3000 (34 pages)
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning; deep learning; unmanned aerial vehicles; multi-spectral images; image recognition; object detection; hyperspectral images; aerial photography
15. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua, SCIE/EI, 2024, Issue 2, pp. 2441-2468 (28 pages)
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis shows that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperformed most of the existing models, with impressive results.
Keywords: autoencoder; breast cancer; deep neural network; convolutional neural network; image processing; machine learning; deep learning
Recent advances in protein conformation sampling by combining machine learning with molecular simulation
16
作者 唐一鸣 杨中元 +7 位作者 姚逸飞 周运 谈圆 王子超 潘瞳 熊瑞 孙俊力 韦广红 《Chinese Physics B》 SCIE EI CAS CSCD 2024年第3期80-87,共8页
The rapid advancement and broad application of machine learning(ML)have driven a groundbreaking revolution in computational biology.One of the most cutting-edge and important applications of ML is its integration with... The rapid advancement and broad application of machine learning(ML)have driven a groundbreaking revolution in computational biology.One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules.This review focuses on recent studies that utilize ML-based techniques in the exploration of protein conformational landscape.We first highlight the recent development of ML-aided enhanced sampling methods,including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potential,or facilitate the exploration of the unsampled region of the energy landscape.Further,we review the development of autoencoder based methods that combine molecular simulations and deep learning to expand the search for protein conformations.Lastly,we discuss the cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights.Collectively,this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins. 展开更多
Keywords: machine learning, molecular simulation, protein conformational space, enhanced sampling
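Autoencoder-based sampling methods of the kind reviewed above compress high-dimensional conformations into a low-dimensional latent space and steer exploration there. The toy sketch below trains a one-unit linear autoencoder (tied encoder/decoder weight) by gradient descent on synthetic 2-D points standing in for conformations; real applications use deep nonlinear networks on molecular-dynamics coordinates, so everything here is purely illustrative:

```python
import random

random.seed(0)
# Synthetic "conformations": 2-D points with one dominant direction of
# variation, mimicking a single dominant collective motion.
data = [(random.gauss(0.0, 2.0), random.gauss(0.0, 0.3)) for _ in range(200)]

w = [0.6, 0.8]   # tied encoder/decoder weight; encode z = w.x, decode x' = z*w
lr = 0.01

def recon_error(w):
    """Mean squared reconstruction error of the 1-D latent autoencoder."""
    err = 0.0
    for x0, x1 in data:
        z = w[0] * x0 + w[1] * x1
        err += (x0 - z * w[0]) ** 2 + (x1 - z * w[1]) ** 2
    return err / len(data)

e_start = recon_error(w)
for _ in range(300):
    g0 = g1 = 0.0
    for x0, x1 in data:
        z = w[0] * x0 + w[1] * x1
        r0, r1 = z * w[0] - x0, z * w[1] - x1   # reconstruction residual
        # gradient of 0.5*|residual|^2 with respect to the tied weight
        g0 += r0 * z + (r0 * w[0] + r1 * w[1]) * x0
        g1 += r1 * z + (r0 * w[0] + r1 * w[1]) * x1
    w = [w[0] - lr * g0 / len(data), w[1] - lr * g1 / len(data)]
e_end = recon_error(w)
print(e_start, e_end)  # error drops as w aligns with the dominant direction
```

For a linear autoencoder this recovers the principal component; the deep variants in the review generalize the same compress-then-explore idea to nonlinear collective variables.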
From prediction to prevention:Machine learning revolutionizes hepatocellular carcinoma recurrence monitoring
17
Authors: Mariana Michelle Ramírez-Mejía, Nahum Méndez-Sánchez 《World Journal of Gastroenterology》 SCIE CAS 2024, No. 7, pp. 631-635 (5 pages)
In this editorial, we comment on the article by Zhang et al entitled "Development of a machine learning-based model for predicting the risk of early postoperative recurrence of hepatocellular carcinoma". Hepatocellular carcinoma (HCC), which is characterized by high incidence and mortality rates, remains a major global health challenge, primarily due to the critical issue of postoperative recurrence. Early recurrence, defined as recurrence that occurs within 2 years post-treatment, is linked to the hidden spread of the primary tumor and significantly impacts patient survival. Traditional predictive factors, including both patient- and treatment-related factors, have limited predictive ability with respect to HCC recurrence. The integration of machine learning algorithms, fueled by the exponential growth of computational power, has revolutionized HCC research. The study by Zhang et al demonstrated the use of a groundbreaking preoperative prediction model for early postoperative HCC recurrence. Challenges persist, including sample size constraints, issues with data handling, and the need for further validation and interpretability. The study emphasizes the need for collaborative efforts, multicenter studies, and comparative analyses to validate and refine the model. Overcoming these challenges and exploring innovative approaches, such as multi-omics integration, will enhance personalized oncology care. This study marks a significant stride toward precise, efficient, and personalized oncology practices, thus offering hope for improved patient outcomes in the field of HCC treatment.
Keywords: Hepatocellular carcinoma, early recurrence, machine learning, XGBoost model, predictive precision medicine, clinical utility, personalized interventions
Leveraging machine learning for early recurrence prediction in hepatocellular carcinoma:A step towards precision medicine
18
Authors: Abhimati Ravikulan, Kamran Rostami 《World Journal of Gastroenterology》 SCIE CAS 2024, No. 5, pp. 424-428 (5 pages)
The high rate of early recurrence in hepatocellular carcinoma (HCC) after curative surgical intervention poses a substantial clinical hurdle, impacting patient outcomes and complicating postoperative management. The advent of machine learning provides a unique opportunity to harness vast datasets, identifying subtle patterns and factors that elude conventional prognostic methods. Machine learning models, equipped with the ability to analyse intricate relationships within datasets, have shown promise in predicting outcomes in various medical disciplines. In the context of HCC, the application of machine learning to predict early recurrence holds potential for personalized postoperative care strategies. This editorial comments on a study exploring the merits and efficacy of random survival forests (RSF) in identifying significant risk factors for recurrence, stratifying patients at low and high risk of HCC recurrence, and comparing this approach to traditional Cox proportional hazards (CPH) models. In doing so, the study demonstrated that RSF models are superior to traditional CPH models in predicting recurrence of HCC and represent a giant leap towards precision medicine.
Keywords: machine learning, artificial intelligence, hepatocellular carcinoma, hepatology, early recurrence, liver resection
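RSF and CPH models are most often compared through the concordance index (C-index): the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival. The sketch below is a minimal, self-contained version of Harrell's C-index on made-up recurrence data; the times, event flags, and risk scores are illustrative, not from the study discussed:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs (the patient with
    the shorter follow-up time had an observed event), count pairs where the
    shorter-lived patient was assigned the higher risk; risk ties score 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # pair is comparable
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Months to recurrence, event observed (1) or censored (0), model risk score.
times  = [5, 12, 20, 30, 42]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.7, 0.4, 0.5, 0.1]
print(c_index(times, events, risks))  # perfectly ordered risks -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; reporting it for both the RSF and CPH models on held-out patients is the standard way such a comparison is made.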
Robust Machine Learning Technique to Classify COVID-19 Using Fusion of Texture and Vesselness of X-Ray Images
19
Authors: Shaik Mahaboob Basha, Victor Hugo C. de Albuquerque, Samia Allaoua Chelloug, Mohamed Abd Elaziz, Shaik Hashmitha Mohisin, Suhail Parvaze Pathan 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 2, pp. 1981-2004 (24 pages)
Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high pixel intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture and vessel-based features provides an effective ML model for aiding diagnosis. The ML model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four different pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as mean, standard deviation, skewness, and percentiles based on the filtered images. Notably, the study achieves a considerable improvement in categorizing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the methodology implemented.
Keywords: chest radiography (CXR) image, COVID-19, classifier, machine learning, random forest, texture analysis
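A GLCM tallies how often pairs of pixel intensities co-occur at a fixed offset; Haralick statistics such as contrast are then read off the normalized matrix. The sketch below shows the core idea on tiny quantized images. Production pipelines (e.g. scikit-image's `graycomatrix`/`graycoprops`) add normalization options, symmetry, and multiple offsets and angles; the arrays here are illustrative only:

```python
def glcm(img, levels, dx=1, dy=0):
    """Co-occurrence counts P[i][j]: pixel value i with neighbor value j
    at offset (dx, dy)."""
    P = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                P[img[y][x]][img[ny][nx]] += 1
    return P

def contrast(P):
    """Haralick contrast: sum of p(i,j) * (i-j)^2 over the normalized matrix."""
    total = sum(sum(row) for row in P)
    return sum(P[i][j] * (i - j) ** 2
               for i in range(len(P)) for j in range(len(P))) / total

flat  = [[1, 1, 1], [1, 1, 1]]   # uniform region -> zero contrast
edges = [[0, 3, 0], [3, 0, 3]]   # alternating intensities -> high contrast
print(contrast(glcm(flat, 4)), contrast(glcm(edges, 4)))  # -> 0.0 9.0
```

In the study's pipeline, such texture statistics are concatenated with responses of the Frangi, Sato, and Meijering vesselness filters before being fed to the DT/RF classifiers.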
Computing large deviation prefactors of stochastic dynamical systems based on machine learning
20
Authors: 李扬, 袁胜兰, 陆凌宏志, 刘先斌 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 4, pp. 364-373 (10 pages)
We present a large deviation theory that characterizes the exponential estimate for rare events in stochastic dynamical systems in the limit of weak noise. We aim to consider a next-to-leading-order approximation for more accurate calculation of the mean exit time by computing large deviation prefactors with the aid of machine learning. More specifically, we design a neural network framework to compute the quasipotential, most probable paths, and prefactors based on the orthogonal decomposition of a vector field. We corroborate the higher effectiveness and accuracy of our algorithm with two toy models. Numerical experiments demonstrate its powerful functionality in exploring the internal mechanism of rare events triggered by weak random fluctuations.
Keywords: machine learning, large deviation prefactors, stochastic dynamical systems, rare events
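The orthogonal decomposition mentioned in the abstract, and the role of the prefactor in the exit-time estimate, can be written schematically as follows; this uses standard weak-noise large-deviation notation, and the paper's exact conventions may differ:

```latex
% Orthogonal decomposition of the drift: gradient of the quasipotential U
% plus a rotational component l orthogonal to it.
\[
  b(x) = -\nabla U(x) + l(x), \qquad \nabla U(x) \cdot l(x) = 0 .
\]
% Mean exit time from a basin with stable state x_s across a saddle x^*,
% for noise strength \varepsilon: the exponent is the leading-order
% large-deviation estimate; the prefactor C is the next-to-leading
% correction that the neural network framework computes.
\[
  \mathbb{E}[\tau] \simeq C \, \exp\!\left( \frac{U(x^{*}) - U(x_s)}{\varepsilon} \right).
\]
```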