Journal Articles
259,969 articles found
1. AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2137-2158 (22 pages).
Abstract: In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI using a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification, utilizing deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Keywords: automated machine learning, neural networks, deep learning, arrhythmias
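The weighted-ensemble step described in this abstract can be sketched as follows. A minimal illustration with scikit-learn on synthetic data; the model set, weighting rule, and features are placeholders, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for ECG-derived features; the paper uses real arrhythmia data.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = [RandomForestClassifier(random_state=0), LogisticRegression(max_iter=1000)]
for m in models:
    m.fit(X_tr, y_tr)

# Weight each model by its validation accuracy, then soft-vote the probabilities.
weights = np.array([m.score(X_val, y_val) for m in models])
weights /= weights.sum()
proba = sum(w * m.predict_proba(X_val) for w, m in zip(weights, models))
y_pred = proba.argmax(axis=1)
print("weighted-ensemble accuracy:", (y_pred == y_val).mean())
```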
2. Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare
Authors: Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 127-144 (18 pages).
Abstract: Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited, common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model, named HVCNNM, is proposed; it offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, such models can be optimized for real-time performance, learn from large amounts of data, and scale to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Keywords: computer vision, deep learning, gait recognition, sign language recognition, machine learning
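A compact CNN of the kind this abstract describes can be sketched in a few lines. This is an illustrative PyTorch skeleton; the layer sizes, input resolution, and class count (26, as for an ASL alphabet) are assumptions, not the HVCNNM architecture.

```python
import torch
import torch.nn as nn

class SmallGestureCNN(nn.Module):
    """Minimal gesture classifier; sizes are illustrative, not HVCNNM's."""
    def __init__(self, n_classes: int = 26):  # e.g., ASL alphabet
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallGestureCNN()
logits = model(torch.randn(8, 3, 64, 64))  # batch of 8 RGB 64x64 gesture crops
print(logits.shape)  # torch.Size([8, 26])
```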
3. A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su'ud. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2967-3000 (34 pages).
Abstract: Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications to classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning, deep learning, unmanned aerial vehicles, multi-spectral images, image recognition, object detection, hyperspectral images, aerial photography
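The survey's finding that SVMs are a workhorse for hyperspectral classification corresponds to a simple per-pixel pipeline. A minimal sketch with scikit-learn on a synthetic cube; the scene size, band count, and classes are placeholders for a real hyperspectral dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: a 50x50 scene, 100 spectral bands, 4 crop classes.
H, W, B = 50, 50, 100
cube = np.random.rand(H, W, B)
labels = np.random.randint(0, 4, size=(H, W))

X = cube.reshape(-1, B)  # one row per pixel spectrum
y = labels.reshape(-1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("per-pixel accuracy:", clf.score(X_te, y_te))  # ~chance on random labels
```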
4. UAV-Assisted Dynamic Avatar Task Migration for Vehicular Metaverse Services: A Multi-Agent Deep Reinforcement Learning Approach
Authors: Jiawen Kang, Junlong Chen, Minrui Xu, Zehui Xiong, Yutao Jiao, Luchao Han, Dusit Niyato, Yongju Tong, Shengli Xie. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 2, pp. 430-445 (16 pages).
Abstract: Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for vehicles to independently make avatar migration decisions depending on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution by approximately 20% in UAV-assisted vehicular Metaverses.
Keywords: avatar, blockchain, Metaverses, multi-agent deep reinforcement learning, transformer, UAVs
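The per-vehicle policy at the heart of such a MAPPO formulation is just a network mapping a local observation to a distribution over migration targets. A minimal PyTorch sketch; the observation contents, dimensions, and target count are assumptions, and the paper's transformer-based MAPPO setup is considerably richer.

```python
import torch
import torch.nn as nn

class MigrationActor(nn.Module):
    """Per-vehicle policy head: maps a local observation (e.g., position,
    velocity, task size, candidate-server loads) to a distribution over
    migration targets (RSU/UAV indices). Dimensions are illustrative."""
    def __init__(self, obs_dim: int = 12, n_targets: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, n_targets),
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(obs))

actor = MigrationActor()
obs = torch.randn(4, 12)          # observations for 4 vehicles
dist = actor(obs)
action = dist.sample()            # chosen RSU/UAV index per vehicle
log_prob = dist.log_prob(action)  # would feed the PPO clipped objective
print(action, log_prob)
```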
5. A Deep Learning Approach for Landmines Detection Based on Airborne Magnetometry Imaging and Edge Computing
Authors: Ahmed Barnawi, Krishan Kumar, Neeraj Kumar, Bander Alzahrani, Amal Almansour. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 2117-2137 (21 pages).
Abstract: Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle)-based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried-object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
Keywords: CNN, deep learning, landmine detection, magnetometer, mean average precision, UAV
6. A Deep Learning Approach to Shape Optimization Problems for Flexoelectric Materials Using the Isogeometric Finite Element Method
Authors: Yu Cheng, Yajun Huang, Shuai Li, Zhongbin Zhou, Xiaohui Yuan, Yanming Xu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 1935-1960 (26 pages).
Abstract: A new approach for flexoelectric material shape optimization is proposed in this study. In this work, a proxy model based on an artificial neural network (ANN) is used to solve the parameter optimization and shape optimization problems. To improve the fitting ability of the neural network, we use the idea of pre-training to determine the structure of the network and combine different optimizers for training. The isogeometric analysis-finite element method (IGA-FEM) is used to discretize the flexoelectric theoretical formulas and obtain samples, which helps the ANN build a proxy model from the model shape to the target value. The effectiveness of the proposed method is verified through two numerical examples of parameter optimization and one numerical example of shape optimization.
Keywords: shape optimization, deep learning, flexoelectric structure, finite element method, isogeometric analysis
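The proxy-model idea — fit a network to (shape parameters, objective) samples, then optimize the parameters through the frozen surrogate — can be sketched compactly. A PyTorch illustration under stated assumptions: the quadratic "objective" stands in for the IGA-FEM samples the paper generates, and the network size is arbitrary.

```python
import torch
import torch.nn as nn

# Stage 1: fit an ANN surrogate on (shape parameters -> objective) samples;
# the quadratic below is a placeholder for IGA-FEM-generated data.
torch.manual_seed(0)
P = torch.rand(200, 4)                          # 4 shape parameters per sample
J = ((P - 0.3) ** 2).sum(dim=1, keepdim=True)   # placeholder objective values

surrogate = nn.Sequential(nn.Linear(4, 32), nn.Tanh(),
                          nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(P), J)
    loss.backward()
    opt.step()

# Stage 2: freeze the surrogate and optimize the shape parameters through it.
for q in surrogate.parameters():
    q.requires_grad_(False)
p = torch.full((1, 4), 0.8, requires_grad=True)
shape_opt = torch.optim.Adam([p], lr=5e-2)
for _ in range(200):
    shape_opt.zero_grad()
    surrogate(p).sum().backward()
    shape_opt.step()
print("optimized parameters:", p.detach().numpy())  # should approach 0.3
```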
7. Fault Estimation for a Class of Markov Jump Piecewise-Affine Systems: Current Feedback Based Iterative Learning Approach
Authors: Yanzheng Zhu, Nuo Xu, Fen Wu, Xinkai Chen, Donghua Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 2, pp. 418-429 (12 pages).
Abstract: In this paper, the issues of stochastic stability analysis and fault estimation are investigated for a class of continuous-time Markov jump piecewise-affine (PWA) systems subject to actuator and sensor faults. Firstly, a novel mode-dependent PWA iterative learning observer with current feedback is designed to estimate the system states and faults simultaneously; it contains both the previous-iteration information and the current feedback mechanism. The auxiliary feedback channel improves the response speed of the observer, so the estimation error converges to zero rapidly. Then, sufficient conditions for stochastic stability with guaranteed performance are demonstrated for the estimation error system, and the equivalence relations between the system information and the estimated information are established via an iterative accumulating representation. Finally, two illustrative examples, including a class of tunnel diode circuit systems, are presented to fully demonstrate the effectiveness and superiority of the proposed iterative learning observer with current feedback.
Keywords: current feedback, fault estimation, iterative learning observer, Markov jump piecewise-affine system
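To make the mechanism concrete, a schematic form of a mode-dependent iterative learning observer with a current-feedback channel can be written as follows. This is a generic textbook-style form under assumed notation, not the paper's exact observer or gains.

```latex
% Schematic iterative learning observer with current feedback (generic form)
\begin{aligned}
\dot{\hat{x}}_k(t) &= A_i \hat{x}_k(t) + B_i u(t) + E_i \hat{f}_k(t) + L_i\, e_k(t),\\
e_k(t) &= y_k(t) - C_i \hat{x}_k(t),\\
\hat{f}_{k+1}(t) &= \hat{f}_k(t) + \Gamma_{1,i}\, e_k(t) + \Gamma_{2,i}\, e_{k+1}(t),
\end{aligned}
```

Here k indexes the learning iteration and i the active Markov/PWA mode; the Γ₂ term injects the current iteration's output error, which is the "current feedback" channel credited with accelerating convergence of the estimation error.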
8. Extensive prediction of drug response in mutation-subtype-specific LUAD with machine learning approach
Authors: KEGANG JIA, YAWEI WANG, QI CAO, YOUYU WANG. Oncology Research (SCIE), 2024, Issue 2, pp. 409-419 (11 pages).
Abstract: Lung cancer is the most prevalent cancer diagnosis and the leading cause of cancer death worldwide. Therapeutic failure in lung adenocarcinoma (LUAD) is heavily influenced by drug resistance. This challenge stems from the diverse cell populations within the tumor, each having unique genetic, epigenetic, and phenotypic profiles. Such variations lead to varied therapeutic responses, thereby contributing to tumor relapse and disease progression. Methods: The Genomics of Drug Sensitivity in Cancer (GDSC) database was used in this investigation to obtain the mRNA expression dataset, genomic mutation profile, and drug sensitivity information of NSCLC. Machine learning (ML) methods, including Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM), were used to predict the response status of each compound based on the mRNA and mutation characteristics determined using statistical methods. The most suitable method for each drug was proposed by comparing the prediction accuracy of the different ML methods, and the selected mRNA and mutation characteristics were identified as molecular features for the drug-responsive cancer subtype. Finally, the prognostic influence of molecular features on the mutational subtype of LUAD was evaluated in publicly available datasets. Results: Our analyses yielded 1,564 gene features and 45 mutational features for 46 drugs. Applying the ML approach to predict the drug response for each medication revealed a strong performance for SVM in predicting Afuresertib drug response (area under the curve [AUC] 0.875) using CIT, GAS2L3, STAG3L3, ATP2B4-mut, and IL15RA-mut as molecular features. Furthermore, the ANN algorithm using 9 mRNA characteristics demonstrated the highest prediction performance (AUC 0.780) in Gefitinib with CCL23-mut. Conclusion: This work extensively investigated the mRNA and mutation signatures associated with drug response in LUAD using a machine-learning approach and proposed a priority algorithm to predict drug response for different drugs.
Keywords: lung adenocarcinoma, drug resistance, machine learning, molecular features, personalized treatment
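The per-drug model-selection step — compare RF, ANN, and SVM by cross-validated AUC and keep the best — can be sketched as below. A scikit-learn illustration on synthetic data; the feature matrix and labels are placeholders for the GDSC-derived inputs.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy stand-in for one drug: rows are cell lines, columns are selected
# mRNA/mutation features, y is responder vs. non-responder.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=1)

candidates = {
    "RF": RandomForestClassifier(random_state=1),
    "SVM": SVC(probability=True, random_state=1),
    "ANN": MLPClassifier(max_iter=1000, random_state=1),
}
# Keep the method with the best cross-validated AUC for this drug.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```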
9. A Hybrid Machine Learning Approach for Improvised QoE in Video Services over 5G Wireless Networks
Authors: K. B. Ajeyprasaath, P. Vetrivelan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3195-3213 (19 pages).
Abstract: Video streaming applications have grown considerably in recent years. As a result, video streaming has become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, accurately determining QoE via MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic of QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a level-one hybrid XGBStackQoE model; the individual ML models at level one are trained using the entire training data set. The level-two hybrid XGBStackQoE model is then fitted using the outputs (meta-features) of the level-one ML models. The proposed model outperformed conventional models, with an accuracy improvement of 4 to 5 percent over current traditional models. The proposed framework could significantly improve video QoE accuracy.
Keywords: hybrid XGBStackQoE model, machine learning, MOS, performance metrics, QoE, 5G video services
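The two-level layering (stacking) idea can be sketched with scikit-learn's StackingClassifier. A minimal illustration on synthetic data; the base learners and features are placeholders, and the paper's meta-learner is XGBoost (xgboost.XGBClassifier could replace the final_estimator here).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for network KPIs -> QoE class.
X, y = make_classification(n_samples=1500, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: diverse base learners; level 2: a booster fit on their
# cross-validated probability outputs (the meta-features).
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=GradientBoostingClassifier(random_state=0),
    stack_method="predict_proba", cv=5,
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```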
10. Personalized assessment and training of neurosurgical skills in virtual reality: An interpretable machine learning approach
Authors: Fei LI, Zhibao QIN, Kai QIAN, Shaojun LIANG, Chengli LI, Yonghang TAI. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 1, pp. 17-29 (13 pages).
Abstract: Background: Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their interpretability limits the personalization of training for individual participants. Methods: Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level, and the support vector machine performed best, with an accuracy of 92.41% and an area under the curve of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant. Results: This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical performance. The use of Shapley values enables targeted training by identifying deficiencies in individual skills. Conclusions: This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlights the potential of explanatory models in training external skills.
Keywords: machine learning, neurosurgery, Shapley values, virtual reality, human-robot interaction
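The Shapley-value interpretation step can be sketched with the shap library's model-agnostic KernelExplainer. An illustrative sketch on synthetic data; the features stand in for the tool-motion metrics, and the exact output layout varies with the shap version.

```python
import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy stand-in for tool-motion metrics -> skill level.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = SVC(probability=True, random_state=0).fit(X, y)

# Model-agnostic Shapley estimates; a small background set keeps it tractable.
explainer = shap.KernelExplainer(model.predict_proba, X[:50])
shap_values = explainer.shap_values(X[:5])
# shap_values holds per-class, per-feature contributions for each trainee,
# which is what enables pinpointing individual skill deficiencies.
print(type(shap_values))
```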
11. Design of a Multi-Stage Ensemble Model for Thyroid Prediction Using Learning Approaches
Authors: M. L. Maruthi Prasad, R. Santhosh. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 1-13 (13 pages).
Abstract: This research aims to model an efficient thyroid prediction approach, addressing a significant health problem faced by the women community. The major research problem is the lack of an automated model for early prediction, and some existing models fail to provide good prediction accuracy. Here, a novel clinical decision support system is framed to make the proper decision in complex cases. Multiple stages are followed in the proposed framework, each playing a substantial role in thyroid prediction. These steps include (i) data acquisition, (ii) outlier prediction, and (iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and the other classifier models helps bridge the gap encountered in any single classifier model. Various classifiers are merged to handle the issues identified in others and to enhance the prediction rate. The proposed model provides superior outcomes and a good-quality prediction rate. The simulation is performed in the MATLAB 2020a environment. The model achieves a prediction accuracy of 97.28%, outperforming other models and establishing a better trade-off than various existing approaches.
Keywords: thyroid, machine learning, pre-processing, classification, prediction rate
12. Solving Neumann Boundary Problem with Kernel-Regularized Learning Approach
Authors: Xuexue Ran, Baohuai Sheng. Journal of Applied Mathematics and Physics, 2024, Issue 4, pp. 1101-1125 (25 pages).
Abstract: We provide a kernel-regularized method that gives theoretical solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the viewpoint of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm achieves better uniform convergence as the number of samples increases. The research can be regarded as an application of kernel-regularized semi-supervised learning.
Keywords: Neumann boundary value, kernel-regularized approach, reproducing kernel Hilbert space, the unit ball, the unit sphere
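The kernel-regularized scheme underlying this kind of analysis is the standard Tikhonov-regularized least-squares problem in a reproducing kernel Hilbert space. A generic form under assumed notation (the paper's construction additionally couples the ball and sphere inner products):

```latex
f_{\mathbf{z},\lambda} \;=\; \arg\min_{f \in \mathcal{H}_K}\;
\frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^2 \;+\; \lambda\,\|f\|_{K}^{2}
```

By the representer theorem the minimizer has the finite expansion f(x) = Σᵢ cᵢ K(x, xᵢ), so the learning rates are governed by the kernel, the regularization parameter λ, and the sample size m.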
13. A Machine-Learning Approach for the Prediction of Fly-Ash Concrete Strength
Authors: Shanqing Shao, Aimin Gong, Ran Wang, Xiaoshuang Chen, Jing Xu, Fulai Wang, Feipeng Liu. Fluid Dynamics & Materials Processing (EI), 2023, Issue 12, pp. 3007-3019 (13 pages).
Abstract: The composite exciter and the CaO-to-Na₂SO₄ dosing ratios are known to have a strong impact on the mechanical strength of fly-ash concrete. In the present study, a hybrid approach relying on experiments and a machine-learning technique has been used to tackle this problem. The tests have shown that the optimal admixture of CaO and Na₂SO₄ alone is 8%. The best 3-day mechanical strength of fly-ash concrete is achieved at 8% of the compound activator; if the 28-day mechanical strength is considered, the best performance is obtained at 4% of the compound activator. Moreover, the 3-day mechanical strength of fly-ash concrete is better when the dosing ratio of CaO to Na₂SO₄ in the compound activator is 1:1; the maximum strength of fly-ash concrete at 28 days can be achieved for a 1:1 ratio of CaO to Na₂SO₄ with a 4% compound activator, in which case the compressive and flexural strengths are 260 MPa and 53.6 MPa, respectively. The mechanical strength of fly-ash concrete at 28 days can be improved by a 4:1 ratio of CaO to Na₂SO₄ with 8% and 12% compound activators. It is shown that the predictions based on the aforementioned machine-learning approach are accurate and reliable.
Keywords: fly ash, compound activator, machine-learning approach
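Strength prediction of this kind is a small tabular-regression task. A minimal scikit-learn sketch on fabricated data; the feature columns, value ranges, and the linear-plus-noise target are assumptions standing in for the paper's experimental program.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Toy stand-in: columns = [activator dosage %, CaO:Na2SO4 ratio, curing age (days)],
# target = compressive strength; real values would come from the test program.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0.5, 3], [12, 4, 28], size=(120, 3))
y = 20 + 2.5 * X[:, 0] - 1.2 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 2, 120)

reg = RandomForestRegressor(random_state=0)
print("CV R^2:", cross_val_score(reg, X, y, cv=5, scoring="r2").mean())
```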
14. A Deep Learning Approach to Mesh Segmentation (Cited by: 1)
Authors: Abubakar Sulaiman Gezawa, Qicong Wang, Haruna Chiroma, Yunqi Lei. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 5, pp. 1745-1763 (19 pages).
Abstract: In the shape analysis community, decomposing a 3D shape into meaningful parts has become a topic of interest. 3D model segmentation is widely used in tasks such as shape deformation, shape partial matching, skeleton extraction, shape correspondence, shape annotation, and texture mapping. Numerous approaches have attempted to provide better segmentation solutions; however, the majority of previous techniques used handcrafted features, which are usually focused on a particular attribute of 3D objects and are therefore difficult to generalize. In this paper, we propose a three-stage approach that uses a multi-view recurrent neural network to automatically segment a 3D shape into visually meaningful sub-meshes. The first stage involves normalizing and scaling a 3D model to fit within the unit sphere and rendering the object from different views. Contrasting viewpoints, however, might not be associated, and a 3D region could map to totally distinct outcomes depending on the viewpoint. To address this, we run each view through a shared-weight CNN and a Bolster block to create a probability boundary map. The Bolster block models the area relationships between different views, which helps to improve and refine the data. In stage two, the feature maps generated in the previous step are correlated using a recurrent neural network to obtain compatible fine-detail responses for each view. Finally, a fully connected layer is used to return coherent edges, which are then back-projected onto the 3D object to produce the final segmentation. Experiments on the Princeton Segmentation Benchmark dataset show that our proposed method is effective for mesh segmentation tasks.
Keywords: deep learning, mesh segmentation, 3D shape, shape features
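The multi-view pipeline — per-view CNN features, a recurrent pass to correlate views, then per-view boundary maps — can be sketched schematically in PyTorch. Sizes are illustrative; the paper's Bolster block and the back-projection onto the mesh are omitted here.

```python
import torch
import torch.nn as nn

class MultiViewSegNet(nn.Module):
    """Schematic of the three-stage idea: per-view CNN features, a GRU pass
    across views for consistency, then per-view boundary-probability maps."""
    def __init__(self, feat: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.rnn = nn.GRU(feat, feat, batch_first=True)   # correlates the views
        self.head = nn.Conv2d(feat, 1, 1)                 # boundary probability

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        b, v, c, h, w = views.shape
        f = self.cnn(views.flatten(0, 1))                 # (b*v, feat, h, w)
        seq = f.mean(dim=(2, 3)).view(b, v, -1)           # pool to per-view vectors
        ctx, _ = self.rnn(seq)                            # view-to-view context
        f = f + ctx.reshape(b * v, -1, 1, 1)              # broadcast context back
        return torch.sigmoid(self.head(f)).view(b, v, h, w)

maps = MultiViewSegNet()(torch.randn(2, 6, 3, 64, 64))   # 2 shapes x 6 views
print(maps.shape)  # torch.Size([2, 6, 64, 64])
```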
15. An Improved Ensemble Learning Approach for Heart Disease Prediction Using Boosting Algorithms
Authors: Shahid Mohammad Ganie, Pijush Kanti Dutta Pramanik, Majid Bashir Malik, Anand Nayyar, Kyung Sup Kwak. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 3993-4006 (14 pages).
Abstract: Cardiovascular disease is among the top five fatal diseases that affect lives worldwide. Therefore, its early prediction and detection are crucial, allowing one to take proper and necessary measures at earlier stages. Machine learning (ML) techniques are used to assist healthcare providers in better diagnosing heart disease. This study employed three boosting algorithms, namely gradient boost, XGBoost, and AdaBoost, to predict heart disease. The dataset contained heart disease-related clinical features and was sourced from the publicly available UCI ML repository. Exploratory data analysis was performed to find the characteristics of the data samples in terms of descriptive and inferential statistics. Specifically, it was carried out to identify and replace outliers using the interquartile range and to detect and replace missing values using the imputation method. Results were recorded before and after the data preprocessing techniques were applied. Out of all the algorithms, gradient boosting achieved the highest accuracy rate of 92.20% for the proposed model. The proposed model yielded better results with gradient boosting in terms of precision, recall, and F1-score. It attained better prediction performance than existing works and can be used for other diseases that share common features, using transfer learning.
Keywords: heart disease prediction, machine learning classifiers, ensemble approach, XGBoost, AdaBoost, gradient boost
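The preprocessing-plus-boosting recipe can be sketched compactly: replace interquartile-range outliers, then compare boosters. A scikit-learn/pandas illustration on fabricated columns; the table stands in for the UCI heart-disease data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the UCI heart-disease table.
rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.normal(54, 9, 300),
                   "chol": rng.normal(240, 50, 300),
                   "thalach": rng.normal(150, 20, 300)})
y = rng.integers(0, 2, 300)

# Replace IQR outliers with the column median (mirrors the paper's preprocessing).
for col in df.columns:
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
    df.loc[mask, col] = df[col].median()

for clf in (GradientBoostingClassifier(random_state=0),
            AdaBoostClassifier(random_state=0)):
    print(type(clf).__name__, cross_val_score(clf, df, y, cv=5).mean())
```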
16. Explainable Heart Disease Prediction Using Ensemble-Quantum Machine Learning Approach
Authors: Ghada Abdulsalam, Souham Meshoul, Hadil Shaiba. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 4, pp. 761-779 (19 pages).
Abstract: Nowadays, quantum machine learning is attracting great interest in a wide range of fields due to its potential superior performance and capabilities. The massive increase in the computational capacity and speed of quantum computers can lead to a quantum leap in the healthcare field. Heart disease seriously threatens human health since it is the leading cause of death worldwide. Quantum machine learning methods can provide effective solutions to predict heart disease and aid in early diagnosis. In this study, an ensemble machine learning model based on quantum machine learning classifiers is proposed to predict the risk of heart disease. The proposed model is a bagging ensemble learning model with a quantum support vector classifier as the base classifier. Furthermore, to make the model's outcomes more explainable, the importance of every single feature in the prediction is computed and visualized using the SHapley Additive exPlanations (SHAP) framework. In the experimental study, other stand-alone quantum classifiers, namely the Quantum Support Vector Classifier (QSVC), Quantum Neural Network (QNN), and Variational Quantum Classifier (VQC), are applied and compared with classical machine learning classifiers such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results on the Cleveland dataset reveal the superiority of QSVC compared to the others, which explains its use in the proposed bagging model. The Bagging-QSVC model outperforms all the aforementioned classifiers with an accuracy of 90.16% while showing great competitiveness compared to some state-of-the-art models using the same dataset. The results of the study indicate that quantum machine learning classifiers perform better than classical machine learning classifiers in predicting heart disease. In addition, the study reveals that the bagging ensemble learning technique is effective in improving the prediction accuracy of quantum classifiers.
Keywords: machine learning, ensemble learning, quantum machine learning, explainable machine learning, heart disease prediction
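The bagging structure is independent of whether the base learner is quantum or classical. A scikit-learn sketch with a classical SVC standing in for the quantum-kernel base learner (a QSVC from qiskit-machine-learning could, in principle, be slotted in instead); data and parameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Structure of the Bagging-QSVC idea with a classical stand-in base learner.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 10 SVCs, each trained on a bootstrap resample, votes averaged.
# (On scikit-learn < 1.2 the keyword is base_estimator rather than estimator.)
bag = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=10,
                        random_state=0)
bag.fit(X_tr, y_tr)
print("bagged accuracy:", bag.score(X_te, y_te))
```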
17. Predicting Lumbar Spondylolisthesis: A Hybrid Deep Learning Approach
Authors: Deepika Saravagi, Shweta Agrawal, Manisha Saravagi, Sanjiv K. Jain, Bhisham Sharma, Abolfazl Mehbodniya, Subrata Chowdhury, Julian L. Webber. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 2133-2151 (19 pages).
Abstract: Spondylolisthesis is a chronic disease, and a timely diagnosis of it may help in avoiding surgery. Disease identification in X-ray radiographs is very challenging. Strengthening the feature extraction tool in VGG16 has improved the classification rate, but the fully connected layers of VGG16 are not efficient at capturing the positional structure of an object in images. A capsule network (CapsNet) works with capsules (neuron clusters) rather than single neurons to grasp the properties of the provided image and match the pattern. In this study, an integrated model that combines VGG16 and CapsNet (S-VCNet) is proposed. In the model, VGG16 is used as a feature extractor; after feature extraction, the output is fed to CapsNet for disease identification. A private dataset is used that contains 466 X-ray radiographs, including 186 images displaying a spine with spondylolisthesis and 280 images depicting a normal spine. The suggested model is the first step towards developing a web-based radiological diagnosis tool that can be utilized in outpatient clinics where there are not enough qualified medical professionals. Experimental results demonstrate that the developed model outperformed the other models used for lumbar spondylolisthesis diagnosis, with 98% accuracy. After the performance check, the model was successfully deployed on the Gradio web app platform to produce the outcome in less than 20 s.
Keywords: Gradio, lumbar spondylolisthesis, transfer learning, VGG16, machine learning, deep learning
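The VGG16-as-feature-extractor half of this design is straightforward to sketch with torchvision. For brevity, a small linear head stands in for the paper's CapsNet stage (a full capsule head with dynamic routing is considerably more involved); input size and head width are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 convolutional stack as a frozen feature extractor; pass
# weights=models.VGG16_Weights.DEFAULT to load pretrained ImageNet weights.
backbone = models.vgg16(weights=None).features
for p in backbone.parameters():
    p.requires_grad = False

# Placeholder head standing in for the CapsNet classifier.
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(512, 2))  # spondylolisthesis vs. normal

x = torch.randn(4, 3, 224, 224)          # batch of preprocessed radiographs
logits = head(backbone(x))
print(logits.shape)  # torch.Size([4, 2])
```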
18. A deep-reinforcement learning approach for optimizing homogeneous droplet routing in digital microfluidic biochips
Authors: Basudev Saha, Bidyut Das, Mukta Majumder. Nanotechnology and Precision Engineering (EI, CAS, CSCD), 2023, Issue 2, pp. 1-12 (12 pages).
Abstract: Over the past two decades, digital microfluidic biochips have been in much demand for safety-critical and biomedical applications and are increasingly important in point-of-care analysis, drug discovery, and immunoassays, among other areas. However, for complex bioassays, finding routes for the transportation of droplets in an electrowetting-on-dielectric digital biochip while maintaining their discreteness is a challenging task. In this study, we propose a deep reinforcement learning-based droplet routing technique for digital microfluidic biochips. The technique is implemented on a distributed architecture to optimize the possible paths for predefined source–target pairs of droplets. The actors of the technique calculate the possible routes of the source–target pairs and store the experience in a replay buffer, and the learner fetches the experiences and updates the routing paths. The proposed algorithm was applied to benchmark suites I and III as two different test benches, and it achieved significant improvements over state-of-the-art techniques.
Keywords: digital microfluidics, biochip, droplet routing, fluidic constraints, deep learning, reinforcement learning
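The essence of learning a droplet route can be shown with tabular Q-learning on a toy electrode grid: one droplet, one source–target pair, reward for reaching the target quickly. This is a deliberately simplified stand-in for the paper's deep, distributed actor–learner scheme with fluidic-constraint checks between droplets.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                    # N x N electrode array
source, target = (0, 0), (5, 5)
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((N, N, 4))                  # Q-value per cell and move

for _ in range(2000):                    # training episodes
    s = source
    for _ in range(50):
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < 0.1 else int(Q[s].argmax())
        nxt = (min(max(s[0] + moves[a][0], 0), N - 1),
               min(max(s[1] + moves[a][1], 0), N - 1))
        r = 10.0 if nxt == target else -0.1      # encourage short routes
        Q[s][a] += 0.5 * (r + 0.95 * Q[nxt].max() - Q[s][a])
        s = nxt
        if s == target:
            break

# Greedy rollout of the learned route.
s, path = source, [source]
while s != target and len(path) < 20:
    a = int(Q[s].argmax())
    s = (min(max(s[0] + moves[a][0], 0), N - 1),
         min(max(s[1] + moves[a][1], 0), N - 1))
    path.append(s)
print(path)
```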
19. An Optimized Deep Learning Approach for Improving Airline Services
Author: Shimaa Ouf. Computers, Materials & Continua (SCIE, EI), 2023, Issue 4, pp. 1213-1233 (21 pages).
Abstract: The aviation industry is one of the most competitive markets. The most common approach for airline service providers is to improve passenger satisfaction. Passenger satisfaction in the aviation industry occurs when passengers' expectations are met during flights. Airline service quality is critical in attracting new passengers and retaining existing ones, and it is crucial to identify passengers' pain points and enhance their satisfaction with the services offered. Airlines have used a variety of techniques to improve service quality, applying data analysis approaches to passenger pain-point data. These solutions have focused simply on surveys; consequently, deep learning approaches have received insufficient attention. In this study, deep neural networks with the adaptive moment estimation (Adam) optimization algorithm were applied to enhance classification performance. In previous studies, the quality of the dataset has been ignored. The proposed approach was applied to the airline passenger satisfaction dataset from the Kaggle repository. It was validated by applying artificial neural networks (ANNs), random forests, and support vector machine techniques to the same dataset, and it was compared with other research papers that used the same dataset and addressed a similar problem. The experimental results showed that the proposed approach outperformed previous studies, achieving an accuracy of 99.3%.
Keywords: Adam optimizer, data pre-processing, airlines, machine learning, deep learning, optimization techniques
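A DNN trained with the Adam optimizer, as described here, takes only a few lines. A minimal PyTorch sketch; the feature count, layer sizes, and random tensors are placeholders for the preprocessed Kaggle airline-satisfaction table.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 22)                 # stand-in for 22 passenger features
y = torch.randint(0, 2, (512,))          # satisfied vs. not satisfied

model = nn.Sequential(nn.Linear(22, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive moment estimation
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```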
20. Survey on Deep Learning Approaches for Detection of Email Security Threats
Authors: Mozamel M. Saeed, Zaher Al Aghbari. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 325-348 (24 pages).
Abstract: Emailing is among the cheapest and most easily accessible platforms, and it covers nearly every activity of the present century: banking, personal login databases, academic information, invitations, marketing, advertisement, social engineering, model creation on cyber-based technologies, and more. The uncontrolled development of and easy access to the internet are the reasons for the increased insecurity of email communication. Therefore, this review paper aims to investigate deep learning approaches for detecting threats associated with email security. This study compiles the literature on deep learning methodologies applicable to providing safety in the field of email cyber security in different organizations. Relevant data were extracted from different research repositories, and the paper discusses various solutions for handling these threats. Different challenges and issues in email security threats, including social engineering, malware, spam, and phishing, are also investigated in the existing solutions to identify the core current problems and set the road for future studies. The review analysis showed that communication media are the common platform for attackers to conduct fraudulent activities via spoofed emails and fake websites, and this research consolidates the merits and demerits of adopting deep learning approaches to email security threats in terms of the models and technologies used. The study highlights the contrasts among deep learning approaches in detecting email security threats. This review set criteria to include studies that deal with at least one of six machine learning models in cyber security.
Keywords: attackers, deep learning methods, e-mail security threats, machine learning, phishing