In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This glaring research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI using a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification, utilizing deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes Model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
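The abstract's ensemble step amalgamates models "considering their respective weights." The paper's actual weighting scheme is not given, but the general idea — weighted soft voting over per-model class probabilities — can be sketched as follows (model outputs and weights here are invented for illustration):

```python
def weighted_ensemble(prob_vectors, weights):
    """Combine per-model class-probability vectors using normalized weights."""
    total = sum(weights)
    n_classes = len(prob_vectors[0])
    combined = [0.0] * n_classes
    for probs, w in zip(prob_vectors, weights):
        for c, p in enumerate(probs):
            combined[c] += (w / total) * p
    return combined

def predict(prob_vectors, weights):
    """Pick the class with the highest combined probability."""
    combined = weighted_ensemble(prob_vectors, weights)
    return max(range(len(combined)), key=combined.__getitem__)

# Three hypothetical models voting on a 3-class arrhythmia label:
models = [[0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.2, 0.2, 0.6]]
label = predict(models, weights=[0.5, 0.3, 0.2])  # -> 0
```

A better-performing model gets a larger weight, so its probabilities dominate the combined vote; the output remains a valid probability vector.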
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, these models can be optimized for real-time performance, learn from large amounts of data, and are scalable to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved a score of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNN as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
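The discriminative feature extraction the abstract refers to rests on the convolution operation at the core of any CNN. A minimal "valid" (no-padding) 2-D convolution — not the HVCNNM architecture itself, which the abstract does not detail — looks like this:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (no padding): slide the kernel over the image
    and sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 2x2 diagonal kernel over a 3x3 "image" yields a 2x2 feature map:
feature_map = conv2d_valid([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])
# -> [[6, 8], [12, 14]]
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets a CNN extract gesture features from raw hand images.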
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
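The classification task the survey covers treats each pixel as a vector of per-band reflectances. A deliberately simple stand-in for the surveyed classifiers (SVMs, deep networks) is a nearest-centroid rule over spectral vectors; the band values and crop classes below are invented for illustration:

```python
import math

def centroid(vectors):
    """Mean spectral vector of a class's training pixels."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def nearest_centroid_label(pixel, centroids):
    """Assign the class whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lbl: dist(pixel, centroids[lbl]))

# Hypothetical 4-band spectra for two crop classes:
train = {
    "wheat": [[0.8, 0.6, 0.3, 0.2], [0.7, 0.5, 0.4, 0.2]],
    "maize": [[0.2, 0.3, 0.7, 0.8], [0.3, 0.2, 0.6, 0.9]],
}
cents = {lbl: centroid(vecs) for lbl, vecs in train.items()}
label = nearest_centroid_label([0.75, 0.55, 0.35, 0.2], cents)  # -> "wheat"
```

Real hyperspectral pipelines add band selection or dimensionality reduction first, since hundreds of correlated bands make raw distances unreliable.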
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles brings challenges for vehicles to independently perform avatar migration decisions depending on current and future vehicle status. To address these challenges, in this paper, we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome slow convergence resulting from the curse of dimensionality and non-stationary issues caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and effectively reduces approximately 20% of the latency of avatar task execution in UAV-assisted vehicular Metaverses.
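The decision problem here — in each vehicle state, choose where to migrate the avatar task to minimize latency — is a sequential RL problem. MAPPO itself is too large to sketch, but the underlying value-update idea can be shown with one tabular Q-learning step (states, actions, and rewards below are invented; this is a stand-in for, not a reproduction of, the paper's MADRL method):

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# States: coarse vehicle position zones; actions: migration targets.
q = {"zone0": {"RSU": 0.0, "UAV": 0.0},
     "zone1": {"RSU": 0.0, "UAV": 0.0}}

# Migrating to the RSU in zone0 gave low latency (reward 1.0),
# and the vehicle then moved into zone1:
q_update(q, "zone0", "RSU", reward=1.0, next_state="zone1")
```

After the update, Q(zone0, RSU) = 0.5, so the agent now prefers RSU migration in zone0; policy-gradient methods like MAPPO replace this table with neural networks and learn a stochastic policy directly.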
Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle) based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
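The magnetic anomalies the abstract describes are cells whose field strength deviates strongly from the surrounding background. A toy pre-screening step (not MAGICS itself, whose deep-learning classifier is not detailed here) flags cells more than a few standard deviations from the grid mean:

```python
def anomaly_cells(field, n_sigma=2.0):
    """Flag grid cells whose field strength deviates from the grid mean
    by more than n_sigma standard deviations."""
    values = [v for row in field for v in row]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    return [(i, j)
            for i, row in enumerate(field)
            for j, v in enumerate(row)
            if abs(v - mean) > n_sigma * std]

# Invented field-strength grid with one strong local anomaly at (1, 1):
grid = [[50, 51, 49],
        [50, 95, 50],
        [49, 50, 51]]
hits = anomaly_cells(grid)  # -> [(1, 1)]
```

Running such a filter on the edge server keeps raw magnetometry traces local and sends only candidate coordinates onward, which is the latency and privacy argument the abstract makes.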
A new approach for flexoelectric material shape optimization is proposed in this study. In this work, a proxy model based on an artificial neural network (ANN) is used to solve the parameter optimization and shape optimization problems. To improve the fitting ability of the neural network, we use the idea of pre-training to determine the structure of the neural network and combine different optimizers for training. The isogeometric analysis-finite element method (IGA-FEM) is used to discretize the flexoelectric theoretical formulas and obtain samples, which helps the ANN to build a proxy model from the model shape to the target value. The effectiveness of the proposed method is verified through two numerical examples of parameter optimization and one numerical example of shape optimization.
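The proxy-model idea is: fit a cheap surrogate to expensive solver samples, then optimize on the surrogate instead of re-running the solver. A deliberately minimal sketch uses a least-squares line in place of the ANN and a grid search in place of the optimizer (the sample data stands in for IGA-FEM output and is invented):

```python
def fit_line(xs, ys):
    """Least-squares fit y ~ a*x + b; a one-parameter stand-in for the ANN proxy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Expensive-solver samples (here a known line y = 2x + 1):
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]
a, b = fit_line(xs, ys)

# Optimize cheaply on the surrogate instead of calling the solver again:
best_x = min((x * 0.1 for x in range(31)), key=lambda x: a * x + b)  # -> 0.0
```

The real method replaces the line with a pre-trained ANN and the 1-D grid with a shape-parameter search, but the two-phase structure (sample, fit, optimize on the fit) is the same.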
In this paper, the issues of stochastic stability analysis and fault estimation are investigated for a class of continuous-time Markov jump piecewise-affine (PWA) systems against actuator and sensor faults. Firstly, a novel mode-dependent PWA iterative learning observer with current feedback is designed to estimate the system states and faults simultaneously; it contains both the previous iteration information and the current feedback mechanism. The auxiliary feedback channel optimizes the response speed of the observer, so the estimation error converges to zero rapidly. Then, sufficient conditions for stochastic stability with guaranteed performance are demonstrated for the estimation error system, and the equivalence relations between the system information and the estimated information can be established via an iterative accumulating representation. Finally, two illustrative examples containing a class of tunnel diode circuit systems are presented to fully demonstrate the effectiveness and superiority of the proposed iterative learning observer with current feedback.
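The combination of "previous iteration information" and a "current feedback mechanism" can be sketched in a generic iterative-learning form (notation and gains below are ours, for illustration only; the paper's mode-dependent PWA observer is more elaborate):

```latex
% Fault estimate at iteration k+1: the update uses the previous iteration's
% output error e_k (learning term) plus the current iteration's error e_{k+1}
% (the auxiliary current-feedback channel), with illustrative gains K_1, K_2.
\hat{f}_{k+1}(t) = \hat{f}_k(t) + K_1\, e_k(t) + K_2\, e_{k+1}(t),
\qquad e_k(t) = y_k(t) - \hat{y}_k(t)
```

The $K_2$ term is what distinguishes this structure from a classical iterative learning observer, and it is the channel the abstract credits with the faster error convergence.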
Lung cancer is the most prevalent cancer diagnosis and the leading cause of cancer death worldwide. Therapeutic failure in lung adenocarcinoma (LUAD) is heavily influenced by drug resistance. This challenge stems from the diverse cell populations within the tumor, each having unique genetic, epigenetic, and phenotypic profiles. Such variations lead to varied therapeutic responses, thereby contributing to tumor relapse and disease progression. Methods: The Genomics of Drug Sensitivity in Cancer (GDSC) database was used in this investigation to obtain the mRNA expression dataset, genomic mutation profile, and drug sensitivity information of NSCLC. Machine learning (ML) methods, including Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM), were used to predict the response status of each compound based on the mRNA and mutation characteristics determined using statistical methods. The most suitable method for each drug was proposed by comparing the prediction accuracy of different ML methods, and the selected mRNA and mutation characteristics were identified as molecular features for the drug-responsive cancer subtype. Finally, the prognostic influence of these molecular features on the mutational subtype of LUAD was evaluated in publicly available datasets. Results: Our analyses yielded 1,564 gene features and 45 mutational features for 46 drugs. Applying the ML approach to predict the drug response for each medication revealed an upstanding performance for SVM in predicting Afuresertib drug response (area under the curve [AUC] 0.875) using CIT, GAS2L3, STAG3L3, ATP2B4-mut, and IL15RA-mut as molecular features. Furthermore, the ANN algorithm using 9 mRNA characteristics demonstrated the highest prediction performance (AUC 0.780) in Gefitinib with CCL23-mut. Conclusion: This work extensively investigated the mRNA and mutation signatures associated with drug response in LUAD using a machine-learning approach and proposed a priority algorithm to predict drug response for different drugs.
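The AUC values the study reports (0.875, 0.780) have a simple probabilistic reading: the chance that a randomly chosen responder is scored above a randomly chosen non-responder. That definition can be computed directly (toy labels and scores below, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a positive outscores a negative
    (ties count 1/2), per the Wilcoxon-Mann-Whitney interpretation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two responders (label 1) and two non-responders (label 0):
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

Because AUC is threshold-free, it is the natural metric for comparing SVM, RF, and ANN drug-response scores across compounds with very different responder rates.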
Video streaming applications have grown considerably in recent years. As a result, video streaming has become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for quantifying the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, accurately finding QoE via MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic of QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a level-one Hybrid XGBStackQoE model. Individual ML models at level one are trained using the entire training data set. The level-two Hybrid XGBStackQoE model is fitted using the outputs (meta-features) of the level-one ML models. The proposed model outperformed the conventional models, with an accuracy improvement of 4 to 5 percent over the current traditional models. The proposed framework could significantly improve video QoE accuracy.
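The two-level layering the abstract describes is classic stacking: level-one model outputs become meta-features that a level-two combiner is fitted on. A minimal sketch (the toy rule-based "models", feature names, and weights are invented, standing in for the trained ML models and the XGBoost combiner):

```python
def level_one_meta_features(x, base_models):
    """Level one: each base model's prediction becomes one meta-feature."""
    return [m(x) for m in base_models]

def level_two(meta, weights, threshold=0.5):
    """Level two: a weighted combiner fitted on the meta-features."""
    score = sum(w * f for w, f in zip(weights, meta)) / sum(weights)
    return 1 if score >= threshold else 0

# Hypothetical QoE indicators as level-one models:
base = [lambda x: 1 if x["bitrate"] > 2.0 else 0,
        lambda x: 1 if x["stalls"] == 0 else 0,
        lambda x: 1 if x["latency"] < 100 else 0]

sample = {"bitrate": 3.5, "stalls": 1, "latency": 80}
meta = level_one_meta_features(sample, base)        # -> [1, 0, 1]
good_qoe = level_two(meta, weights=[2, 1, 1])       # -> 1
```

In the real framework the level-two model is itself learned (on held-out meta-features) rather than hand-weighted, which is what lets stacking correct the systematic errors of individual base models.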
Background: Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their interpretability limits the personalization of the training for individual participants. Methods: Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level, and the support vector machine performed the best, with an accuracy of 92.41% and an Area Under Curve value of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant. Results: This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical performances. The use of Shapley values enables targeted training by identifying deficiencies in individual skills. Conclusions: This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.
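A Shapley value attributes a model's output to each input feature by averaging that feature's marginal contribution over all feature orderings. For few features this can be computed exactly (the toy two-feature "skill model" below is invented; real SHAP tooling approximates this for large models):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings, switching features from baseline to x one at a time."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += (cur - prev) / len(perms)
            prev = cur
    return phi

# Additive toy model: Shapley recovers each term's contribution exactly.
f = lambda v: 2.0 * v[0] + 3.0 * v[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])  # -> [2.0, 3.0]
```

This per-participant attribution is what makes the targeted training possible: a trainee whose prediction is dragged down mainly by one tool-usage feature can be given drills on exactly that skill.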
This research concentrates on modeling an efficient thyroid prediction approach, which addresses a baseline for significant problems faced by the women community. The major research problem is the lack of an automated model to attain earlier prediction. Some existing models fail to give better prediction accuracy. Here, a novel clinical decision support system is framed to make the proper decision during a time of complexity. Multiple stages are followed in the proposed framework, which play a substantial role in thyroid prediction. These steps include i) data acquisition, ii) outlier prediction, and iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and other classifier models helps bridge the gap encountered in one single classifier model. Various classifiers are merged to handle the issues identified in others and intend to enhance the prediction rate. The proposed model provides superior outcomes and gives a good-quality prediction rate. The simulation is done in the MATLAB 2020a environment and establishes a better trade-off than various existing approaches. The model gives a prediction accuracy of 97.28% compared to other models and shows a better trade-off than others.
We provide a kernel-regularized method to give theoretical solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the view of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm has better uniform convergence according to the number of samples. The research can be regarded as an application of kernel-regularized semi-supervised learning.
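The construction builds on the standard kernel-regularized least-squares template, shown here in its generic form (general RKHS notation only; the paper's inner product on the ball and sphere, and its Neumann-problem specifics, go beyond this):

```latex
% Kernel-regularized least squares over an RKHS H_K with kernel K:
% fit the m samples while penalizing the RKHS norm, with trade-off lambda.
f_{\mathbf{z},\lambda}
  = \arg\min_{f \in \mathcal{H}_K}
    \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i) - y_i\bigr)^2
    + \lambda \,\lVert f \rVert_K^2
```

Learning-rate bounds of the kind the abstract mentions quantify how fast $f_{\mathbf{z},\lambda}$ approaches the target function as $m$ grows, for a suitable schedule $\lambda = \lambda(m)$.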
The composite exciter and the CaO to Na_(2)SO_(4) dosing ratios are known to have a strong impact on the mechanical strength of fly-ash concrete. In the present study a hybrid approach relying on experiments and a machine-learning technique has been used to tackle this problem. The tests have shown that the optimal admixture of CaO and Na_(2)SO_(4) alone is 8%. The best 3-day mechanical strength of fly-ash concrete is achieved at 8% of the compound activator; if the 28-day mechanical strength is considered, then the best performances are obtained at 4% of the compound activator. Moreover, the 3-day mechanical strength of fly-ash concrete is better when the dosing ratio of CaO to Na_(2)SO_(4) in the compound activator is 1:1; the maximum strength of fly-ash concrete at 28 days can be achieved for a 1:1 ratio of CaO to Na_(2)SO_(4) by considering a 4% compound activator. In this case, the compressive and flexural strengths are 260 MPa and 53.6 MPa, respectively; the mechanical strength of fly-ash concrete at 28 days can be improved by a 4:1 ratio of CaO to Na_(2)SO_(4) by considering 8% and 12% compound excitants. It is shown that the predictions based on the aforementioned machine-learning approach are accurate and reliable.
In the shape analysis community, decomposing a 3D shape into meaningful parts has become a topic of interest. 3D model segmentation is largely used in tasks such as shape deformation, shape partial matching, skeleton extraction, shape correspondence, shape annotation and texture mapping. Numerous approaches have attempted to provide better segmentation solutions; however, the majority of the previous techniques used handcrafted features, which are usually focused on a particular attribute of 3D objects and so are difficult to generalize. In this paper, we propose a three-stage approach that uses a multi-view recurrent neural network to automatically segment a 3D shape into visually meaningful sub-meshes. The first stage involves normalizing and scaling a 3D model to fit within the unit sphere and rendering the object into different views. Contrasting viewpoints, on the other hand, might not have been associated, and a 3D region could correlate into totally distinct outcomes depending on the viewpoint. To address this, we ran each view through a shared-weights CNN and Bolster block in order to create a probability boundary map. The Bolster block simulates the area relationships between different views, which helps to improve and refine the data. In stage two, the feature maps generated in the previous step are correlated using a recurrent neural network to obtain compatible fine-detail responses for each view. Finally, a fully connected layer is used to return coherent edges, which are then back-projected to 3D objects to produce the final segmentation. Experiments on the Princeton Segmentation Benchmark dataset show that our proposed method is effective for mesh segmentation tasks.
Cardiovascular disease is among the top five fatal diseases that affect lives worldwide. Therefore, its early prediction and detection are crucial, allowing one to take proper and necessary measures at earlier stages. Machine learning (ML) techniques are used to assist healthcare providers in better diagnosing heart disease. This study employed three boosting algorithms, namely gradient boost, XGBoost, and AdaBoost, to predict heart disease. The dataset contained heart disease-related clinical features and was sourced from the publicly available UCI ML repository. Exploratory data analysis was performed to find the characteristics of data samples in terms of descriptive and inferential statistics. Specifically, it was carried out to identify and replace outliers using the interquartile range and to detect and replace the missing values using the imputation method. Results were recorded before and after the data preprocessing techniques were applied. Out of all the algorithms, gradient boosting achieved the highest accuracy rate of 92.20% for the proposed model. The proposed model yielded better results with gradient boosting in terms of precision, recall, and F1-score. It attained better prediction performance than the existing works and can be used for other diseases that share common features using transfer learning.
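The interquartile-range outlier step the abstract mentions can be sketched directly. Note that several quartile conventions exist; the sketch below uses the "median of each half" convention, and replaces flagged values with the column median (the replacement rule and data are illustrative, not confirmed by the abstract):

```python
def median(vals):
    """Median of a list (average of the two middle values for even length)."""
    s = sorted(vals)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def replace_outliers_iqr(data, k=1.5):
    """Replace values outside [Q1 - k*IQR, Q3 + k*IQR] with the median.
    Q1/Q3 are taken as medians of the lower/upper half (one common convention)."""
    s = sorted(data)
    half = len(s) // 2
    q1, q3 = median(s[:half]), median(s[-half:])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    med = median(data)
    return [v if lo <= v <= hi else med for v in data]

# 90 falls far outside the fences and is replaced by the median (12):
cleaned = replace_outliers_iqr([10, 12, 11, 13, 12, 90])
# -> [10, 12, 11, 13, 12, 12]
```

Recording metrics before and after such preprocessing, as the study does, shows how much of the boosting models' accuracy comes from the cleaning rather than the learner.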
Nowadays, quantum machine learning is attracting great interest in a wide range of fields due to its potential superior performance and capabilities. The massive increase in computational capacity and speed of quantum computers can lead to a quantum leap in the healthcare field. Heart disease seriously threatens human health since it is the leading cause of death worldwide. Quantum machine learning methods can propose effective solutions to predict heart disease and aid in early diagnosis. In this study, an ensemble machine learning model based on quantum machine learning classifiers is proposed to predict the risk of heart disease. The proposed model is a bagging ensemble learning model where a quantum support vector classifier was used as a base classifier. Furthermore, in order to make the model's outcomes more explainable, the importance of every single feature in the prediction is computed and visualized using the SHapley Additive exPlanations (SHAP) framework. In the experimental study, other stand-alone quantum classifiers, namely Quantum Support Vector Classifier (QSVC), Quantum Neural Network (QNN), and Variational Quantum Classifier (VQC), are applied and compared with classical machine learning classifiers such as Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results on the Cleveland dataset reveal the superiority of QSVC compared to the others, which explains its use in the proposed bagging model. The Bagging-QSVC model outperforms all the aforementioned classifiers with an accuracy of 90.16% while showing great competitiveness compared to some state-of-the-art models using the same dataset. The results of the study indicate that quantum machine learning classifiers perform better than classical machine learning classifiers in predicting heart disease. In addition, the study reveals that the bagging ensemble learning technique is effective in improving the prediction accuracy of quantum classifiers.
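Bagging itself is classifier-agnostic: train each base learner on a bootstrap resample and aggregate by vote. A minimal sketch with a 1-nearest-neighbour base learner standing in for the paper's QSVC (the data and learner are invented for illustration):

```python
import random

def majority(labels):
    """Most frequent label in a list of votes."""
    return max(set(labels), key=labels.count)

def bag_predict(train, point, n_estimators=5, seed=0):
    """Bagging: each base learner sees a bootstrap resample of the training
    set and casts one vote; the ensemble returns the majority label."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        boot = [rng.choice(train) for _ in train]          # bootstrap resample
        nearest = min(boot, key=lambda xy: abs(xy[0] - point))  # 1-NN vote
        votes.append(nearest[1])
    return majority(votes)

# (feature, label) pairs -- label 1 for "at risk":
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
pred = bag_predict(train, point=0.85)
```

Swapping the base learner for a quantum support vector classifier gives the Bagging-QSVC structure the abstract describes; the variance reduction from resampling is the same mechanism either way.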
Spondylolisthesis is a chronic disease, and a timely diagnosis of it may help in avoiding surgery. Disease identification in X-ray radiographs is very challenging. Strengthening the feature extraction tool in VGG16 has improved the classification rate. But the fully connected layers of VGG16 are not efficient at capturing the positional structure of an object in images. Capsule network (CapsNet) works with capsules (neuron clusters) rather than a single neuron to grasp the properties of the provided image to match the pattern. In this study, an integrated model that is a combination of VGG16 and CapsNet (S-VCNet) is proposed. In the model, VGG16 is used as a feature extractor. After feature extraction, the output is fed to CapsNet for disease identification. A private dataset is used that contains 466 X-ray radiographs, including 186 images displaying a spine with spondylolisthesis and 280 images depicting a normal spine. The suggested model is the first step towards developing a web-based radiological diagnosis tool that can be utilized in outpatient clinics where there are not enough qualified medical professionals. Experimental results demonstrate that the developed model outperformed the other models that are used for lumbar spondylolisthesis diagnosis with 98% accuracy. After the performance check, the model has been successfully deployed on the Gradio web app platform to produce the outcome in less than 20 s.
Over the past two decades, digital microfluidic biochips have been in much demand for safety-critical and biomedical applications and are increasingly important in point-of-care analysis, drug discovery, and immunoassays, among other areas. However, for complex bioassays, finding routes for the transportation of droplets in an electrowetting-on-dielectric digital biochip while maintaining their discreteness is a challenging task. In this study, we propose a deep reinforcement learning-based droplet routing technique for digital microfluidic biochips. The technique is implemented on a distributed architecture to optimize the possible paths for predefined source-target pairs of droplets. The actors of the technique calculate the possible routes of the source-target pairs and store the experience in a replay buffer, and the learner fetches the experiences and updates the routing paths. The proposed algorithm was applied to benchmark suites I and III as two different test benches, and it achieved significant improvements over state-of-the-art techniques.
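The actor-learner split hinges on the replay buffer: actors push routing transitions, the learner samples batches from it. A minimal fixed-capacity buffer (field names in the transitions are invented placeholders):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience store: actors push transitions,
    the learner samples random batches; old experiences are evicted."""
    def __init__(self, capacity, seed=0):
        self.buf = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        return self.rng.sample(list(self.buf), batch_size)

buf = ReplayBuffer(capacity=3)
for step in range(5):  # pushing 5 transitions evicts the 2 oldest
    buf.push(("state%d" % step, "move", -1.0))
batch = buf.sample(2)
```

Decoupling collection from learning this way is what lets several actors explore droplet routes in parallel while a single learner updates the shared routing policy.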
The aviation industry is one of the most competitive markets. The most common approach for airline service providers is to improve passenger satisfaction. Passenger satisfaction in the aviation industry occurs when passengers' expectations are met during flights. Airline service quality is critical in attracting new passengers and retaining existing ones. It is crucial to identify passengers' pain points and enhance their satisfaction with the services offered. Airlines have used a variety of techniques to improve service quality, employing data analysis approaches to analyze passenger pain point data. These solutions have focused simply on surveys; consequently, deep-learning approaches have received insufficient attention. In this study, deep neural networks with the adaptive moment estimation (Adam) optimization algorithm were applied to enhance classification performance. In previous studies, the quality of the dataset has been ignored. The proposed approach was applied to the airline passenger satisfaction dataset from the Kaggle repository. It was validated by applying artificial neural networks (ANNs), random forests, and support vector machine techniques to the same dataset. It was compared with other research papers that used the same dataset and had a similar problem. The experimental results showed that the proposed approach outperformed previous studies. It achieved an accuracy of 99.3%.
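The Adam optimizer the study relies on maintains exponential moving averages of the gradient and its square, bias-corrects both, and scales each step accordingly. One update step, applied to the toy objective f(w) = w² (the objective and hyperparameters are illustrative, not the study's network):

```python
def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter:
    bias-corrected first (m) and second (v) moment estimates scale the step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0:
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
# w is now close to the minimizer 0
```

The per-parameter adaptive step size is why Adam is a common default for training the kind of deep network the study uses on tabular satisfaction data.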
Emailing is among the cheapest and most easily accessible platforms,and covers every idea of the present century like banking,personal login database,academic information,invitation,marketing,advertisement,social engi...Emailing is among the cheapest and most easily accessible platforms,and covers every idea of the present century like banking,personal login database,academic information,invitation,marketing,advertisement,social engineering,model creation on cyber-based technologies,etc.The uncontrolled development and easy access to the internet are the reasons for the increased insecurity in email communication.Therefore,this review paper aims to investigate deep learning approaches for detecting the threats associated with e-mail security.This study compiles the literature related to the deep learning methodologies,which are applicable for providing safety in the field of cyber security of email in different organizations.Relevant data were extracted from different research depositories.The paper discusses various solutions for handling these threats.Different challenges and issues are also investigated for e-mail security threats including social engineering,malware,spam,and phishing in the existing solutions to identify the core current problem and set the road for future studies.The review analysis showed that communication media is the common platform for attackers to conduct fraudulent activities via spoofed e-mails and fake websites and this research has combined the merit and demerits of the deep learning approaches adaption in email security threat by the usage of models and technologies.The study highlighted the contrasts of deep learning approaches in detecting email security threats.This review study has set criteria to include studies that deal with at least one of the six machine models in cyber security.展开更多
Abstract: In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This glaring research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI using a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification, utilizing deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Funding: funded by Researchers Supporting Project Number (RSPD2024 R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, these models can be optimized for real-time performance, learn from large amounts of data, and are scalable to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNN as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Abstract: Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Funding: supported in part by NSFC (62102099, U22A2054, 62101594); in part by the Pearl River Talent Recruitment Program (2021QN02S643); the Guangzhou Basic Research Program (2023A04J1699); in part by the National Research Foundation, Singapore; the Infocomm Media Development Authority under its Future Communications Research Development Programme; DSO National Laboratories under the AI Singapore Programme under AISG Award No. AISG2-RP-2020-019; the Energy Research Test-Bed and Industry Partnership Funding Initiative, Energy Grid (EG) 2.0 programme; DesCartes and the Campus for Research Excellence and Technological Enterprise (CREATE) programme; MOE Tier 1 under Grant RG87/22; in part by the Singapore University of Technology and Design (SUTD) (SRG-ISTD-2021-165); in part by the SUTD-ZJU IDEA Grant SUTD-ZJU (VP) 202102; and in part by the Ministry of Education, Singapore, through its SUTD Kickstarter Initiative (SKI 20210204).
Abstract: Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles brings challenges for vehicles to independently perform avatar migration decisions depending on current and future vehicle status. To address these challenges, in this paper, we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome slow convergence resulting from the curse of dimensionality and non-stationary issues caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and effectively reduces approximately 20% of the latency of avatar task execution in UAV-assisted vehicular Metaverses.
Funding: funded by Institutional Fund Projects under Grant No. (IFPNC-001-611-2020).
Abstract: Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle) based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
Funding: supported by a Major Research Project in Higher Education Institutions in Henan Province, with Project Number 23A560015.
Abstract: A new approach for flexoelectric material shape optimization is proposed in this study. In this work, a proxy model based on an artificial neural network (ANN) is used to solve the parameter optimization and shape optimization problems. To improve the fitting ability of the neural network, we use the idea of pre-training to determine the structure of the neural network and combine different optimizers for training. The isogeometric analysis-finite element method (IGA-FEM) is used to discretize the flexoelectric theoretical formulas and obtain samples, which helps the ANN build a proxy model from the model shape to the target value. The effectiveness of the proposed method is verified through two numerical examples of parameter optimization and one numerical example of shape optimization.
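The surrogate (proxy) idea above can be sketched as follows: sample an expensive objective, fit an ANN to the samples, then optimize over the cheap proxy. The analytic objective here is a toy stand-in for the IGA-FEM evaluation, which the abstract does not specify.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical 1-D objective standing in for the expensive IGA-FEM
# evaluation of a flexoelectric design (illustrative only).
def expensive_objective(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))   # sampled design parameters
y = expensive_objective(X[:, 0])           # "simulator" outputs

# Train the ANN proxy model on the sampled data.
proxy = MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, y)

# Optimize over the cheap proxy instead of the expensive model.
grid = np.linspace(0.0, 1.0, 1001).reshape(-1, 1)
best = grid[np.argmin(proxy.predict(grid)), 0]
```

In practice the proxy would be retrained as new simulator samples arrive near the current optimum.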
Funding: supported in part by the National Natural Science Foundation of China (62222310, U1813201, 61973131, 62033008); the Research Fund for the Taishan Scholar Project of Shandong Province of China; the NSFSD (ZR2022ZD34); the Japan Society for the Promotion of Science (21K04129); and the Fujian Outstanding Youth Science Fund (2020J06022).
Abstract: In this paper, the issues of stochastic stability analysis and fault estimation are investigated for a class of continuous-time Markov jump piecewise-affine (PWA) systems against actuator and sensor faults. Firstly, a novel mode-dependent PWA iterative learning observer with current feedback is designed to estimate the system states and faults simultaneously; it contains both the previous iteration information and the current feedback mechanism. The auxiliary feedback channel optimizes the response speed of the observer, so the estimation error converges to zero rapidly. Then, sufficient conditions for stochastic stability with guaranteed performance are demonstrated for the estimation error system, and the equivalence relations between the system information and the estimated information can be established via iterative accumulating representation. Finally, two illustrative examples containing a class of tunnel diode circuit systems are presented to fully demonstrate the effectiveness and superiority of the proposed iterative learning observer with current feedback.
Abstract: Lung cancer is the most prevalent cancer diagnosis and the leading cause of cancer death worldwide. Therapeutic failure in lung adenocarcinoma (LUAD) is heavily influenced by drug resistance. This challenge stems from the diverse cell populations within the tumor, each having unique genetic, epigenetic, and phenotypic profiles. Such variations lead to varied therapeutic responses, thereby contributing to tumor relapse and disease progression. Methods: The Genomics of Drug Sensitivity in Cancer (GDSC) database was used in this investigation to obtain the mRNA expression dataset, genomic mutation profile, and drug sensitivity information of NSCLC. Machine learning (ML) methods, including Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM), were used to predict the response status of each compound based on the mRNA and mutation characteristics determined using statistical methods. The most suitable method for each drug was proposed by comparing the prediction accuracy of the different ML methods, and the selected mRNA and mutation characteristics were identified as molecular features for the drug-responsive cancer subtype. Finally, the prognostic influence of these molecular features on the mutational subtype of LUAD was assessed in publicly available datasets. Results: Our analyses yielded 1,564 gene features and 45 mutational features for 46 drugs. Applying the ML approach to predict the drug response for each medication revealed an outstanding performance for SVM in predicting Afuresertib drug response (area under the curve [AUC] 0.875) using CIT, GAS2L3, STAG3L3, ATP2B4-mut, and IL15RA-mut as molecular features. Furthermore, the ANN algorithm using 9 mRNA characteristics demonstrated the highest prediction performance (AUC 0.780) in Gefitinib with CCL23-mut. Conclusion: This work extensively investigated the mRNA and mutation signatures associated with drug response in LUAD using a machine-learning approach and proposed a priority algorithm to predict drug response for different drugs.
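The per-drug prediction step described above (features in, responder/non-responder label out, scored by AUC) can be sketched with one of the paper's model families, an SVM. The feature matrix here is synthetic; the real GDSC-derived features (e.g. CIT, GAS2L3, ATP2B4-mut) are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the mRNA-expression / mutation feature matrix:
# 300 cell lines, 45 features, binary response label.
X, y = make_classification(n_samples=300, n_features=45, n_informative=8,
                           random_state=0)

# RBF-kernel SVM with standardization, scored by cross-validated AUC,
# mirroring how each candidate model would be compared per drug.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

The same loop would be repeated with RF and ANN estimators, keeping whichever scores highest for each drug.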
Abstract: Video streaming applications have grown considerably in recent years. As a result, this has become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, accurately finding QoE by MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic of QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a layer-one Hybrid XGBStackQoE model. Individual ML models at level one are trained using the entire training data set. The level-two Hybrid XGBStackQoE model is fitted using the outputs (meta-features) of the layer-one ML models. The proposed model outperformed the conventional models, with an accuracy improvement of 4 to 5 percent over current traditional models. The proposed framework could significantly improve video QoE accuracy.
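The two-level layering described above, base models whose out-of-fold predictions become meta-features for a second-stage learner, is the standard stacking pattern. A minimal sketch with scikit-learn, where a gradient-boosted meta-learner stands in for the XGBoost stage and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level one: heterogeneous base learners trained on the training set.
base = [("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000))]

# Level two: a boosted meta-learner fitted on the base models'
# out-of-fold predictions (the meta-features).
stack = StackingClassifier(estimators=base,
                           final_estimator=GradientBoostingClassifier(random_state=0),
                           cv=5)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

`cv=5` ensures the meta-learner never sees predictions made on data the base models were fitted on, which is what keeps stacking from overfitting.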
Funding: supported by the Yunnan Key Laboratory of Opto-Electronic Information Technology, the Postgraduate Research Innovation Fund of Yunnan Normal University (YJSJJ22-B79), and the National Natural Science Foundation of China (62062069, 62062070, 62005235).
Abstract: Background: Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their interpretability limits the personalization of the training for individual participants. Methods: Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level, and the support vector machine performed the best, with an accuracy of 92.41% and an area under the curve (AUC) of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant. Results: This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical performances. The use of Shapley values enables targeted training by identifying deficiencies in individual skills. Conclusions: This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.
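The SVM-RFE half of the feature-selection step above can be sketched directly with scikit-learn's `RFE` (mRMR has no scikit-learn implementation, so it is omitted here). The three-class labels mimic the three skill groups; the tool-motion metrics are synthetic stand-ins.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the simulator's tool-usage metrics:
# 200 trials, 30 candidate metrics, 3 skill levels.
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# SVM-RFE: recursively drop the metrics with the smallest linear-SVM
# weights until 8 remain (8 is an illustrative target, not the paper's).
selector = RFE(SVC(kernel="linear"), n_features_to_select=8).fit(X, y)
X_sel = selector.transform(X)

# Train the final classifier on the selected metrics.
acc = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=5).mean()
```

For the interpretation step, the fitted model could then be passed to a Shapley-value library to attribute the prediction of each participant to individual metrics.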
Abstract: This research concentrates on modeling an efficient thyroid prediction approach, addressing a significant problem faced by the women community. The major research problem is the lack of an automated model to attain earlier prediction. Some existing models fail to give better prediction accuracy. Here, a novel clinical decision support system is framed to make the proper decision during a time of complexity. Multiple stages are followed in the proposed framework, which play a substantial role in thyroid prediction. These steps include i) data acquisition, ii) outlier prediction, and iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and other classifier models helps bridge the gap encountered in one single classifier model. Various classifiers are merged to handle the issues identified in others and to enhance the prediction rate. The proposed model provides superior outcomes and a good prediction rate. The simulation is done in the MATLAB 2020a environment and establishes a better trade-off than various existing approaches. The model gives a prediction accuracy of 97.28% compared to other models and shows a better trade-off than others.
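The core of the weight-based ensemble idea, several classifiers merged with per-model weights so that stronger learners compensate for weaker ones, can be sketched with weighted soft voting. The weights and data here are illustrative, not the learned MS-WEL weights or the thyroid records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the thyroid dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Soft voting averages the class probabilities of the base classifiers,
# weighted so that the (assumed) stronger learner counts double.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=1)),
                ("rf", RandomForestClassifier(random_state=1))],
    voting="soft",
    weights=[1, 1, 2],
)
acc = ensemble.fit(X_tr, y_tr).score(X_te, y_te)
```

A multi-stage variant would tune those weights per stage, e.g. from each base model's validation accuracy.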
Abstract: We provide a kernel-regularized method to derive theoretical solutions for the Neumann boundary value problem on the unit ball. We define the reproducing kernel Hilbert space with the spherical harmonics associated with an inner product defined on both the unit ball and the unit sphere, construct the kernel-regularized learning algorithm from the view of semi-supervised learning, and derive upper bounds for the learning rates. The theoretical analysis shows that the learning algorithm has better uniform convergence as the number of samples increases. The research can be regarded as an application of kernel-regularized semi-supervised learning.
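In the standard form of such kernel-regularized schemes, the estimator is the minimizer of an empirical risk plus a squared RKHS-norm penalty; a generic sketch is shown below (the paper's Neumann-specific inner product and semi-supervised term may differ):

```latex
f_{\mathbf{z},\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}
\;\frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^{2} \;+\; \lambda \,\lVert f\rVert_{K}^{2}
```

Here $\mathcal{H}_K$ is the RKHS induced by the kernel $K$ (built from spherical harmonics in the paper's setting), and the regularization parameter $\lambda$ governs the trade-off that the learning-rate bounds quantify.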
Funding: supported by the Scientific Research Fund Project of Yunnan Education Department (Grant Numbers 2023J1974 and 2023J1976), the Yunnan University Professional Degree Graduate Student Practical Innovation Fund Project (Grant Number ZC-22222374), and the Yunnan Provincial Education Department Fund (Grant No. 2022Y286).
Abstract: The composite exciter and the CaO to Na_(2)SO_(4) dosing ratios are known to have a strong impact on the mechanical strength of fly-ash concrete. In the present study a hybrid approach relying on experiments and a machine-learning technique has been used to tackle this problem. The tests have shown that the optimal admixture of CaO and Na_(2)SO_(4) alone is 8%. The best 3-day mechanical strength of fly-ash concrete is achieved at 8% of the compound activator; if the 28-day mechanical strength is considered, then the best performances are obtained at 4% of the compound activator. Moreover, the 3-day mechanical strength of fly-ash concrete is better when the dosing ratio of CaO to Na_(2)SO_(4) in the compound activator is 1:1; the maximum strength of fly-ash concrete at 28 days can be achieved for a 1:1 ratio of CaO to Na_(2)SO_(4) by considering a 4% compound activator. In this case, the compressive and flexural strengths are 260 MPa and 53.6 MPa, respectively; the mechanical strength of fly-ash concrete at 28 days can be improved by a 4:1 ratio of CaO to Na_(2)SO_(4) by considering 8% and 12% compound excitants. It is shown that the predictions based on the aforementioned machine-learning approach are accurate and reliable.
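A strength-prediction model of the kind implied above can be sketched as tabular regression over the mix-design variables. Everything here is hypothetical: the feature encoding, the coefficients of the toy data generator, and the choice of random forest (the abstract does not name the ML technique).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical encoding of the experiments:
# [activator dosage %, CaO:Na2SO4 ratio, curing age in days] -> strength (MPa).
# Values are illustrative, not the paper's measurements.
rng = np.random.default_rng(0)
X = np.column_stack([rng.choice([4, 8, 12], 120),        # dosage
                     rng.choice([0.25, 1.0, 4.0], 120),  # ratio
                     rng.choice([3, 28], 120)])          # age
y = 2.0 * X[:, 0] + 5.0 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, 120)

model = RandomForestRegressor(random_state=0).fit(X, y)
r2 = model.score(X, y)  # training-set fit quality
```

Once trained on the real measurements, such a model lets untested dosage/ratio combinations be screened before casting specimens.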
Funding: supported by the National Natural Science Foundation of China (61671397).
Abstract: In the shape analysis community, decomposing a 3D shape into meaningful parts has become a topic of interest. 3D model segmentation is largely used in tasks such as shape deformation, shape partial matching, skeleton extraction, shape correspondence, shape annotation, and texture mapping. Numerous approaches have attempted to provide better segmentation solutions; however, the majority of the previous techniques used handcrafted features, which are usually focused on a particular attribute of 3D objects and so are difficult to generalize. In this paper, we propose a three-stage approach for using a multi-view recurrent neural network to automatically segment a 3D shape into visually meaningful sub-meshes. The first stage involves normalizing and scaling a 3D model to fit within the unit sphere and rendering the object into different views. Contrasting viewpoints, on the other hand, might not be associated, and a 3D region could map to totally distinct outcomes depending on the viewpoint. To address this, we run each view through a (shared-weights) CNN and a Bolster block in order to create a probability boundary map. The Bolster block simulates the area relationships between different views, which helps to improve and refine the data. In stage two, the feature maps generated in the previous step are correlated using a recurrent neural network to obtain compatible fine-detail responses for each view. Finally, a fully connected layer is used to return coherent edges, which are then back-projected to the 3D objects to produce the final segmentation. Experiments on the Princeton Segmentation Benchmark dataset show that our proposed method is effective for mesh segmentation tasks.
Funding: this work was supported by a National Research Foundation of Korea grant funded by the Korean Government (MSIT), NRF-2020R1A2B5B02002478.
Abstract: Cardiovascular disease is among the top five fatal diseases that affect lives worldwide. Therefore, its early prediction and detection are crucial, allowing one to take proper and necessary measures at earlier stages. Machine learning (ML) techniques are used to assist healthcare providers in better diagnosing heart disease. This study employed three boosting algorithms, namely gradient boost, XGBoost, and AdaBoost, to predict heart disease. The dataset contained heart disease-related clinical features and was sourced from the publicly available UCI ML repository. Exploratory data analysis was performed to find the characteristics of the data samples in terms of descriptive and inferential statistics. Specifically, it was carried out to identify and replace outliers using the interquartile range and to detect and replace the missing values using the imputation method. Results were recorded before and after the data preprocessing techniques were applied. Out of all the algorithms, gradient boosting achieved the highest accuracy rate of 92.20% for the proposed model. The proposed model yielded better results with gradient boosting in terms of precision, recall, and F1-score. It attained better prediction performance than the existing works and can be used for other diseases that share common features using transfer learning.
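The preprocessing-then-boosting flow above can be sketched in a few lines: replace values outside the interquartile-range fences, then fit a gradient-boosting classifier. The data are synthetic stand-ins for the UCI features, and clipping to the fence is one common IQR replacement strategy (the paper's exact replacement rule is not stated).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 13 UCI clinical features.
X, y = make_classification(n_samples=400, n_features=13, random_state=0)
X[0, 0] = 50.0  # inject an obvious outlier

# IQR rule: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are replaced
# by clipping them to the nearest fence, per feature.
q1, q3 = np.percentile(X, [25, 75], axis=0)
iqr = q3 - q1
X = np.clip(X, q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# Gradient boosting on the cleaned features, scored by cross-validation.
acc = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5).mean()
```

Missing values would be handled the same way before this step, e.g. with `sklearn.impute.SimpleImputer`.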
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R196), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Nowadays, quantum machine learning is attracting great interest in a wide range of fields due to its potential superior performance and capabilities. The massive increase in computational capacity and speed of quantum computers can lead to a quantum leap in the healthcare field. Heart disease seriously threatens human health since it is the leading cause of death worldwide. Quantum machine learning methods can propose effective solutions to predict heart disease and aid in early diagnosis. In this study, an ensemble machine learning model based on quantum machine learning classifiers is proposed to predict the risk of heart disease. The proposed model is a bagging ensemble learning model where a quantum support vector classifier is used as the base classifier. Furthermore, in order to make the model's outcomes more explainable, the importance of every single feature in the prediction is computed and visualized using the SHapley Additive exPlanations (SHAP) framework. In the experimental study, other stand-alone quantum classifiers, namely the Quantum Support Vector Classifier (QSVC), Quantum Neural Network (QNN), and Variational Quantum Classifier (VQC), are applied and compared with classical machine learning classifiers such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results on the Cleveland dataset reveal the superiority of QSVC compared to the others, which explains its use in the proposed bagging model. The Bagging-QSVC model outperforms all the aforementioned classifiers with an accuracy of 90.16% while showing great competitiveness compared to some state-of-the-art models using the same dataset. The results of the study indicate that quantum machine learning classifiers perform better than classical machine learning classifiers in predicting heart disease. In addition, the study reveals that the bagging ensemble learning technique is effective in improving the prediction accuracy of quantum classifiers.
Abstract: Spondylolisthesis is a chronic disease, and a timely diagnosis of it may help in avoiding surgery. Disease identification in X-ray radiographs is very challenging. Strengthening the feature extraction tool in VGG16 has improved the classification rate, but the fully connected layers of VGG16 are not efficient at capturing the positional structure of an object in images. A capsule network (CapsNet) works with capsules (neuron clusters) rather than a single neuron to grasp the properties of the provided image to match the pattern. In this study, an integrated model that is a combination of VGG16 and CapsNet (S-VCNet) is proposed. In the model, VGG16 is used as a feature extractor. After feature extraction, the output is fed to CapsNet for disease identification. A private dataset is used that contains 466 X-ray radiographs, including 186 images displaying a spine with spondylolisthesis and 280 images depicting a normal spine. The suggested model is the first step towards developing a web-based radiological diagnosis tool that can be utilized in outpatient clinics where there are not enough qualified medical professionals. Experimental results demonstrate that the developed model outperformed the other models that are used for lumbar spondylolisthesis diagnosis, with 98% accuracy. After the performance check, the model was successfully deployed on the Gradio web app platform to produce the outcome in less than 20 s.
Abstract: Over the past two decades, digital microfluidic biochips have been in much demand for safety-critical and biomedical applications and have become increasingly important in point-of-care analysis, drug discovery, and immunoassays, among other areas. However, for complex bioassays, finding routes for the transportation of droplets in an electrowetting-on-dielectric digital biochip while maintaining their discreteness is a challenging task. In this study, we propose a deep reinforcement learning-based droplet routing technique for digital microfluidic biochips. The technique is implemented on a distributed architecture to optimize the possible paths for predefined source-target pairs of droplets. The actors of the technique calculate the possible routes of the source-target pairs and store the experience in a replay buffer, and the learner fetches the experiences and updates the routing paths. The proposed algorithm was applied to benchmark suites I and III as two different test benches, and it achieved significant improvements over state-of-the-art techniques.
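The actor/learner exchange described above hinges on a shared replay buffer: actors push routing transitions, the learner samples batches from it. A minimal sketch, where the droplet-routing state encoding and reward are placeholders:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience store shared by actors and a learner."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of transitions.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=1000)

# An actor exploring candidate routes pushes its transitions
# (placeholder grid-cell states and a unit step cost).
for step in range(50):
    buf.push(("cell", step), "move_right", -1.0, ("cell", step + 1))

# The learner fetches a batch of experiences to update the routing policy.
batch = buf.sample(8)
```

In the distributed setting, several actors would push concurrently while the learner samples, so a production buffer would add locking or a queue.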
Abstract: The aviation industry is one of the most competitive markets. The most common approach for airline service providers is to improve passenger satisfaction. Passenger satisfaction in the aviation industry occurs when passengers' expectations are met during flights. Airline service quality is critical in attracting new passengers and retaining existing ones. It is crucial to identify passengers' pain points and enhance their satisfaction with the services offered. The airlines used a variety of techniques to improve service quality. They used data analysis approaches to analyze the passenger point data. These solutions have focused simply on surveys; consequently, deep learning approaches have received insufficient attention. In this study, deep neural networks with the adaptive moment estimation (Adam) optimization algorithm were applied to enhance classification performance. In previous studies, the quality of the dataset has been ignored. The proposed approach was applied to the airline passenger satisfaction dataset from the Kaggle repository. It was validated by applying artificial neural networks (ANNs), random forests, and support vector machine techniques to the same dataset. It was compared with other research papers that used the same dataset and had a similar problem. The experimental results showed that the proposed approach outperformed previous studies. It achieved an accuracy of 99.3%.
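A deep network trained with Adam for this kind of tabular satisfaction classification can be sketched with scikit-learn's MLP, whose `solver="adam"` is the optimizer named above. The feature matrix is a synthetic stand-in; in practice the Kaggle CSV would be loaded and encoded, and the layer sizes are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the passenger-satisfaction table
# (service-rating features, satisfied / not-satisfied label).
X, y = make_classification(n_samples=1000, n_features=22, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Deep feed-forward network trained with the Adam optimizer;
# scaling matters because MLPs are sensitive to feature magnitude.
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                                  solver="adam", max_iter=500,
                                  random_state=0))
acc = net.fit(X_tr, y_tr).score(X_te, y_te)
```

The same train/test split could then be reused to fit the RF and SVM baselines for a like-for-like comparison.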
Funding: supported via funding from Prince Sattam bin Abdulaziz University Project Number (PSAU/2023/R/1444).
Abstract: Emailing is among the cheapest and most easily accessible platforms, and covers every idea of the present century like banking, personal login databases, academic information, invitations, marketing, advertisement, social engineering, model creation on cyber-based technologies, etc. The uncontrolled development of and easy access to the internet are the reasons for the increased insecurity in email communication. Therefore, this review paper aims to investigate deep learning approaches for detecting the threats associated with e-mail security. This study compiles the literature related to the deep learning methodologies that are applicable for providing safety in the field of email cyber security in different organizations. Relevant data were extracted from different research repositories. The paper discusses various solutions for handling these threats. Different challenges and issues are also investigated for e-mail security threats, including social engineering, malware, spam, and phishing, in the existing solutions to identify the core current problems and set the road for future studies. The review analysis showed that communication media are the common platform for attackers to conduct fraudulent activities via spoofed e-mails and fake websites, and this research has combined the merits and demerits of adopting deep learning approaches for email security threats through the use of models and technologies. The study highlighted the contrasts of deep learning approaches in detecting email security threats. This review study has set criteria to include studies that deal with at least one of the six machine models in cyber security.