Software Defined Network (SDN) and Network Function Virtualization (NFV) technologies offer several benefits to network operators, including reduced maintenance costs, increased network operational performance, simplified network lifecycle, and policy management. Network vulnerabilities allow attackers to modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management of network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and responds promptly and adaptively, implementing and handling security functions in order to enhance the quality of experience for end users. An anomaly detector investigates the identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing the proposed approach, an intrusion dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector builds on Machine Learning (ML), making use of supervised learning techniques such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier has shown better outcomes (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment.
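The supervised setup described above can be sketched as follows. This is a minimal illustration only: the synthetic "flow" features and labels stand in for the real intrusion dataset (Smurf, Neptune, etc.), and the feature names are assumptions rather than the authors' setup.

```python
# Hedged sketch: binary anomaly detection with a Random Forest, mirroring the
# supervised RF setup the abstract reports. Data below is synthetic, not the
# paper's intrusion dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Three toy flow features (illustrative): duration, bytes sent, connection rate.
X_normal = rng.normal(loc=[1.0, 10.0, 2.0], scale=1.0, size=(n, 3))
X_attack = rng.normal(loc=[5.0, 50.0, 20.0], scale=1.0, size=(n, 3))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = anomaly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.3f}")
```

Random forests are a common first choice for tabular intrusion features because they need little preprocessing and handle mixed feature scales, which is consistent with the strong RF result the abstract reports.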
Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI) are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized, LASSI-BC) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation), both the standard and computerized versions distinguished between aMCI and CU participants. The performance of the CU and aMCI groups using either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of 0.927, a sensitivity of 92.3%, and a specificity of 88.1%, relative to an area under the ROC curve of 0.815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways superior, to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
Analyzing big data, especially medical data, helps provide good health care to patients and confront the risks of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by changes to their parameters, so hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to effectively search the hyperparameter space and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that can provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective: when compared to baseline models, the optimized machine learning models performed better and produced better results.
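A particle swarm search over a model hyperparameter, in the spirit of the PSO tuning described above, can be sketched like this. The single tuned parameter (tree depth), swarm size, inertia/acceleration coefficients, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: PSO maximizing cross-validated accuracy over one
# hyperparameter (decision-tree depth). Illustrative settings throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def fitness(depth):
    clf = DecisionTreeClassifier(max_depth=int(round(depth)), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(1)
pos = rng.uniform(1, 20, size=10)      # particle positions = candidate depths
vel = np.zeros(10)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]
gbest_f = pbest_f.max()

for _ in range(15):
    r1, r2 = rng.random(10), rng.random(10)
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1, 20)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if f.max() > gbest_f:
        gbest, gbest_f = pos[f.argmax()], f.max()

print(f"best depth ~ {int(round(gbest))}, CV accuracy = {gbest_f:.3f}")
```

The same loop extends to several hyperparameters by making each particle a vector, which is how PSO is typically applied to tune the models the abstract lists.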
Limbal Stem Cell Deficiency (LSCD) is an eye disease that can cause corneal opacity and vascularization. In its advanced stage it can lead to a degree of visual impairment. It involves a change of the semispherical shape of the cornea to a drooping shape in the downward direction. LSCD is hard to diagnose at early stages. The color and texture of the cornea surface can provide significant information about a cornea affected by LSCD, and parameters such as shape and texture are crucial to differentiate a normal cornea from an LSCD cornea. Although several medical approaches exist, most of them require complicated procedures and medical devices. Therefore, in this paper, we pursued the development of an LSCD detection technique (LDT) utilizing image processing methods. Early diagnosis of LSCD is crucial for physicians to arrange effective treatment. In the proposed technique, we developed a method for LSCD detection utilizing frontal eye images. A dataset of 280 frontal and lateral eye images of LSCD and normal patients was used in this research. First, the cornea region of both frontal and lateral images is segmented, and the geometric features are extracted through the automated active contour model and the spline curve, while the texture features are extracted using the feature selection algorithm. The experimental results show that the combined geometric and texture features yield an accuracy of 95.95%, a sensitivity of 97.91%, and a specificity of 94.05% with the random forest classifier of n=40. As a result, this research developed a limbal stem cell deficiency detection system utilizing feature fusion and image processing techniques for frontal and lateral digital images of the eyes.
Recently, Internet of Things (IoT) devices have produced massive quantities of data from distinct sources that get transmitted over public networks. Cybersecurity becomes a challenging issue in the IoT environment, where the existence of cyber threats needs to be resolved. The development of automated tools for cyber threat detection and classification using machine learning (ML) and artificial intelligence (AI) becomes essential to accomplish security in the IoT environment, and security issues related to IoT gadgets need to be minimized effectively. Therefore, this article introduces a new Mayfly Optimization (MFO) with Regularized Extreme Learning Machine (RELM) model, named MFO-RELM, for cybersecurity threat detection and classification in the IoT environment. The presented MFO-RELM technique accomplishes the effectual identification of cybersecurity threats that exist in the IoT environment. To accomplish this, the MFO-RELM model pre-processes the actual IoT data into a meaningful format. In addition, the RELM model receives the pre-processed data and carries out the classification process. In order to boost the performance of the RELM model, the MFO algorithm is applied to it. The performance of the MFO-RELM model is validated using standard datasets, and the results highlight the better outcomes of the MFO-RELM model under distinct aspects.
Automatic deception recognition has received considerable attention from the machine learning community due to recent research on its vast application to social media, interviews, law enforcement, and the military. Video analysis-based techniques for automated deception detection have received increasing interest. This study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies a MobileNet-based feature extractor to produce a useful set of features. The long short-term memory (LSTM) model is exploited for deception detection and classification. In the final stage, the SAPFF technique is applied to optimally alter the hyperparameter values of the LSTM model, showing the novelty of the work. The experimental validation of the SAPFF-DLADD model is tested using the Miami University Deception Detection Database (MU3D), a database comprising two classes, namely truth and deception. An extensive comparative analysis reported a better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%.
This survey paper aims to show methods to analyze and classify field satellite images using deep learning and machine learning algorithms. Using deep learning-based Convolutional Neural Network (CNN) technology to harvest fields from satellite images or to generate Regions of Interest (ROI) is among the planned application scenarios. Using machine learning, the satellite image is placed on the input image, segmented, and then tagged. In contemporary categorization, field size ratio, Local Binary Pattern (LBP) histograms, and color data are taken into account. Field satellite image localization has several practical applications, including pest management, scene analysis, and field tracking. The relationship between satellite images in a specific area, or contextual information, is essential to comprehending the field as a whole.
Face mask detection has several applications, including real-time surveillance, biometrics, etc. Identifying face masks is also helpful for crowd control and ensuring people wear them publicly. With monitoring personnel alone, it is impossible to ensure that people wear face masks; automated systems are a much superior option for face mask detection and monitoring. This paper introduces a simple and efficient approach for masked face detection. The architecture of the proposed approach is very straightforward; it combines deep learning and local binary patterns to extract features and classify them as masked or unmasked. The proposed system requires hardware with minimal power consumption compared to state-of-the-art deep learning algorithms. The proposed system comprises two steps: first, this work extracts the local features of an image using a local binary pattern descriptor, and then deep learning is used to extract global features. The proposed approach has achieved excellent accuracy and high performance. The performance of the proposed method was tested on three benchmark datasets: the real-world masked faces dataset (RMFD), the simulated masked faces dataset (SMFD), and labeled faces in the wild (LFW). Performance metrics for the proposed technique were measured in terms of accuracy, precision, recall, and F1-score. Results indicated the efficiency of the proposed technique, providing accuracies of 99.86%, 99.98%, and 100% for RMFD, SMFD, and LFW, respectively. Moreover, the proposed method outperformed state-of-the-art deep learning methods in the recent bibliography for the same problem under study and on the same evaluation datasets.
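The local-feature step (an 8-neighbour LBP code and its histogram) can be sketched as below. The synthetic image and the plain, non-uniform LBP variant are assumptions for illustration; the paper pairs such local features with deep global features before classification.

```python
# Hedged sketch: basic 8-neighbour Local Binary Pattern codes and a
# normalized histogram descriptor. Synthetic image; illustrative only.
import numpy as np

def lbp_image(img):
    """Basic LBP: each interior pixel gets an 8-bit code, one bit per
    neighbour, set when the neighbour is >= the center pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = np.zeros_like(c, dtype=int)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        code |= ((nb >= c).astype(int) << bit)
    return code

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
codes = lbp_image(img)
# 256-bin normalized histogram serves as the local texture descriptor.
hist, _ = np.histogram(codes, bins=np.arange(257), density=True)
print(codes.shape, hist.shape)
```

In practice such a histogram would be concatenated with deep features (the abstract's global features) before the masked/unmasked classifier.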
Denial of Service (DoS/DDoS) intrusions are damaging cyberattacks, and their identification is of great interest to the Intrusion Detection System (IDS). Existing IDS are mainly based on Machine Learning (ML) methods, including Deep Neural Networks (DNN), but these are rarely hybridized with other techniques. The intrusion data used are generally imbalanced and contain multiple features. Thus, the proposed approach aims to use a DNN-based method to detect DoS/DDoS attacks using the CICIDS2017, CSE-CICIDS2018 and CICDDoS2019 datasets, according to the following key points. a) Three imbalanced CICIDS2017-2018-2019 datasets, including Benign and DoS/DDoS attack classes, are used. b) A new technique based on K-means is developed to obtain semi-balanced datasets. c) As a feature selection method, the LDA (Linear Discriminant Analysis) performance measure is chosen. d) Four metaheuristic algorithms, namely Artificial Immune System (AIS), Firefly Algorithm (FA), Invasive Weeds Optimization (IWO) and Cuckoo Search (CS), are used, for the first time together, to increase the performance of the suggested DNN-based DoS attack detection. The experimental results, based on semi-balanced training and test datasets, indicated that AIS, FA, IWO and CS-based DNNs can achieve promising results, even when cross-validated. AIS-DNN yields a tested accuracy of 99.97%, 99.98% and 99.99% for the three considered datasets, respectively, outperforming performance established in several related works.
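One plausible reading of the K-means semi-balancing step (point b) is to cluster the majority class and keep one representative per cluster, shrinking it toward the minority size. The cluster count, the use of centroids as representatives, and the synthetic data below are assumptions, not the authors' exact procedure.

```python
# Hedged sketch: K-means-based undersampling to obtain a semi-balanced set.
# Majority class (e.g. Benign flows) is compressed to k cluster centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_major = rng.normal(size=(900, 4))        # stand-in for Benign (majority)
X_minor = rng.normal(size=(100, 4)) + 3.0  # stand-in for DoS/DDoS (minority)

k = len(X_minor)  # target majority size = minority size (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_major)
X_major_reduced = km.cluster_centers_      # one representative per cluster

X_bal = np.vstack([X_major_reduced, X_minor])
y_bal = np.array([0] * k + [1] * len(X_minor))
print(X_bal.shape, np.bincount(y_bal))
```

Compared with random undersampling, centroid representatives preserve the majority class's overall structure, which matters when the downstream DNN must still recognize benign traffic.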
Machine Learning (ML)-based prediction and classification systems employ data and learning algorithms to forecast target values. However, improving predictive accuracy is a crucial step for informed decision-making. In the healthcare domain, data are available in the form of genetic profiles and clinical characteristics to build prediction models for complex tasks like cancer detection or diagnosis. Among ML algorithms, Artificial Neural Networks (ANNs) are considered the most suitable framework for many classification tasks. The network weights and the activation functions are the two crucial elements in the learning process of an ANN; these weights affect the prediction ability and the convergence efficiency of the network. In traditional settings, ANNs assign random weights to the inputs. This research aims to develop a learning system for reliable cancer prediction by initializing more realistic weights computed in a supervised setting instead of random weights. The proposed learning system uses hybrid and traditional machine learning techniques such as Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Random Forest (RF), k-Nearest Neighbour (kNN), and ANN to achieve better accuracy in colon and breast cancer classification. This system computes the confusion matrix-based metrics for the traditional and proposed frameworks. The proposed framework attains the highest accuracy of 89.24 percent on the colon cancer dataset and 72.20 percent on the breast cancer dataset, outperforming the other models. The results show that the proposed learning system has higher predictive accuracy than conventional classifiers for each dataset, overcoming previous research limitations. Moreover, the proposed framework is useful for predicting and classifying cancer patients accurately, which will facilitate the effective management of cancer patients.
The Combined Economic and Emission Dispatch (CEED) task forms multi-objective optimization problems to be resolved to minimize emission and fuel costs. The disadvantage of the conventional method is its incapability to avoid falling into local optima, particularly when handling nonlinear and complex systems. Metaheuristics have recently received considerable attention due to their enhanced capacity to avoid local optima when addressing optimization problems as a black box. Therefore, this paper focuses on the design of an improved sand cat optimization algorithm based CEED (ISCOA-CEED) technique. The ISCOA-CEED technique majorly concentrates on reducing fuel costs and the emissions of generation units. Moreover, the presented ISCOA-CEED technique transforms the equality constraints of the CEED issue into inequality constraints. Besides, the improved sand cat optimization algorithm (ISCOA) is derived from the integration of the traditional SCOA with the Levy Flight (LF) concept. At last, the ISCOA-CEED technique is applied to solve CEED problems with 6 and 11 generators. The experimental validation of the ISCOA-CEED technique confirmed its enhanced performance over other recent approaches.
The deployment of sensor nodes is an important aspect of mobile wireless sensor networks for increasing network performance. The longevity of the networks is mostly determined by the proportion of energy consumed and the sensor nodes' access network. The optimal or ideal positioning of sensors improves the effectiveness of portable sensor networks, and coverage and energy usage are mostly determined by successful sensor placement strategies. Nature-inspired algorithms are the most effective solution to short sensor lifetime. The primary objective of this work is to conduct a comparative analysis of nature-inspired optimization for maximum network coverage in wireless sensor networks (WSNs). Moreover, it identifies the quantity of sensor nodes to install for the given area, and the superior algorithm is identified based on the optimized energy value. The first half of the paper discusses the literature on nature-inspired algorithms. Then six metaheuristic algorithms (Grey Wolf, Ant Lion, Dragonfly, Whale, Moth Flame, and Sine Cosine optimizers) are compared for optimal coverage of WSNs. The simulation outcomes confirm that the Whale Optimization Algorithm (WOA) gives optimized energy with improved network coverage using the least number of nodes. This comparison will be helpful for researchers who will use WSNs in their applications.
Next-generation networks, including the Internet of Things (IoT), fifth-generation cellular systems (5G), and sixth-generation cellular systems (6G), suffer from the dramatic increase in the number of deployed devices. This puts high constraints and challenges on the design of such networks. Structural change of the network is one such challenge that affects network performance, including the required quality of service (QoS). The fractal dimension (FD) is considered one of the main indicators used to represent the structure of a communication network. To this end, this work analyzes the FD of the network and its use for telecommunication network investigation and planning. The cluster growing method for assessing the FD is introduced and analyzed. The article proposes a novel method for estimating the FD of a communication network, based on assessing the network's connectivity by searching for the shortest routes. Unlike the cluster growing method, the proposed method does not require multiple iterations, which reduces the number of calculations and increases the stability of the obtained results. Thus, the proposed method requires less computational cost than the cluster growing method and achieves higher stability. The method is quite simple to implement and can be used in the tasks of research and planning of modern and promising communication networks. The developed method is evaluated for two different network structures and compared with the cluster growing method. The results validate the developed method.
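A common way to relate shortest-route (hop) distances to a fractal dimension is to count the nodes within hop radius r of a source and fit log N(r) against log r, whose slope estimates the FD. The grid topology, source choice, and fitting range below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: FD estimate from shortest-path (BFS hop) distances on a
# planar grid, where the expected dimension is close to 2. Illustrative only.
from collections import deque
import numpy as np

W = 41
src = (20, 20)                      # central node, so no boundary effects
dist = {src: 0}
q = deque([src])
while q:                            # BFS gives hop distances to every node
    x, y = q.popleft()
    for a, b in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= a < W and 0 <= b < W and (a, b) not in dist:
            dist[(a, b)] = dist[(x, y)] + 1
            q.append((a, b))

# N(r): nodes (excluding the source) within hop radius r; fit log-log slope.
radii = np.arange(2, 15)
counts = np.array([sum(1 for d in dist.values() if 0 < d <= r) for r in radii])
D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"estimated fractal dimension ~ {D:.2f}")
```

A single BFS per source is enough here, which echoes the abstract's point that a shortest-route approach avoids the repeated iterations of cluster growing.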
As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked or sensor data could be used to cause accidents. As typical intrusion detection system (IDS) studies are frequently designed to work well on fixed databases, it is unknown whether they will work well in changing network environments. Machine learning (ML) techniques have been shown to have a higher capacity for helping mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique first designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of the BSA. A widespread experiment is performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
Recent security applications in mobile technologies and computer systems use face recognition for high-end security. Despite numerous security techniques, face recognition is considered a high-security control. Developers fuse and carry out face identification as an access authority in these applications. Still, face identification authentication is sensitive to attacks with a 2-D photo image or captured video used to access the system as an authorized user. In the existing spoofing detection algorithms, there was some loss in the recreation of images. This research proposes an unobtrusive technique to detect face spoofing attacks that applies a single frame of the sequenced set of frames to overcome the above-said problems. This research offers a novel Edge-Net autoencoder to select convoluted and dominant features of the input diffused structure. First, the proposed method is tested with the CASIA Face Anti-Spoofing Database (CASIA-FASD). This database has three models of attacks: distorted photographs in printed form, photographs with the eye portions removed, and video attacks. The images are taken with cameras of three different qualities: low, average, and high-quality real and spoofed images. An extensive experimental study performed with the CASIA-FASD and 3DMAD datasets showed higher results when compared to existing algorithms.
Traditional security systems are exposed to many different attacks, which represents a major challenge for the spread of the Internet in the future. Innovative techniques have been suggested for detecting attacks using machine learning and deep learning. The significant advantage of deep learning is that it is highly efficient, but it needs a long training time with a lot of data. Therefore, in this paper, we present a new feature reduction strategy based on Distributed Cumulative Histograms (DCH) to distinguish between dataset features and locate the most effective ones. Cumulative histograms assess the dataset instance patterns of the applied features to identify the most effective attributes that can significantly impact the classification results. Three different models for detecting attacks using a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) are also proposed. The accuracy of attack detection using the hybrid model was 98.96% on the UNSW-NB15 dataset. The proposed model is compared with wrapper-based and filter-based Feature Selection (FS) models; it reduced classification time and increased detection accuracy.
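One way to read the cumulative-histogram idea is as a per-feature score comparing the empirical cumulative distributions of the two classes: features whose class-conditional CDFs separate the most are the most discriminative. This is an illustrative interpretation (a KS-style distance on synthetic data), not the authors' exact DCH formulation.

```python
# Hedged sketch: rank features by the maximum gap between per-class
# empirical CDFs (a Kolmogorov-Smirnov-style distance). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Feature 0 separates the classes; feature 1 is pure noise.
X = np.column_stack([
    np.concatenate([rng.normal(0, 1, n), rng.normal(2, 1, n)]),
    rng.normal(0, 1, 2 * n),
])
y = np.array([0] * n + [1] * n)

def cdf_distance(feature, labels):
    """Max vertical gap between the two class-conditional empirical CDFs."""
    grid = np.sort(feature)
    f0 = np.sort(feature[labels == 0])
    f1 = np.sort(feature[labels == 1])
    c0 = np.searchsorted(f0, grid, side="right") / f0.size
    c1 = np.searchsorted(f1, grid, side="right") / f1.size
    return float(np.max(np.abs(c0 - c1)))

scores = [cdf_distance(X[:, j], y) for j in range(X.shape[1])]
print([f"{s:.2f}" for s in scores])
```

Ranking features by such a score and keeping the top ones is a cheap filter step, consistent with the abstract's goal of cutting the training cost of the CNN/LSTM detectors.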
Biometric security is a growing trend, as it supports the authentication of persons using confidential biometric data. Most of the transmitted data in multimedia systems are susceptible to attacks, which affect the security of these systems. Biometric systems provide sufficient protection and privacy for users. The recently-introduced cancellable biometric recognition systems have not been investigated in the presence of different types of attacks. In addition, they have not been studied on different and large biometric datasets. Another point that deserves consideration is the hardware implementation of cancellable biometric recognition systems. This paper presents a suggested hybrid cancellable biometric recognition system based on a 3D chaotic cryptosystem. The rationale behind the utilization of the 3D chaotic cryptosystem is to guarantee strong encryption of biometric templates, and hence enhance the security and privacy of users. The suggested cryptosystem adds significant permutation and diffusion to the encrypted biometric templates. We introduce some attack analysis in this paper to prove the robustness of the proposed cryptosystem against attacks. In addition, a Field Programmable Gate Array (FPGA) implementation of the proposed system is introduced. The results obtained with the proposed cryptosystem are compared with those of traditional encryption schemes, such as Double Random Phase Encoding (DRPE), to reveal its superiority, and hence the high recognition performance of the proposed cancellable biometric recognition system. The obtained results prove that the proposed cryptosystem enhances the security and leads to better efficiency of the cancellable biometric recognition system in the presence of different types of attacks.
One of the significant health issues affecting women, impacting their fertility and resulting in serious health concerns, is Polycystic Ovarian Syndrome (PCOS). Consequently, timely screening of polycystic ovarian syndrome can help in the process of recovery. Finding a method to aid doctors in this procedure is crucial due to the difficulties in detecting this condition. This research aimed to determine whether it is possible to optimize the detection of PCOS utilizing deep learning algorithms and methodologies. Additionally, feature selection methods that produce the most important subset of features can speed up calculation and enhance the effectiveness of classifiers. In this research, a tri-stage wrapper method is used because it reduces the computation time. The proposed study for the automatic diagnosis of PCOS comprises preprocessing, data normalization, feature selection, and classification. A dataset with 39 characteristics, including metabolism, neuroimaging, hormone, and biochemical information for 541 subjects, was employed in this scenario. To start, this research pre-processed the information. Next, for feature selection, a tri-stage wrapper method using Mutual Information, ReliefF, Chi-Square, and Xvariance is applied. Then, various classification methods are trained and tested. Deep learning techniques including the convolutional neural network (CNN), multi-layer perceptron (MLP), recurrent neural network (RNN), and bidirectional long short-term memory (Bi-LSTM) are utilized for categorization. The experimental findings demonstrate that, with an effective feature extraction process, the tri-stage wrapper method + CNN delivers the highest precision (97%), high accuracy (98.67%), and recall (89%) when compared with other machine learning algorithms.
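Two of the filter criteria named above (mutual information and chi-square) can be combined into a simple feature-ranking step as sketched below. The synthetic data, the top-5 cutoff, and the intersection rule are illustrative assumptions; the paper's full tri-stage wrapper also uses ReliefF and Xvariance, which are omitted here.

```python
# Hedged sketch: rank features by mutual information and chi-square, then
# keep the features both filters agree on. Illustrative settings throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)
X01 = MinMaxScaler().fit_transform(X)   # chi2 requires non-negative input

mi = mutual_info_classif(X01, y, random_state=0)
chi, _ = chi2(X01, y)

top_mi = set(np.argsort(mi)[-5:])       # top 5 by mutual information
top_chi = set(np.argsort(chi)[-5:])     # top 5 by chi-square statistic
selected = sorted(top_mi & top_chi)     # consensus of the two filters
print("selected feature indices:", selected)
```

In a wrapper setting, a classifier's cross-validated score on the candidate subset would then decide which consensus features survive to the final stage.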
Applied linguistics is one of the fields in the linguistics domain and deals with the practical applications of language studies such as speech processing, language teaching, translation, and speech therapy. The ever-growing Online Social Networks (OSNs) face a vital issue to confront, i.e., hate speech. Amongst the OSN-oriented security problems, the usage of offensive language is the most important threat prevalently found across the Internet. Based on the group targeted, offensive language varies in terms of adult content, hate speech, racism, cyberbullying, abuse, trolling, and profanity. Amongst these, hate speech is the most intimidating form of offensive language, in which the targeted groups or individuals are intimidated with the intent of creating harm, social chaos, or violence. Machine Learning (ML) techniques have recently been applied to recognize hate speech-related content. The current research article introduces a Grasshopper Optimization with an Attentive Recurrent Network for Offensive Speech Detection (GOARN-OSD) model for social media. The GOARN-OSD technique integrates the concepts of DL and metaheuristic algorithms for detecting hate speech. In the presented GOARN-OSD technique, the primary stage involves the data pre-processing and word embedding processes. Then, this study utilizes the Attentive Recurrent Network (ARN) model for hate speech recognition and classification. At last, the Grasshopper Optimization Algorithm (GOA) is exploited as a hyperparameter optimizer to boost the performance of the hate speech recognition process. To depict the promising performance of the proposed GOARN-OSD method, a widespread experimental analysis was conducted. The comparison study outcomes demonstrate the superior performance of the proposed GOARN-OSD model over other state-of-the-art approaches.
Abstract: Rainfall plays a significant role in managing the water level in a reservoir. The unpredictable amount of rainfall due to climate change can cause either overflow or drying of the reservoir. Many individuals, especially those in the agricultural sector, rely on rain forecasts. Forecasting rainfall is challenging because of the changing nature of the weather. The area of Jimma in southwest Oromia, Ethiopia is the subject of this research, which aims to develop a rainfall forecasting model. To estimate Jimma's daily rainfall, we propose a novel approach based on optimizing the parameters of long short-term memory (LSTM) using the Al-Biruni earth radius (BER) optimization algorithm to boost the forecasting accuracy. Nash-Sutcliffe model efficiency (NSE), mean square error (MSE), root MSE (RMSE), mean absolute error (MAE), and R2 were all used in the conducted experiments to assess the proposed approach, with final scores of 0.61, 430.81, 19.12, and 11.09 for the first four metrics, respectively. Moreover, we compared the proposed model to current machine-learning regression models, such as non-optimized LSTM, bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and convolutional LSTM (ConvLSTM). It was found that the proposed approach achieved the lowest RMSE of 19.12. In addition, the experimental results show that the proposed model has an R2 value outperforming the other models, which confirms the superiority of the proposed approach. On the other hand, a statistical analysis was performed to measure the significance and stability of the proposed approach, and the recorded results proved the expected performance.
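The forecasting metrics named above are straightforward to compute; a minimal plain-Python sketch (with made-up illustrative values, not the paper's data):

```python
import math

def regression_metrics(obs, sim):
    """Compute NSE, MSE, RMSE, and MAE for observed vs. simulated series."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    mse = sq_err / n
    rmse = math.sqrt(mse)
    mae = sum(abs(o - s) for o, s in zip(obs, sim)) / n
    # Nash-Sutcliffe efficiency: 1 - (residual variance / variance around the mean)
    nse = 1 - sq_err / sum((o - mean_obs) ** 2 for o in obs)
    return nse, mse, rmse, mae

# Toy daily-rainfall series (mm), purely illustrative
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [9.0, 13.0, 8.5, 14.0, 11.5]
nse, mse, rmse, mae = regression_metrics(obs, sim)
```

An NSE of 1 indicates a perfect match, while values at or below 0 mean the model is no better than predicting the observed mean.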
Funding: This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number (DSR2022-RG-0102).
Abstract: Software Defined Network (SDN) and Network Function Virtualization (NFV) technology offer several benefits to network operators, including reduced maintenance costs, increased network operational performance, and simplified network lifecycle and policy management. Network vulnerabilities can be exploited to modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management of network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and responds promptly and adaptively to implement and handle security functions, in order to enhance the quality of experience for end users. An anomaly detector investigates the identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing of the proposed approach, an intrusion dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Probe), Denial of Service (DoS), User to Root (U2R), and Remote to Local (R2L) attacks. The anomaly detector is built with the capabilities of Machine Learning (ML), making use of supervised learning techniques such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms in a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier has shown better outcomes (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment.
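The classifier comparison above reduces to confusion-matrix arithmetic; a minimal sketch (toy binary labels, not the intrusion dataset) of how accuracy, precision, recall, and F1 are derived:

```python
def classification_report(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# 1 = attack (e.g., DoS), 0 = normal traffic -- toy labels for illustration
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_report(y_true, y_pred)
```

In practice, a library such as scikit-learn would supply both the RF classifier and these metrics; the sketch only shows the arithmetic behind the reported 99.90% accuracy figure.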
Abstract: Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI) are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized; LASSI-BC) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation), both the standard and computerized versions distinguished between aMCI and CU participants. The performance of the CU and aMCI groups on either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of .927, a sensitivity of 92.3% and a specificity of 88.1%, relative to an area under the ROC curve of .815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways superior, to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
Abstract: Analyzing big data, especially medical data, helps to provide good health care to patients and to confront the risk of death. The COVID-19 pandemic has had a significant impact on public health worldwide, emphasizing the need for effective risk prediction models. Machine learning (ML) techniques have shown promise in analyzing complex data patterns and predicting disease outcomes. The accuracy of these techniques is greatly affected by their parameter settings, so hyperparameter optimization plays a crucial role in improving model performance. In this work, the Particle Swarm Optimization (PSO) algorithm was used to search the hyperparameter space effectively and improve the predictive power of the machine learning models by identifying the optimal hyperparameters that provide the highest accuracy. A dataset with a variety of clinical and epidemiological characteristics linked to COVID-19 cases was used in this study. Various machine learning models, including Random Forests, Decision Trees, Support Vector Machines, and Neural Networks, were utilized to capture the complex relationships present in the data. To evaluate the predictive performance of the models, the accuracy metric was employed. The experimental findings showed that the suggested method of estimating COVID-19 risk is effective: when compared to baseline models, the optimized machine learning models performed better and produced better results.
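As an illustration of the hyperparameter search described above, here is a minimal particle swarm optimizer in plain Python. It minimizes a toy sphere function as a stand-in for the real objective (model error as a function of hyperparameters); all parameter values are generic PSO defaults, not the paper's settings:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-constrained space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a hyperparameter loss surface; minimum is at the origin.
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=3, bounds=(-5.0, 5.0))
```

For real hyperparameter tuning, `objective` would train a model with the candidate hyperparameters and return its validation error.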
Funding: Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
Abstract: Limbal Stem Cell Deficiency (LSCD) is an eye disease that can cause corneal opacity and vascularization, and in its advanced stage it can lead to a degree of visual impairment. It involves a change of the cornea's semispherical shape to a downward-drooping shape. LSCD is hard to diagnose at early stages. The color and texture of the cornea surface can provide significant information about a cornea affected by LSCD, and parameters such as shape and texture are crucial to differentiate a normal cornea from an LSCD cornea. Although several medical approaches exist, most of them require complicated procedures and medical devices. Therefore, in this paper, we pursued the development of an LSCD detection technique (LDT) utilizing image processing methods. Early diagnosis of LSCD is crucial for physicians to arrange effective treatment. In the proposed technique, we developed a method for LSCD detection utilizing frontal eye images. A dataset of 280 frontal and lateral eye images of LSCD and normal patients was used in this research. First, the cornea region of both frontal and lateral images is segmented, and the geometric features are extracted through the automated active contour model and the spline curve, while the texture features are extracted using the feature selection algorithm. The experimental results exhibited that the combined geometric and texture features achieve an accuracy of 95.95%, sensitivity of 97.91%, and specificity of 94.05% with the random forest classifier (n = 40). As a result, this research developed a limbal stem cell deficiency detection system utilizing feature fusion and image processing techniques for frontal and lateral digital images of the eyes.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/142/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR06).
Abstract: Recently, Internet of Things (IoT) devices have produced massive quantities of data from distinct sources that are transmitted over public networks. Cybersecurity becomes a challenging issue in the IoT environment, where the existence of cyber threats needs to be resolved. The development of automated tools for cyber threat detection and classification using machine learning (ML) and artificial intelligence (AI) becomes essential to accomplish security in the IoT environment, and security issues related to IoT gadgets need to be minimized effectively. Therefore, this article introduces a new Mayfly Optimization (MFO) with Regularized Extreme Learning Machine (RELM) model, named MFO-RELM, for cybersecurity threat detection and classification in the IoT environment. The presented MFO-RELM technique accomplishes the effectual identification of cybersecurity threats that exist in the IoT environment. For accomplishing this, the MFO-RELM model pre-processes the actual IoT data into a meaningful format. In addition, the RELM model receives the pre-processed data and carries out the classification process. In order to boost the performance of the RELM model, the MFO algorithm is applied to it. The performance validation of the MFO-RELM model was tested using standard datasets, and the results highlighted the better outcomes of the MFO-RELM model under distinct aspects.
Abstract: Automatic deception recognition has received considerable attention from the machine learning community due to recent research on its vast application to social media, interviews, law enforcement, and the military. Video analysis-based techniques for automated deception detection have received increasing interest. This study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies the MobileNet-based feature extractor to produce a useful set of features. The long short-term memory (LSTM) model is exploited for deception detection and classification. In the final stage, the SAPFF technique is applied to optimally alter the hyperparameter values of the LSTM model, showing the novelty of the work. The experimental validation of the SAPFF-DLADD model is tested using the Miami University Deception Detection Database (MU3D), a database comprised of two classes, namely, truth and deception. An extensive comparative analysis reported a better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%.
Abstract: This survey paper aims to show methods to analyze and classify field satellite images using deep learning and machine learning algorithms. Using deep learning-based Convolutional Neural Network (CNN) technology to harvest fields from satellite images or to generate Regions of Interest (ROI) was among the planned application scenarios. Using machine learning, the satellite image is placed on the input image, segmented, and then tagged. In contemporary categorization, field size ratio, Local Binary Pattern (LBP) histograms, and color data are taken into account. Field satellite image localization has several practical applications, including pest management, scene analysis, and field tracking. The relationship between satellite images in a specific area, or contextual information, is essential to comprehending the field as a whole.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R442), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Face mask detection has several applications, including real-time surveillance, biometrics, etc. Identifying face masks is also helpful for crowd control and for ensuring people wear them publicly. With monitoring personnel alone, it is impossible to ensure that people wear face masks; automated systems are a much superior option for face mask detection and monitoring. This paper introduces a simple and efficient approach for masked face detection. The architecture of the proposed approach is very straightforward; it combines deep learning and local binary patterns to extract features and classify them as masked or unmasked. The proposed system requires hardware with minimal power consumption compared to state-of-the-art deep learning algorithms. Our proposed system comprises two steps: first, this work extracts the local features of an image by using a local binary pattern descriptor, and then deep learning is used to extract global features. The proposed approach has achieved excellent accuracy and high performance. The performance of the proposed method was tested on three benchmark datasets: the real-world masked faces dataset (RMFD), the simulated masked faces dataset (SMFD), and labeled faces in the wild (LFW). Performance metrics for the proposed technique were measured in terms of accuracy, precision, recall, and F1-score. Results indicated the efficiency of the proposed technique, providing accuracies of 99.86%, 99.98%, and 100% for RMFD, SMFD, and LFW, respectively. Moreover, the proposed method outperformed state-of-the-art deep learning methods in the recent bibliography for the same problem under study and on the same evaluation datasets.
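The local-feature step above relies on the standard local binary pattern operator; a minimal sketch of the 3x3 LBP code for a single pixel (toy intensity values, not the paper's images):

```python
def lbp_code(image, r, c):
    """3x3 local binary pattern code for pixel (r, c), clockwise from top-left."""
    center = image[r][c]
    # Neighbor offsets in clockwise order; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

# Toy 3x3 grayscale patch; the LBP code of the center pixel is a byte in 0..255.
img = [
    [10, 20, 30],
    [40, 25, 15],
    [ 5, 50, 60],
]
code = lbp_code(img, 1, 1)
```

A histogram of these per-pixel codes over an image region forms the local feature vector that is then concatenated with the deep-learned global features.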
Abstract: Denial of Service (DoS/DDoS) intrusions are damaging cyberattacks, and their identification is of great interest to the Intrusion Detection System (IDS). Existing IDS are mainly based on Machine Learning (ML) methods, including Deep Neural Networks (DNN), but these are rarely hybridized with other techniques. The intrusion data used are generally imbalanced and contain multiple features. Thus, the proposed approach aims to use a DNN-based method to detect DoS/DDoS attacks using the CICIDS2017, CSE-CICIDS2018 and CICDDoS2019 datasets, according to the following key points. a) Three imbalanced CICIDS2017-2018-2019 datasets, including Benign and DoS/DDoS attack classes, are used. b) A new technique based on K-means is developed to obtain semi-balanced datasets. c) As a feature selection method, the LDA (Linear Discriminant Analysis) performance measure is chosen. d) Four metaheuristic algorithms, namely Artificial Immune System (AIS), Firefly Algorithm (FA), Invasive Weeds Optimization (IWO) and Cuckoo Search (CS), are used, for the first time together, to increase the performance of the suggested DNN-based DoS attack detection. The experimental results, based on semi-balanced training and test datasets, indicated that the AIS, FA, IWO and CS-based DNNs can achieve promising results, even when cross-validated. AIS-DNN yields a tested accuracy of 99.97%, 99.98% and 99.99% for the three considered datasets, respectively, outperforming performance established in several related works.
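Point b) above balances classes by clustering; a minimal sketch, assuming the idea is to replace the majority class with k-means centroids so it shrinks toward the minority-class size (plain Lloyd's algorithm on toy 2-D data; the paper's exact semi-balancing procedure may differ):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 2-D points; returns the k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        for i, cl in enumerate(clusters):  # move centroids to cluster means
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

# Majority (benign) class of 200 toy samples reduced to 40 representatives,
# matching a hypothetical 40-sample attack class -- a semi-balanced set.
rng = random.Random(1)
majority = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
representatives = kmeans(majority, k=40)
```

The centroids preserve the spatial spread of the majority class while cutting its count, which is the usual motivation for cluster-based undersampling.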
Abstract: Machine Learning (ML)-based prediction and classification systems employ data and learning algorithms to forecast target values. However, improving predictive accuracy is a crucial step for informed decision-making. In the healthcare domain, data are available in the form of genetic profiles and clinical characteristics to build prediction models for complex tasks like cancer detection or diagnosis. Among ML algorithms, Artificial Neural Networks (ANNs) are considered the most suitable framework for many classification tasks. The network weights and the activation functions are the two crucial elements in the learning process of an ANN; these weights affect the prediction ability and the convergence efficiency of the network. In traditional settings, ANNs assign random weights to the inputs. This research aims to develop a learning system for reliable cancer prediction by initializing more realistic weights computed in a supervised setting instead of random weights. The proposed learning system uses hybrid and traditional machine learning techniques such as Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Random Forest (RF), k-Nearest Neighbour (kNN), and ANN to achieve better accuracy in colon and breast cancer classification. This system computes the confusion-matrix-based metrics for the traditional and proposed frameworks. The proposed framework attains the highest accuracy of 89.24 percent on the colon cancer dataset and 72.20 percent on the breast cancer dataset, outperforming the other models. The results show that the proposed learning system has higher predictive accuracy than conventional classifiers for each dataset, overcoming previous research limitations. Moreover, the proposed framework can be used to predict and classify cancer patients accurately, which will facilitate the effective management of cancer patients.
Funding: Supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444). The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR65.
Abstract: The Combined Economic and Emission Dispatch (CEED) task forms multi-objective optimization problems to be resolved to minimize emission and fuel costs. The disadvantage of the conventional method is its incapability to avoid falling into local optima, particularly when handling nonlinear and complex systems. Metaheuristics have recently received considerable attention due to their enhanced capacity to avoid local optimal solutions when addressing optimization problems as a black box. Therefore, this paper focuses on the design of an improved sand cat optimization algorithm based CEED (ISCOA-CEED) technique. The ISCOA-CEED technique majorly concentrates on reducing the fuel costs and the emissions of generation units. Moreover, the presented ISCOA-CEED technique transforms the equality constraints of the CEED issue into inequality constraints. Besides, the improved sand cat optimization algorithm (ISCOA) is derived from the integration of the traditional SCOA with the Levy Flight (LF) concept. At last, the ISCOA-CEED technique is applied to solve CEED test systems of 6 and 11 generators. The experimental validation ensured the enhanced performance of the presented ISCOA-CEED technique over other recent approaches.
Abstract: The deployment of sensor nodes is an important aspect of mobile wireless sensor networks for increasing network performance. The longevity of the networks is mostly determined by the proportion of energy consumed and the sensor nodes' access network. The optimal or ideal positioning of sensors improves the effectiveness of portable sensor networks, and coverage and energy usage are mostly determined by successful sensor placement strategies. Nature-inspired algorithms are the most effective solution to short sensor lifetimes. The primary objective of this work is to conduct a comparative analysis of nature-inspired optimization for maximum network coverage of wireless sensor networks (WSNs). Moreover, it identifies the number of sensor nodes to install for a given area, and the superiority of each algorithm is judged on the value of the optimized energy. The first half of the paper discusses the literature on nature-inspired algorithms; later, six metaheuristic algorithms (Grey Wolf, Ant Lion, Dragonfly, Whale, Moth Flame, and Sine Cosine optimizers) are compared for optimal coverage of WSNs. The simulation outcomes confirm that the Whale Optimization Algorithm (WOA) gives optimized energy with improved network coverage using the least number of nodes. This comparison will be helpful for researchers who will use WSNs in their applications.
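The coverage objective these metaheuristics optimize can be evaluated on a grid; a minimal sketch with hypothetical sensor positions and sensing radius (not the paper's simulation settings):

```python
import math

def coverage_fraction(sensors, radius, width, height, step=1.0):
    """Fraction of grid points in a width x height field covered by >= 1 sensor disk."""
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sensors):
                covered += 1
            x += step
        y += step
    return covered / total

# Four hypothetical sensors on a 10x10 field with sensing radius 3.
sensors = [(2.5, 2.5), (7.5, 2.5), (2.5, 7.5), (7.5, 7.5)]
frac = coverage_fraction(sensors, radius=3.0, width=10.0, height=10.0)
```

A placement optimizer (WOA, Grey Wolf, etc.) would treat the sensor coordinates as decision variables and maximize this fraction, possibly penalized by an energy term.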
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R66), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Next-generation networks, including the Internet of Things (IoT), fifth-generation cellular systems (5G), and sixth-generation cellular systems (6G), suffer from the dramatic increase in the number of deployed devices. This puts high constraints and challenges on the design of such networks. Structural change of the network is one such challenge that affects network performance, including the required quality of service (QoS). The fractal dimension (FD) is considered one of the main indicators used to represent the structure of a communication network. To this end, this work analyzes the FD of the network and its use for telecommunication network investigation and planning. The cluster growing method for assessing the FD is introduced and analyzed. The article proposes a novel method for estimating the FD of a communication network, based on assessing the network's connectivity by searching for the shortest routes. Unlike the cluster growing method, the proposed method does not require multiple iterations, which reduces the number of calculations and increases the stability of the results obtained. Thus, the proposed method requires less computational cost than the cluster growing method and achieves higher stability. The method is quite simple to implement and can be used in the tasks of research and planning of modern and promising communication networks. The developed method is evaluated for two different network structures and compared with the cluster growing method. Results validate the developed method.
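The mass-radius idea behind shortest-route FD estimation can be sketched as follows: count the nodes reachable within hop radius r via BFS and estimate the dimension from the growth rate of that count. This is a simplified illustration on a grid graph, not the paper's exact estimator:

```python
from collections import deque
import math

def hop_distances(adj, src):
    """BFS shortest hop distances from src in an adjacency-list graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def fractal_dimension(adj, src, r1, r2):
    """Estimate FD from the growth of N(r) = number of nodes within hop radius r."""
    dist = hop_distances(adj, src)
    n1 = sum(1 for d in dist.values() if d <= r1)
    n2 = sum(1 for d in dist.values() if d <= r2)
    return math.log(n2 / n1) / math.log(r2 / r1)

# A 41x41 grid graph: N(r) grows roughly like r^2, so the estimate is near 2.
n = 41
adj = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
       for i in range(n) for j in range(n)}
fd = fractal_dimension(adj, src=(20, 20), r1=5, r2=15)
```

A single BFS per source suffices, which mirrors the paper's argument that a shortest-route approach avoids the repeated iterations of the cluster growing method.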
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Groups Program, Grant No. (RGP-1443-0048).
Abstract: As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked or sensor data could be used to cause accidents. Since typical intrusion detection system (IDS) studies are frequently designed to work well on fixed databases, it is unknown whether they will work well in changing network environments. Machine learning (ML) techniques have been shown to have a higher capacity for helping to mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique primarily designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of the BSA. A widespread experiment is performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
Abstract: Recent security applications in mobile technologies and computer systems use face recognition for high-end security. Despite numerous security techniques, face recognition is considered a high-security control, and developers fuse and deploy face identification as an access authority in these applications. Still, face identification authentication is sensitive to attacks in which a 2-D photo image or captured video is used to access the system as an authorized user. In the existing spoofing detection algorithms, there was some loss in the recreation of images. This research proposes an unobtrusive technique to detect face spoofing attacks that applies a single frame of the sequenced set of frames to overcome the above-said problems. This research offers a novel Edge-Net autoencoder to select convoluted and dominant features of the input diffused structure. The proposed method is first tested with the CASIA Face Anti-Spoofing Database (CASIA-FASD). This database has three models of attacks: distorted photographs in printed form, photographs with the eye portions removed, and video attacks. The images are taken with cameras of three different qualities, giving low, average, and high-quality real and spoofed images. An extensive experimental study performed with the CASIA-FASD and 3DMAD datasets showed higher results when compared to existing algorithms.
Abstract: Traditional security systems are exposed to many various attacks, which represents a major challenge for the spread of the Internet in the future. Innovative techniques have been suggested for detecting attacks using machine learning and deep learning. The significant advantage of deep learning is that it is highly efficient, but it needs a long training time with a lot of data. Therefore, in this paper, we present a new feature reduction strategy based on Distributed Cumulative Histograms (DCH) to distinguish between dataset features and locate the most effective ones. Cumulative histograms assess the dataset instance patterns of the applied features to identify the most effective attributes that can significantly impact the classification results. Three different models for detecting attacks using Convolutional Neural Networks (CNN) and Long Short-Term Memory Networks (LSTM) are also proposed. The accuracy of attack detection using the hybrid model was 98.96% on the UNSW-NB15 dataset. The proposed model is compared with wrapper-based and filter-based Feature Selection (FS) models; it reduced classification time and increased detection accuracy.
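A cumulative-histogram feature score can be sketched as the largest gap between the per-class cumulative histograms of a feature, a KS-style simplification of the DCH idea with toy values (the paper's exact scoring may differ):

```python
def cumulative_histogram(values, edges):
    """Cumulative fraction of values falling at or below each bin edge."""
    n = len(values)
    return [sum(1 for v in values if v <= e) / n for e in edges]

def feature_score(class_a, class_b, edges):
    """Max gap between the two per-class cumulative histograms (KS-style)."""
    ca = cumulative_histogram(class_a, edges)
    cb = cumulative_histogram(class_b, edges)
    return max(abs(x - y) for x, y in zip(ca, cb))

edges = [i / 10 for i in range(11)]  # bin edges over the normalized range [0, 1]
# Feature 1: classes well separated; feature 2: classes overlap heavily.
f1_benign, f1_attack = [0.1, 0.2, 0.15, 0.25], [0.8, 0.9, 0.85, 0.7]
f2_benign, f2_attack = [0.4, 0.5, 0.6, 0.45], [0.5, 0.45, 0.55, 0.6]
s1 = feature_score(f1_benign, f1_attack, edges)
s2 = feature_score(f2_benign, f2_attack, edges)
```

A feature whose per-class cumulative histograms diverge strongly (like feature 1) discriminates between benign and attack traffic and is kept; one with near-identical histograms (feature 2) is a candidate for removal.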
Abstract: Biometric security is a growing trend, as it supports the authentication of persons using confidential biometric data. Most of the transmitted data in multimedia systems are susceptible to attacks, which affect the security of these systems. Biometric systems provide sufficient protection and privacy for users. The recently-introduced cancellable biometric recognition systems have not been investigated in the presence of different types of attacks. In addition, they have not been studied on different and large biometric datasets. Another point that deserves consideration is the hardware implementation of cancellable biometric recognition systems. This paper presents a suggested hybrid cancellable biometric recognition system based on a 3D chaotic cryptosystem. The rationale behind the utilization of the 3D chaotic cryptosystem is to guarantee strong encryption of biometric templates, and hence enhance the security and privacy of users. The suggested cryptosystem adds significant permutation and diffusion to the encrypted biometric templates. We introduce some sort of attack analysis in this paper to prove the robustness of the proposed cryptosystem against attacks. In addition, a Field Programmable Gate Array (FPGA) implementation of the proposed system is introduced. The obtained results with the proposed cryptosystem are compared with those of the traditional encryption schemes, such as Double Random Phase Encoding (DRPE), to reveal superiority, and hence high recognition performance of the proposed cancellable biometric recognition system. The obtained results prove that the proposed cryptosystem enhances the security and leads to better efficiency of the cancellable biometric recognition system in the presence of different types of attacks.
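The permutation stage of a chaos-driven cryptosystem can be sketched with a 1-D logistic-map toy (the paper's 3D cryptosystem adds diffusion as well); the key point is that the chaotic keystream defines a byte reordering that the receiver, knowing the same key (x0, r), can invert exactly:

```python
def logistic_keystream(x0, r, n):
    """Chaotic logistic-map sequence x -> r*x*(1-x), used to derive a permutation."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def chaotic_order(n, x0, r):
    """Permutation given by the sort order of the chaotic sequence."""
    ks = logistic_keystream(x0, r, n)
    return sorted(range(n), key=lambda i: ks[i])

def encrypt(data, x0=0.3141, r=3.99):
    order = chaotic_order(len(data), x0, r)
    return bytes(data[i] for i in order)

def decrypt(cipher, x0=0.3141, r=3.99):
    order = chaotic_order(len(cipher), x0, r)
    out = bytearray(len(cipher))
    for pos, i in enumerate(order):  # undo the reordering
        out[i] = cipher[pos]
    return bytes(out)

template = b"biometric feature template"
cipher = encrypt(template)
plain = decrypt(cipher)
```

Permutation alone scrambles positions but not byte values, which is why the paper pairs it with a diffusion stage; the x0 and r values here are arbitrary illustrative key material.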
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number WE-44-0033.
Abstract: One of the significant health issues affecting women, impacting their fertility and resulting in serious health concerns, is Polycystic Ovarian Syndrome (PCOS). Consequently, timely screening of polycystic ovarian syndrome can help in the process of recovery. Finding a method to aid doctors in this procedure is crucial due to the difficulties in detecting this condition. This research aimed to determine whether it is possible to optimize the detection of PCOS utilizing deep learning algorithms and methodologies. Additionally, feature selection methods that produce the most important subset of features can speed up calculation and enhance the effectiveness of classifiers. In this research, a tri-stage wrapper method is used because it reduces the computation time. The proposed study for the automatic diagnosis of PCOS comprises pre-processing, data normalization, feature selection, and classification. A dataset with 39 characteristics, including metabolism, neuroimaging, hormones, and biochemical information for 541 subjects, was employed in this scenario. First, this research pre-processed the information. Next, for feature selection, a tri-stage wrapper method using Mutual Information, ReliefF, Chi-Square, and Xvariance is applied. Then, various classification methods are trained and tested. Deep learning techniques including Convolutional Neural Network (CNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), and Bidirectional Long Short-Term Memory (Bi-LSTM) are utilized for categorization. The experimental findings demonstrate that, with an effective feature extraction process, the tri-stage wrapper method with CNN delivers the highest precision (97%), high accuracy (98.67%), and recall (89%) when compared with other machine learning algorithms.
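The Mutual Information stage of the wrapper can be sketched directly from its definition; a plain-Python version for discrete features (toy labels, not the PCOS data):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete sequences of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n  # joint probability of the pair (x, y)
        mi += pj * math.log2(pj / ((px[x] / n) * (py[y] / n)))
    return mi

labels      = [0, 0, 1, 1, 0, 1, 0, 1]
informative = [0, 0, 1, 1, 0, 1, 0, 1]   # copies the label: maximal MI
weak        = [0, 1, 0, 1, 0, 1, 0, 1]   # only loosely related pattern
mi_good = mutual_information(labels, informative)
mi_weak = mutual_information(labels, weak)
```

Features are then ranked by their MI with the diagnosis label, and low-scoring ones are dropped before the remaining wrapper stages (ReliefF, Chi-Square, Xvariance) refine the subset.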
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University, Grant Code: (22UQU4331004DSR031); Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).