Journal Articles
26 articles found
1. An Integrated Analysis of Yield Prediction Models: A Comprehensive Review of Advancements and Challenges
Authors: Nidhi Parashar, Prashant Johri, Arfat Ahmad Khan, Nitin Gaur, Seifedine Kadry. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 389-425.
The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research. Deep learning (DL) and machine learning (ML) models effectively deal with such challenges. This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024. In addition, it analyses the effectiveness of various input parameters considered in crop yield prediction models. We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield. The total number of articles reviewed for crop yield prediction using ML, meta-modeling (crop models coupled with ML/DL), and DL-based prediction models and input parameter selection is 125. We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers. Each study is assessed based on the crop type, input parameters employed for prediction, the modeling techniques adopted, and the evaluation metrics used for estimating model performance. We also discuss the ethical and social impacts of AI on agriculture. Although various approaches presented in the scientific literature have delivered impressive predictions, they are complicated by intricate, multifactorial influences on crop growth and the need for accurate data-driven models. Therefore, thorough research is required to deal with challenges in predicting agricultural output.
Keywords: Machine learning, crop yield prediction, deep learning, remote sensing, long short-term memory, time series prediction, systematic literature review
2. A Novel Method for Determining Tourism Carrying Capacity in a Decision-Making Context Using q-Rung Orthopair Fuzzy Hypersoft Environment
Authors: Salma Khan, Muhammad Gulistan, Nasreen Kausar, Seifedine Kadry, Jungeun Kim. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1951-1979.
Tourism is a popular activity that allows individuals to escape their daily routines and explore new destinations for various reasons, including leisure, pleasure, or business. A recent study has proposed a unique mathematical concept called a q-rung orthopair fuzzy hypersoft set (q-ROFHS) to enhance the formal representation of human thought processes and evaluate tourism carrying capacity. This approach can capture the imprecision and ambiguity often present in human perception. With the advanced mathematical tools in this field, the study has also incorporated the Einstein aggregation operator and score function into the q-ROFHS values to support multi-attribute decision-making algorithms. By implementing this technique, effective plans can be developed for social and economic development while avoiding detrimental effects such as overcrowding or environmental damage caused by tourism. A case study of selected tourism carrying capacity will demonstrate the proposed methodology.
Keywords: q-Rung orthopair fuzzy hypersoft set, decision-making, tourism carrying capacity, aggregation operator
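The abstract's core machinery is an Einstein aggregation operator and a score function over q-rung orthopair fuzzy values. The sketch below is only a hedged illustration of one commonly cited form of the q-rung orthopair fuzzy Einstein weighted averaging operator and the score s = mu^q - nu^q; it is not the paper's full hypersoft construction, and the evaluation values, weights, and q are hypothetical.

```python
import numpy as np

def qrofn_score(mu, nu, q=3):
    """Score of a q-rung orthopair fuzzy number (mu, nu): s = mu**q - nu**q."""
    return mu ** q - nu ** q

def qrof_einstein_weighted_avg(pairs, weights, q=3):
    """One common form of the q-rung orthopair fuzzy Einstein weighted averaging operator."""
    mu = np.array([p[0] for p in pairs], dtype=float)
    nu = np.array([p[1] for p in pairs], dtype=float)
    w = np.asarray(weights, dtype=float)
    a = np.prod((1 + mu ** q) ** w)
    b = np.prod((1 - mu ** q) ** w)
    mu_agg = ((a - b) / (a + b)) ** (1.0 / q)        # aggregated membership grade
    c = np.prod((nu ** q) ** w)
    d = np.prod((2 - nu ** q) ** w)
    nu_agg = (2 * c / (d + c)) ** (1.0 / q)          # aggregated non-membership grade
    return mu_agg, nu_agg

# toy usage: three expert evaluations of one alternative, weights summing to 1 (hypothetical)
evals = [(0.8, 0.3), (0.6, 0.5), (0.7, 0.4)]
m, n = qrof_einstein_weighted_avg(evals, [0.4, 0.35, 0.25], q=3)
print(m, n, qrofn_score(m, n, q=3))
```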
3. Novelty of Different Distance Approach for Multi-Criteria Decision-Making Challenges Using q-Rung Vague Sets
Authors: Murugan Palanikumar, Nasreen Kausar, Dragan Pamucar, Seifedine Kadry, Chomyong Kim, Yunyoung Nam. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3353-3385.
In this article, multiple attribute decision-making problems are solved using the vague normal set (VNS). It is possible to generalize the vague set (VS) and q-rung fuzzy set (FS) into the q-rung vague set (VS). A log q-rung normal vague weighted averaging (log q-rung NVWA), a log q-rung normal vague weighted geometric (log q-rung NVWG), a log generalized q-rung normal vague weighted averaging (log Gq-rung NVWA), and a log generalized q-rung normal vague weighted geometric (log Gq-rung NVWG) operator are discussed in this article. A description is provided of the scoring function, accuracy function and operational laws of the log q-rung VS. The algorithms underlying these functions are also described. A numerical example is provided to extend the Euclidean distance and the Hamming distance. Additionally, idempotency, boundedness, commutativity, and monotonicity of the log q-rung VS are examined as they facilitate recognizing the optimal alternative more quickly and help clarify conceptualization. We chose five anemia patients with four types of symptoms, including seizures, emotional shock or hysteria, brain cause, and high fever, who had either retrograde amnesia, anterograde amnesia, transient global amnesia, post-traumatic amnesia, or infantile amnesia. Natural numbers q are used to express the results of the models. To demonstrate the effectiveness and accuracy of the models we are investigating, we compare several existing models with those that have been developed.
Keywords: Vague set, aggregating operators, Euclidean distance, Hamming distance, decision making
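The abstract extends Euclidean and Hamming distances to log q-rung vague sets. As a minimal sketch of the underlying idea only, the snippet below shows the generic normalised fuzzy-set distances over membership-grade vectors; the paper's log q-rung variants are not reproduced, and the example grades are hypothetical.

```python
import numpy as np

def hamming_distance(mu_a, mu_b):
    """Normalised Hamming distance between two membership-grade vectors."""
    return np.mean(np.abs(np.asarray(mu_a) - np.asarray(mu_b)))

def euclidean_distance(mu_a, mu_b):
    """Normalised Euclidean distance between two membership-grade vectors."""
    return np.sqrt(np.mean((np.asarray(mu_a) - np.asarray(mu_b)) ** 2))

# toy usage: membership grades of two alternatives over four criteria (hypothetical)
a = np.array([0.7, 0.2, 0.5, 0.9])
b = np.array([0.6, 0.3, 0.4, 0.8])
print(hamming_distance(a, b), euclidean_distance(a, b))
```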
4. A Novel Framework for Learning and Classifying the Imbalanced Multi-Label Data
Authors: P. K. A. Chitra, S. Appavu alias Balamurugan, S. Geetha, Seifedine Kadry, Jungeun Kim, Keejun Han. Computer Systems Science & Engineering, 2024, No. 5, pp. 1367-1385.
A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning. The main objective of this work is to create a novel framework for learning and classifying imbalanced multi-label data. This work proposes a framework of two phases. The imbalanced distribution of the multi-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in phase 1. Later, an adaptive weighted l21 norm regularized (Elastic-net) multi-label logistic regression is used to predict unseen samples in phase 2. The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels, in contrast to conventional MLSMOTE. The minority labels in these samples are called difficult minority labels and are more prone to penalize classification performance. The concurrent measure is considered borderline, and labels associated with samples are regarded as borderline labels in the decision boundary. In phase 2, a novel adaptive l21 norm regularized weighted multi-label logistic regression is used to handle balanced data with different weighted synthetic samples. Experimentation on various benchmark datasets shows the outperformance of the proposed method and its powerful predictive performance over existing conventional state-of-the-art multi-label methods.
Keywords: Multi-label imbalanced data, multi-label learning, Borderline MLSMOTE, concurrent multi-label, adaptive weighted multi-label elastic net, difficult minority label
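Phase 2 of the framework is an elastic-net regularized multi-label logistic regression. A minimal sketch of that general idea using scikit-learn's one-vs-rest wrapper around an elastic-net logistic learner is shown below; the paper's adaptive weighted l21 regularizer and Borderline MLSMOTE resampling are not reproduced, and the synthetic dataset is purely illustrative.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# toy multi-label data standing in for an imbalanced benchmark dataset
X, Y = make_multilabel_classification(n_samples=2000, n_features=40,
                                      n_classes=8, n_labels=2, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# one elastic-net penalized logistic model per label (binary relevance)
clf = OneVsRestClassifier(SGDClassifier(loss="log_loss", penalty="elasticnet",
                                        alpha=1e-4, l1_ratio=0.5, max_iter=2000))
clf.fit(X_tr, Y_tr)
print("micro-F1:", f1_score(Y_te, clf.predict(X_te), average="micro"))
```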
5. A deep learning fusion model for accurate classification of brain tumours in Magnetic Resonance images
Authors: Nechirvan Asaad Zebari, Chira Nadheef Mohammed, Dilovan Asaad Zebari, Mazin Abed Mohammed, Diyar Qader Zeebaree, Haydar Abdulameer Marhoon, Karrar Hameed Abdulkareem, Seifedine Kadry, Wattana Viriyasitavat, Jan Nedoma, Radek Martinek. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 4, pp. 790-804.
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component of medical diagnosis, and it requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
Keywords: brain tumour, deep learning, feature fusion model, MRI images, multi-classification
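The fusion idea described in the abstract (deep features from two pretrained backbones concatenated and classified with softmax) can be sketched as below. This is a generic Keras illustration under assumed input size and class count, using frozen ImageNet backbones; it is not the authors' exact architecture, augmentation, or training setup.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_pre
from tensorflow.keras.applications.resnet50 import preprocess_input as res_pre

# raw RGB slices in [0, 255]; 224x224 input and 4 tumour classes are assumptions
inp = layers.Input(shape=(224, 224, 3))

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False            # use both backbones as frozen feature extractors
res.trainable = False

f_vgg = vgg(layers.Lambda(vgg_pre)(inp))     # 512-d deep features
f_res = res(layers.Lambda(res_pre)(inp))     # 2048-d deep features
fused = layers.Concatenate()([f_vgg, f_res]) # fused 2560-d feature vector

out = layers.Dense(4, activation="softmax")(fused)
model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```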
6. Task Offloading and Resource Allocation in IoT Based Mobile Edge Computing Using Deep Learning (Cited by 1)
Authors: Ilyos Abdullaev, Natalia Prodanova, K. Aruna Bhaskar, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 8, pp. 1463-1477.
Recently, computation offloading has become an effective method for overcoming the constraints of a mobile device (MD) running computation-intensive mobile applications by offloading delay-sensitive application tasks to a remote cloud-based data center. Smart cities benefit from offloading to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and many access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, which enables an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperformed the existing models in reducing client overhead in the MEC systems, with a maximum reward of 0.8967.
Keywords: Mobile edge computing, seagull optimization, deep belief network, resource management, parameter tuning
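The abstract's objective is to minimize energy consumption subject to latency limits when deciding between local execution and offloading. Below is a hedged sketch of a textbook local-versus-offload cost model of that kind; the constants, the CMOS energy coefficient kappa, and the latency/energy weighting are illustrative assumptions, not the paper's formulation.

```python
def local_cost(cycles, f_local, kappa=1e-27):
    """Local execution: time = C/f, energy = kappa * f^2 * C (a standard CMOS model)."""
    t = cycles / f_local
    e = kappa * (f_local ** 2) * cycles
    return t, e

def offload_cost(data_bits, cycles, rate_bps, f_edge, p_tx=0.5):
    """Offloading: uplink transmission plus edge execution; result download neglected."""
    t_tx = data_bits / rate_bps
    t_exec = cycles / f_edge
    e_tx = p_tx * t_tx
    return t_tx + t_exec, e_tx

def system_cost(t, e, w_t=0.5, w_e=0.5, t_max=None):
    """Weighted latency/energy cost with an optional hard latency constraint."""
    if t_max is not None and t > t_max:
        return float("inf")
    return w_t * t + w_e * e

# toy usage: a 1-Mbit task needing 1e9 cycles, 1 GHz device, 10 GHz edge, 5 Mbit/s uplink
print(system_cost(*local_cost(1e9, 1e9)), system_cost(*offload_cost(1e6, 1e9, 5e6, 1e10)))
```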
7. Design of Evolutionary Algorithm Based Unequal Clustering for Energy Aware Wireless Sensor Networks
Authors: Mohammed Altaf Ahmed, T. Satyanarayana Murthy, Fayadh Alenezi, E. Laxmi Lydia, Seifedine Kadry, Yena Kim, Yunyoung Nam. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 10, pp. 1283-1297.
Wireless Sensor Networks (WSN) play a vital role in several real-time applications ranging from military to civilian. Despite the benefits of WSN, energy efficiency remains a major challenge, which necessitates proper load balancing amongst the clusters and serving a wider monitoring region. The clustering technique for WSN has several benefits: lower delay, higher energy efficiency, and collision avoidance. But clustering protocols face several challenges. In a large-scale network, cluster-based protocols mainly adopt multi-hop routing to save energy, leading to hot spot problems. A hot spot problem arises where a cluster node nearer to the base station (BS) tends to drain its energy much quicker than other nodes because of the need to perform more transmissions. This article introduces a Jumping Spider Optimization Based Unequal Clustering Protocol for Mitigating Hotspot Problems (JSOUCP-MHP) in WSN. The JSO algorithm is inspired by the natural characteristics of spiders and mathematically models the hunting mechanism, such as search, persecution, and jumping skills to attack prey. The presented JSOUCP-MHP technique mainly resolves the hot spot issue to maximize the network lifespan. To attain this, the JSOUCP-MHP technique elects a proper set of cluster heads (CHs) using average residual energy (RE). In addition, the JSOUCP-MHP technique determines the cluster sizes based on two measures, i.e., RE and distance to BS (DBS), showing the novelty of the work. The proposed JSOUCP-MHP technique is examined under several experiments to ensure its supremacy. The comparison study shows the significance of the JSOUCP-MHP technique over other models.
Keywords: Wireless sensor networks, energy efficiency, cluster heads, unequal clustering, hot spot issue, lifetime enhancement
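The protocol sizes clusters unequally from residual energy (RE) and distance to the base station (DBS), so that CHs near the BS serve smaller clusters and relay traffic without draining first. The sketch below is an illustrative competition-radius rule combining the two factors in the spirit of classical unequal clustering; the weights and functional form are assumptions, not the paper's equations.

```python
def competition_radius(d_bs, d_min, d_max, residual_e, e_max,
                       r_max=60.0, c=0.5):
    """Illustrative unequal-clustering radius: shrink clusters near the BS
    (hot-spot mitigation) and let energy-rich CHs cover a larger area."""
    dist_term = (d_max - d_bs) / (d_max - d_min + 1e-9)   # ~1 near the BS, ~0 far away
    energy_term = residual_e / (e_max + 1e-9)             # fraction of energy left
    return r_max * (1.0 - c * dist_term) * (0.5 + 0.5 * energy_term)

# toy usage: a node 80 m from the BS in a field where node-to-BS distances span 20-400 m
print(competition_radius(d_bs=80.0, d_min=20.0, d_max=400.0,
                         residual_e=0.35, e_max=0.5))
```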
8. Artificial Intelligence-Enabled Cooperative Cluster-Based Data Collection for Unmanned Aerial Vehicles (Cited by 1)
Authors: R. Rajender, C. S. S. Anupama, G. Jose Moses, E. Laxmi Lydia, Seifedine Kadry, Sangsoon Lim. Computers, Materials & Continua (SCIE, EI), 2022, No. 11, pp. 3351-3365.
In recent times, sixth generation (6G) communication technologies have become a hot research topic because of maximum throughput and low delay services for mobile users. 6G encompasses several heterogeneous resources and communication standards to ensure incessant availability of service. At the same time, the development of 6G enables Unmanned Aerial Vehicles (UAVs) to offer cost- and time-efficient solutions to several applications like healthcare, surveillance, disaster management, etc. In UAV networks, energy efficiency and data collection are considered the major processes for high quality network communication. But these procedures are found to be challenging because of maximum mobility, unstable links, dynamic topology, and energy-restricted UAVs. These issues are solved by the use of artificial intelligence (AI) and energy-efficient clustering techniques for UAVs in the 6G environment. With this inspiration, this work designs an artificial intelligence enabled cooperative cluster-based data collection technique for unmanned aerial vehicles (AECCDC-UAV) in the 6G environment. The proposed AECCDC-UAV technique aims to divide the UAV network into different clusters and allocate a cluster head (CH) to each cluster in such a way that the energy consumption (ECM) is minimized. The presented AECCDC-UAV technique involves a quasi-oppositional shuffled shepherd optimization (QOSSO) algorithm for selecting the CHs and constructing clusters. The QOSSO algorithm derives a fitness function involving three input parameters: residual energy of UAVs, distance to neighboring UAVs, and degree of UAVs. The performance of the AECCDC-UAV technique is validated in many aspects, and the obtained experimental values demonstrate promising results over recent state-of-the-art methods.
Keywords: 6G, unmanned aerial vehicles, resource allocation, energy efficiency, artificial intelligence, clustering, data collection
9. Pattern Recognition of Modulation Signal Classification Using Deep Neural Networks
Authors: D. Venugopal, V. Mohan, S. Ramesh, S. Janupriya, Sangsoon Lim, Seifedine Kadry. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 11, pp. 545-558.
In recent times, pattern recognition of communication modulation signals has gained significant attention in several application areas such as the military, civilian fields, etc. It becomes essential to design a safe and robust feature extraction (FE) approach to efficiently identify the various signal modulation types on a complex platform. Several works have derived new techniques to extract the feature parameters, namely instant features, fractal features, and so on. In addition, machine learning (ML) and deep learning (DL) approaches can be commonly employed for modulation signal classification. In this view, this paper designs pattern recognition of communication signal modulation using fractal features with deep neural networks (CSM-FFDNN). The goal of the CSM-FFDNN model is to classify the different types of digitally modulated signals. The proposed CSM-FFDNN model involves two major processes, namely FE and classification. The proposed model uses the Sevcik Fractal Dimension (SFD) technique to extract the fractal features from the digitally modulated signals. Besides, the extracted features are fed into the DNN model for modulation signal classification. To improve the classification performance of the DNN model, a barnacles mating optimizer (BMO) is used for the hyperparameter tuning of the DNN model in such a way that the DNN performance can be raised. A wide range of simulations takes place to highlight the enhanced performance of the CSM-FFDNN model. The experimental outcomes pointed out the superior recognition rate of the CSM-FFDNN model over recent state-of-the-art methods in terms of different evaluation parameters.
Keywords: Pattern recognition, signal modulation, communication signals, deep learning, feature extraction
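The feature extractor named in the abstract is the Sevcik Fractal Dimension. A small sketch of the standard Sevcik formula (normalise the waveform to the unit square, measure its length L over N samples, then D = 1 + ln L / ln(2(N-1))) is given below; how the paper windows and scales the modulated signals is not specified here, and the test signal is purely illustrative.

```python
import numpy as np

def sevcik_fd(signal):
    """Sevcik fractal dimension of a 1-D signal (one common formulation)."""
    y = np.asarray(signal, dtype=float)
    n = y.size
    x = np.linspace(0.0, 1.0, n)                        # normalised abscissa
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)     # normalised ordinate
    length = np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))
    return 1.0 + np.log(length) / np.log(2.0 * (n - 1))

# toy usage: fractal feature of a noisy keyed carrier (illustrative signal only)
t = np.linspace(0, 1, 1024)
sig = np.sign(np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 200 * t)
sig += 0.05 * np.random.randn(t.size)
print(sevcik_fd(sig))
```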
10. CNN Based Features Extraction and Selection Using EPO Optimizer for Cotton Leaf Diseases Classification
Authors: Mehwish Zafar, Javeria Amin, Muhammad Sharif, Muhammad Almas Anjum, Seifedine Kadry, Jungeun Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 2779-2793.
Worldwide, cotton is the most profitable cash crop. Each year the production of this crop suffers because of several diseases. At an early stage, computerized methods are used for disease detection that may reduce the loss in cotton production. Although several methods have been proposed for the detection of cotton diseases, there are still limitations because of low-quality images, size, shape, variations in orientation, and complex backgrounds. Due to these factors, there is a need for novel methods of feature extraction/selection for accurate cotton disease classification. Therefore, in this research, an optimized features fusion-based model is proposed, in which two pre-trained architectures called EfficientNet-b0 and Inception-v3 are utilized to extract features; each model extracts a feature vector of length N×1000. After that, the extracted features are serially concatenated, giving a feature vector of length N×2000. The most prominent features are selected using the Emperor Penguin Optimizer (EPO) method. The method is evaluated on two publicly available datasets, the Kaggle cotton disease dataset-I and Kaggle cotton-leaf-infection-II. The EPO method returns feature vectors of length 1×755 and 1×824 using dataset-I and dataset-II, respectively. The classification is performed using 5-, 7-, and 10-fold cross-validation. The Quadratic Discriminant Analysis (QDA) classifier provides an accuracy of 98.9% on 5 folds, 98.96% on 7 folds, and 99.07% on 10 folds using the Kaggle cotton disease dataset-I, while the Ensemble Subspace K Nearest Neighbor (KNN) provides 99.16% on 5 folds, 98.99% on 7 folds, and 99.27% on 10 folds using the Kaggle cotton-leaf-infection dataset-II.
Keywords: Deep learning, cotton disease detection, feature selection, classification, EfficientNet-b0, Inception-v3, quadratic discriminant analysis, subspace KNN
11. Comparative Hydrodynamic and Hydromagnetic Analysis of a Triple-Diffusive Nanofluid under Quadratic Mixed Convection over a Convective Horizontal Plate
Authors: KHALID Abdulkhaliq M-alharbi, HINA Gul, MUHAMMAD Ramzan, SEIFEDINE Kadry, ABDULKAFI Mohammed-saeed. Journal of Central South University (SCIE, EI, CAS, CSCD), 2023, No. 8, pp. 2616-2626.
When two nanofluids with different densities and diffusion rates mix, triple diffusion and convection phenomena occur. The main objective of this paper is to compare the hydrodynamics and hydromagnetics of a triple-diffusive nanofluid under quadratic mixed convection over a convective horizontal plate. Other novel features of the model established here include a variable thermal conductivity and a nonlinear thermal-radiative heat flux. The problem consists of a system of equations that is solved with the help of the MATLAB bvp4c package. Graphical illustrations are given to visualize the influence of the parameters on the relevant profiles, and tables are used to assess how variations in the key parameters affect the associated fields and their corresponding physical quantities. Fluctuations in the fluid flow velocity can be inferred from multiple estimates of the Dufour parameters of salt 1 and salt 2. The superiority of the hydrodynamic flow over the hydromagnetic flow is verified. In addition, previously published results for specific scenarios are compared with the current findings.
Keywords: triple-diffusive nanofluid, quadratic mixed convection, variable thermal conductivity, nonlinear thermal radiation
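The abstract (translated above) states that the coupled boundary-layer equations are solved with MATLAB's bvp4c. The governing equations themselves are not reproduced in the abstract, so the sketch below only illustrates the same numerical approach in Python on the classical Blasius boundary-layer problem using scipy's solve_bvp; it is not the paper's model, and the truncation point and mesh are assumptions.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Classical Blasius boundary layer: f''' + 0.5*f*f'' = 0, f(0)=f'(0)=0, f'(inf)=1,
# with the far-field condition imposed at a truncated eta_max = 10.
def rhs(eta, y):                       # y = [f, f', f'']
    f, fp, fpp = y
    return np.vstack([fp, fpp, -0.5 * f * fpp])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 10.0, 200)
y_guess = np.zeros((3, eta.size))
y_guess[0] = eta ** 2 / (2.0 * eta[-1])   # consistent with the linear f' guess below
y_guess[1] = eta / eta[-1]
y_guess[2] = 1.0 / eta[-1]

sol = solve_bvp(rhs, bc, eta, y_guess)
print("converged:", sol.status == 0, " f''(0) ~", float(sol.sol(0.0)[2]))  # ~0.332
```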
12. Arithmetic Optimization with Ensemble Deep Transfer Learning Based Melanoma Classification
Authors: K. Kalyani, Sara A Althubiti, Mohammed Altaf Ahmed, E. Laxmi Lydia, Seifedine Kadry, Neunggyu Han, Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 149-164.
Melanoma is a skin disease with a high mortality rate, while early diagnosis of the disease can increase the survival chances of patients. It is challenging to automatically diagnose melanoma from dermoscopic skin samples. A Computer-Aided Diagnostic (CAD) tool saves time and effort in diagnosing melanoma compared to existing medical approaches. Against this background, there is a need to design an automated classification model for melanoma that can utilize the deep and rich feature datasets of an image for disease classification. The current study develops an Intelligent Arithmetic Optimization with Ensemble Deep Transfer Learning Based Melanoma Classification (IAOEDTT-MC) model. The proposed IAOEDTT-MC model focuses on the identification and classification of melanoma from dermoscopic images. To accomplish this, the IAOEDTT-MC model applies image preprocessing at the initial stage, in which the Gabor Filtering (GF) technique is utilized. In addition, the U-Net segmentation approach is employed to segment the lesion regions in dermoscopic images. Besides, an ensemble of DL models including ResNet50 and ElasticNet models is applied in this study. Moreover, the AO algorithm with the Gated Recurrent Unit (GRU) method is utilized for the identification and classification of melanoma. The proposed IAOEDTT-MC method was experimentally validated with the help of benchmark datasets, and the proposed model attained a maximum accuracy of 92.09% on the ISIC 2017 dataset.
Keywords: Skin cancer, deep learning, melanoma classification, dermoscopy, computer aided diagnosis
13. Feature Fusion Based Deep Transfer Learning Based Human Gait Classification Model
Authors: C. S. S. Anupama, Rafina Zakieva, Afanasiy Sergin, E. Laxmi Lydia, Seifedine Kadry, Chomyong Kim, Yunyoung Nam. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 1453-1468.
Gait is a biological characteristic that defines the way people walk. Walking is a fundamental activity that sustains our day-to-day life and physical condition. Surface electromyography (sEMG) is a weak bioelectric signal that portrays the functional state of the human muscles and nervous system. Gait classifiers based on sEMG signals are widely utilized in analysing muscle diseases and as a guide for recovery treatment. Several approaches have been established in the literature for gait recognition utilizing conventional and deep learning (DL) methods. This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification (EAAA-HDLGR) technique on sEMG signals. The EAAA-HDLGR technique extracts the time domain (TD) and frequency domain (FD) features from the sEMG signals and fuses them. In addition, the EAAA-HDLGR technique exploits the hybrid deep learning (HDL) model for gait recognition. At last, an EAAA-based hyperparameter optimizer is applied to the HDL model, which is mainly derived from the quasi-oppositional based learning (QOBL) concept, showing the novelty of the work. A brief classifier outcome of the EAAA-HDLGR technique is examined under diverse aspects, and the results indicate the improvement of the EAAA-HDLGR technique. The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA for gait recognition.
Keywords: Feature fusion, human gait recognition, deep learning, electromyography signals, artificial algae algorithm
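The technique fuses time-domain (TD) and frequency-domain (FD) features extracted from sEMG windows. A minimal sketch of commonly used TD/FD features (mean absolute value, RMS, waveform length, zero crossings, mean and median frequency) is shown below; the paper's exact feature set, sampling rate, and window length are assumptions here.

```python
import numpy as np

def semg_features(window, fs=1000.0):
    """Common time- and frequency-domain sEMG features for one analysis window."""
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))                         # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))                   # root mean square
    wl = np.sum(np.abs(np.diff(x)))                  # waveform length
    zc = np.sum(x[:-1] * x[1:] < 0)                  # zero crossings (sign changes)
    spectrum = np.abs(np.fft.rfft(x)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)   # mean frequency
    cum = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]              # median frequency
    return np.array([mav, rms, wl, zc, mnf, mdf])

# toy usage: one 256-ms window of synthetic sEMG-like noise
print(semg_features(np.random.randn(256), fs=1000.0))
```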
14. Sand Cat Swarm Optimization with Deep Transfer Learning for Skin Cancer Classification
Authors: C. S. S. Anupama, Saud Yonbawi, G. Jose Moses, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 2079-2095.
Skin cancer is one of the most dangerous cancers. Because of the high melanoma death rate, skin cancer is divided into non-melanoma and melanoma. Dermatologists find it difficult to identify skin cancer from dermoscopy images of skin lesions. Sometimes, pathology and biopsy examinations are required for cancer diagnosis. Earlier studies have formulated computer-based systems for detecting skin cancer from skin lesion images. With recent advancements in hardware and software technologies, deep learning (DL) has developed as a potential technique for feature learning. Therefore, this study develops a new sand cat swarm optimization with a deep transfer learning method for skin cancer detection and classification (SCSODTL-SCC) technique. The major intention of the SCSODTL-SCC model lies in the recognition and classification of different types of skin cancer in dermoscopic images. Primarily, Dull razor approach-related hair removal and median filtering-based noise elimination are performed. Moreover, the U2Net segmentation approach is employed for detecting infected lesion regions in dermoscopic images. Furthermore, the NASNetLarge-based feature extractor with a hybrid deep belief network (DBN) model is used for classification. Finally, the classification performance can be improved by the SCSO algorithm for the hyperparameter tuning process, showing the novelty of the work. The simulation values of the SCSODTL-SCC model are scrutinized on the benchmark skin lesion dataset. The comparative results assured that the SCSODTL-SCC model shows maximum skin cancer classification performance in different measures.
Keywords: Deep learning, skin cancer, dermoscopic images, sand cat swarm optimization, machine learning
15. Harris Hawks Optimizer with Graph Convolutional Network Based Weed Detection in Precision Agriculture
Authors: Saud Yonbawi, Sultan Alahmari, T. Satyanarayana Murthy, Padmakar Maddala, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 1533-1547.
Precision agriculture includes the optimum and adequate use of resources depending on several variables that govern crop yield. Precision agriculture offers a novel solution utilizing a systematic technique for current agricultural problems like balancing production and environmental concerns. Weed control has become one of the significant problems in the agricultural sector. In traditional weed control, the entire field is treated uniformly by spraying the soil, a single herbicide dose, weed, and crops in the same way. For more precise farming, robots could accomplish targeted weed treatment if they could specifically find the location of the dispensable plant and identify the weed type. This may lessen by a large margin the utilization of agrochemicals on agricultural fields and favour sustainable agriculture. This study presents a Harris Hawks Optimizer with Graph Convolutional Network based Weed Detection (HHOGCN-WD) technique for Precision Agriculture. The HHOGCN-WD technique mainly focuses on identifying and classifying weeds for precision agriculture. For image pre-processing, the HHOGCN-WD model utilizes a bilateral normal filter (BNF) for noise removal. In addition, a coupled convolutional neural network (CCNet) model is utilized to derive a set of feature vectors. To detect and classify weeds, the GCN model is utilized with the HHO algorithm as a hyperparameter optimizer to improve the detection performance. The experimental results of the HHOGCN-WD technique are investigated under the benchmark dataset. The results indicate the promising performance of the presented HHOGCN-WD model over other recent approaches, with an increased accuracy of 99.13%.
Keywords: Weed detection, precision agriculture, graph convolutional network, Harris hawks optimizer, hyperparameter tuning
16. Leveraging Multimodal Ensemble Fusion-Based Deep Learning for COVID-19 on Chest Radiographs
Authors: Mohamed Yacin Sikkandar, K. Hemalatha, M. Subashree, S. Srinivasan, Seifedine Kadry, Jungeun Kim, Keejun Han. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 10, pp. 873-889.
Recently, COVID-19 has posed a challenging threat to researchers, scientists, healthcare professionals, and administrations over the globe, from its diagnosis to its treatment. Researchers are making persistent efforts to derive probable solutions for managing the pandemic in their areas. One of the widespread and effective ways to detect COVID-19 is to utilize radiological images comprising X-rays and computed tomography (CT) scans. At the same time, the recent advances in machine learning (ML) and deep learning (DL) models show promising results in medical imaging. Particularly, the convolutional neural network (CNN) model can be applied to identifying abnormalities on chest radiographs. During the COVID-19 epidemic, much research has been conducted on processing such data with DL techniques, particularly CNN. This study develops an improved fruit fly optimization with a deep learning-enabled fusion (IFFO-DLEF) model for COVID-19 detection and classification. The major intention of the IFFO-DLEF model is to investigate the presence or absence of COVID-19. To do so, the presented IFFO-DLEF model applies image pre-processing at the initial stage. In addition, an ensemble of three DL models, namely DenseNet169, EfficientNet, and ResNet50, is used for feature extraction. Moreover, the IFFO algorithm with a multilayer perceptron (MLP) classification model is utilized to identify and classify COVID-19. The parameter optimization of the MLP approach utilizing the IFFO technique helps in accomplishing enhanced classification performance. The experimental result analysis of the IFFO-DLEF model carried out on the CXR image database portrayed the better performance of the presented IFFO-DLEF model over recent approaches.
Keywords: COVID-19, computer vision, deep learning, image classification, fusion model
17. DeepCNN: Spectro-temporal feature representation for speech emotion recognition
Authors: Nasir Saleem, Jiechao Gao, Rizwana Irfan, Ahmad Almadhor, Hafiz Tayyab Rauf, Yudong Zhang, Seifedine Kadry. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 2, pp. 401-417.
Speech emotion recognition (SER) is an important research problem in human-computer interaction systems. The representation and extraction of features are significant challenges in SER systems. Despite the promising results of recent studies, they generally do not leverage progressive fusion techniques for effective feature representation and increasing receptive fields. To mitigate this problem, this article proposes DeepCNN, which fuses spectral and temporal features of emotional speech by parallelising convolutional neural networks (CNNs) and a convolution layer-based transformer. Two parallel CNNs are applied to extract the spectral feature (2D-CNN) and temporal feature (1D-CNN) representations. A 2D-convolution layer-based transformer module extracts spectro-temporal features and concatenates them with features from the parallel CNNs. The learnt low-level concatenated features are then applied to a deep framework of convolutional blocks, which retrieves high-level feature representations and subsequently categorises the emotional states using an attention gated recurrent unit and classification layer. This fusion technique results in a deeper hierarchical feature representation at a lower computational cost while simultaneously expanding the filter depth and reducing the feature map. The Berlin Database of Emotional Speech (EMO-BD) and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets are used in experiments to recognise distinct speech emotions. With efficient spectral and temporal feature representation, the proposed SER model achieves 94.2% accuracy for different emotions on EMO-BD and 81.1% accuracy on the IEMOCAP dataset, respectively. The proposed SER system, DeepCNN, outperforms the baseline SER systems in terms of emotion recognition accuracy on the EMO-BD and IEMOCAP datasets.
Keywords: decision making, deep learning
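DeepCNN parallelises a 2D-CNN over a spectral input and a 1D-CNN over the temporal signal and concatenates their features. The Keras sketch below shows only that parallel-branch fusion skeleton under assumed input shapes and class count; the convolution-layer-based transformer module and the attention GRU head of the paper are omitted.

```python
from tensorflow.keras import layers, Model

# illustrative shapes (assumptions): 128x128 log-Mel spectrogram, 16000-sample waveform
spec_in = layers.Input(shape=(128, 128, 1), name="spectrogram")
wave_in = layers.Input(shape=(16000, 1), name="waveform")

s = layers.Conv2D(32, 3, activation="relu", padding="same")(spec_in)
s = layers.MaxPooling2D()(s)
s = layers.Conv2D(64, 3, activation="relu", padding="same")(s)
s = layers.GlobalAveragePooling2D()(s)               # spectral branch (2D-CNN)

w = layers.Conv1D(32, 9, activation="relu", padding="same")(wave_in)
w = layers.MaxPooling1D(4)(w)
w = layers.Conv1D(64, 9, activation="relu", padding="same")(w)
w = layers.GlobalAveragePooling1D()(w)                # temporal branch (1D-CNN)

fused = layers.Concatenate()([s, w])                  # spectro-temporal fusion
h = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(7, activation="softmax")(h)        # e.g., 7 emotion classes (assumption)

model = Model([spec_in, wave_in], out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```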
18. Convergence of blockchain and Internet of Things: integration, security, and use cases
Authors: Robertas DAMASEVICIUS, Sanjay MISRA, Rytis MASKELIUNAS, Anand NAYYAR. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 10, pp. 1295-1321.
Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. The paper examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them. It provides insights into the various security issues and vulnerabilities in IoT and explores how blockchain can be used to enhance security and privacy. The paper also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms. The study also discusses the challenges and future research directions in the field of B-IoT.
Keywords: Blockchain, Internet of Things (IoT), blockchain-based IoT (B-IoT), security, scalability, privacy
19. An Improved Sparrow Search Algorithm for Node Localization in WSN (Cited by 1)
Authors: R. Thenmozhi, Abdul Wahid Nasir, Vijaya Krishna Sonthi, T. Avudaiappan, Seifedine Kadry, Kuntha Pin, Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2022, No. 4, pp. 2037-2051.
Wireless sensor networks (WSN) comprise a set of numerous cheap sensors placed in the target region. A primary function of the WSN is to avail the location details of event occurrences or nodes. A major challenge in WSN is node localization, which plays an important role in data gathering applications. Since GPS is expensive and inaccurate in indoor regions, effective node localization techniques are needed. The major intention of localization is to determine the place of a node in a short period with minimum computation. To achieve this, bio-inspired algorithms are used, and node localization is treated as an optimization problem in a multidimensional space. This paper introduces a new Sparrow Search Algorithm with Doppler Effect (SSA-DE) for node localization in wireless networks. The SSA is generally stimulated by the group wisdom, foraging, and anti-predation behaviors of sparrows. Besides, the Doppler Effect is incorporated into the SSA to further improve the node localization performance. In addition, the SSA-DE model defines the position of a node in an iterative manner using the Euclidean distance as the fitness function. The presented SSA-DE model is implemented in MATLAB R2014. An extensive set of experiments is carried out, and the results are examined under a varying number of anchor nodes and ranging errors. The attained experimental outcomes ensured the superior efficiency of the SSA-DE technique over the existing techniques.
Keywords: Localization, wireless networks
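The SSA-DE model scores candidate node positions iteratively with a Euclidean-distance fitness. A minimal sketch of such a ranging-error fitness against anchor nodes is given below; the anchor layout and noise level are hypothetical, and the sparrow search and Doppler components themselves are not shown.

```python
import numpy as np

def localization_fitness(candidate_xy, anchors_xy, measured_d):
    """Mean squared ranging error between a candidate position and anchor measurements."""
    est_d = np.linalg.norm(anchors_xy - candidate_xy, axis=1)
    return np.mean((est_d - measured_d) ** 2)

# toy usage: three anchors and a noisy range measurement to each (hypothetical values)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
true_pos = np.array([20.0, 30.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.5, 3)
print(localization_fitness(np.array([25.0, 25.0]), anchors, ranges))
```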
20. Evolutionary Algorithm Based Task Scheduling in IoT Enabled Cloud Environment
Authors: R. Joshua Samuel Raj, M. Varalatchoumy, V. L. Helen Josephine, A. Jegatheesan, Seifedine Kadry, Maytham N. Meqdad, Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2022, No. 4, pp. 1095-1109.
The Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. In addition, the application areas relating to IoT-enabled models are resource-limited and necessitate crisp responses, low latencies, and high bandwidth, which are beyond their abilities. Cloud computing (CC) is treated as a resource-rich solution to the above-mentioned challenges. But the intrinsic high latency of CC makes it nonviable. The longer latency degrades the outcome of IoT-based smart systems. CC is an emergent, dispersed, inexpensive computing pattern with a massive assembly of heterogeneous autonomous systems. The effective use of task scheduling minimizes the energy utilization of the cloud infrastructure and raises the income of service providers by minimizing the processing time of user jobs. With this motivation, this paper presents an intelligent Chaotic Artificial Immune Optimization Algorithm for Task Scheduling (CAIOA-RS) in an IoT-enabled cloud environment. The proposed CAIOA-RS algorithm solves the issue of resource allocation in the IoT-enabled cloud environment. It also satisfies the makespan by carrying out the optimum task scheduling process with distinct strategies for incoming tasks. The design of the CAIOA-RS technique incorporates the concept of chaotic maps into the conventional AIOA to enhance its performance. A series of experiments were carried out on the CloudSim platform. The simulation results demonstrate that the CAIOA-RS technique outperforms the original version, as well as other heuristics and metaheuristics.
Keywords: Internet of Things, cloud computing, task scheduling, metaheuristics, resource allocation
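The CAIOA-RS design injects chaotic maps into the optimizer and evaluates schedules by makespan. The sketch below shows those two ingredients in isolation, a logistic chaotic sequence and a makespan evaluation for a task-to-VM mapping, under hypothetical task lengths and VM speeds; it is not the full CAIOA-RS algorithm.

```python
import numpy as np

def logistic_map(n, x0=0.7, r=4.0):
    """Logistic chaotic sequence, a map commonly used to chaotify metaheuristic search."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def makespan(assignment, task_len, vm_speed):
    """Makespan of mapping tasks onto VMs: the finish time of the busiest VM."""
    finish = np.zeros(len(vm_speed))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_speed[vm]
    return finish.max()

# toy usage: map 8 tasks onto 3 VMs with a chaotic-sequence-based initial assignment
task_len = np.array([400, 250, 900, 120, 640, 300, 500, 220])   # million instructions (hypothetical)
vm_speed = np.array([500, 1000, 750])                           # MIPS (hypothetical)
init = (logistic_map(len(task_len)) * len(vm_speed)).astype(int)
print(init, makespan(init, task_len, vm_speed))
```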