The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research. Deep learning (DL) and machine learning (ML) models effectively deal with such challenges. This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024. In addition, it analyses the effectiveness of various input parameters considered in crop yield prediction models. We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield. The total number of articles reviewed for crop yield prediction using ML, meta-modeling (crop models coupled with ML/DL), and DL-based prediction models and input parameter selection is 125. We conducted the research by setting up five objectives and discussing them after analyzing the selected research papers. Each study is assessed based on the crop type, the input parameters employed for prediction, the modeling techniques adopted, and the evaluation metrics used for estimating model performance. We also discuss the ethical and social impacts of AI on agriculture. Although the various approaches presented in the scientific literature have delivered impressive predictions, they are complicated by the intricate, multifactorial influences on crop growth and the need for accurate data-driven models. Therefore, thorough research is required to deal with the challenges in predicting agricultural output.
Tourism is a popular activity that allows individuals to escape their daily routines and explore new destinations for various reasons, including leisure, pleasure, or business. A recent study has proposed a unique mathematical concept called a q-rung orthopair fuzzy hypersoft set (q-ROFHS) to enhance the formal representation of human thought processes and evaluate tourism carrying capacity. This approach can capture the imprecision and ambiguity often present in human perception. With the advanced mathematical tools in this field, the study has also incorporated the Einstein aggregation operator and score function into the q-ROFHS values to support multiattribute decision-making algorithms. By implementing this technique, effective plans can be developed for social and economic development while avoiding detrimental effects such as overcrowding or environmental damage caused by tourism. A case study of selected tourism carrying capacity will demonstrate the proposed methodology.
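As background for the aggregation step mentioned above, the following is a minimal sketch of the score function and the Einstein weighted averaging operator in their standard q-rung orthopair fuzzy form; the hypersoft-set extension used in the study may differ in detail, so these formulas are illustrative assumptions rather than the paper's exact definitions. For a q-rung orthopair fuzzy value $\alpha=(\mu,\nu)$ with $0\le\mu^{q}+\nu^{q}\le 1$ and weights $w_i$ summing to one:

$$S(\alpha)=\mu^{q}-\nu^{q},$$

$$q\text{-ROFEWA}(\alpha_1,\dots,\alpha_n)=\left(\left(\frac{\prod_{i=1}^{n}(1+\mu_i^{q})^{w_i}-\prod_{i=1}^{n}(1-\mu_i^{q})^{w_i}}{\prod_{i=1}^{n}(1+\mu_i^{q})^{w_i}+\prod_{i=1}^{n}(1-\mu_i^{q})^{w_i}}\right)^{1/q},\;\frac{2^{1/q}\prod_{i=1}^{n}\nu_i^{w_i}}{\left(\prod_{i=1}^{n}(2-\nu_i^{q})^{w_i}+\prod_{i=1}^{n}(\nu_i^{q})^{w_i}\right)^{1/q}}\right).$$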
In this article, multiple attribute decision-making problems are solved using the vague normal set (VNS). It is possible to generalize the vague set (VS) and q-rung fuzzy set (FS) into the q-rung vague set (VS). A log q-rung normal vague weighted averaging (log q-rung NVWA), a log q-rung normal vague weighted geometric (log q-rung NVWG), a log generalized q-rung normal vague weighted averaging (log Gq-rung NVWA), and a log generalized q-rung normal vague weighted geometric (log Gq-rung NVWG) operator are discussed in this article. A description is provided of the scoring function, accuracy function, and operational laws of the log q-rung VS. The algorithms underlying these functions are also described. A numerical example is provided to extend the Euclidean distance and the Hamming distance. Additionally, idempotency, boundedness, commutativity, and monotonicity of the log q-rung VS are examined, as they facilitate recognizing the optimal alternative more quickly and help clarify conceptualization. We chose five anemia patients with four types of symptoms, including seizures, emotional shock or hysteria, brain cause, and high fever, who had either retrograde amnesia, anterograde amnesia, transient global amnesia, post-traumatic amnesia, or infantile amnesia. Natural numbers q are used to express the results of the models. To demonstrate the effectiveness and accuracy of the models we are investigating, we compare several existing models with those that have been developed.
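For orientation only, the normalized Hamming and Euclidean distances below are the standard forms used for q-rung fuzzy values over a universe $\{x_1,\dots,x_n\}$; the paper extends them to the log q-rung normal vague setting, whose exact expressions are not given in the abstract, so these should be read as generic reference formulas rather than the authors' definitions.

$$d_H(A,B)=\frac{1}{2n}\sum_{i=1}^{n}\Big(\big|\mu_A^{q}(x_i)-\mu_B^{q}(x_i)\big|+\big|\nu_A^{q}(x_i)-\nu_B^{q}(x_i)\big|\Big),$$

$$d_E(A,B)=\sqrt{\frac{1}{2n}\sum_{i=1}^{n}\Big(\big(\mu_A^{q}(x_i)-\mu_B^{q}(x_i)\big)^{2}+\big(\nu_A^{q}(x_i)-\nu_B^{q}(x_i)\big)^{2}\Big)}.$$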
A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning. The main objective of this work is to create a novel framework for learning and classifying imbalanced multi-label data. This work proposes a framework of two phases. The imbalanced distribution of the multi-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in phase 1. Later, an adaptive weighted l21-norm regularized (Elastic-net) multi-label logistic regression is used to predict unseen samples in phase 2. The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels, in contrast to conventional MLSMOTE. The minority labels in these samples are called difficult minority labels and are more prone to penalize classification performance. The concurrent measure is considered borderline, and labels associated with such samples are regarded as borderline labels in the decision boundary. In phase 2, a novel adaptive l21-norm regularized weighted multi-label logistic regression is used to handle the balanced data with different weighted synthetic samples. Experimentation on various benchmark datasets shows the outperformance of the proposed method and its powerful predictive performance over existing conventional state-of-the-art multi-label methods.
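To make the resampling idea concrete, the sketch below shows the SMOTE-style interpolation that MLSMOTE-family methods build on; the borderline selection of difficult minority labels and the label-assembly rule are specific to the proposed method and are only approximated here (the reference sample's labels are copied), so treat this as a simplified assumption rather than the authors' algorithm.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_style_oversample(X, y, minority_idx, k=5, n_new=100, seed=None):
    """Generate synthetic samples by interpolating between a minority sample and
    one of its k nearest minority neighbours (MLSMOTE-style sketch).
    X: (n_samples, n_features); y: (n_samples, n_labels) binary label matrix;
    minority_idx: indices of samples carrying the targeted minority label(s)."""
    rng = np.random.default_rng(seed)
    minority_idx = np.asarray(minority_idx)
    X_min = X[minority_idx]
    if len(X_min) < 2:
        raise ValueError("need at least two minority samples to interpolate")
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(X_min))).fit(X_min)
    _, neigh = nn.kneighbors(X_min)
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neigh[i][rng.integers(1, neigh.shape[1])]  # skip column 0 (the sample itself)
        gap = rng.random()
        X_new.append(X_min[i] + gap * (X_min[j] - X_min[i]))
        y_new.append(y[minority_idx[i]])  # simplification: copy the reference sample's labels
    return np.vstack(X_new), np.vstack(y_new)
```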
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While having accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component in medical diagnosis, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
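The feature-level fusion described above can be sketched as a two-branch network whose pooled backbone features are concatenated before a softmax head; the layer sizes, pooling choice, and dense head below are illustrative assumptions, not the authors' exact configuration.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

def build_fusion_classifier(num_classes, input_shape=(224, 224, 3)):
    """Two-branch feature-fusion sketch: deep features from VGG16 and ResNet50
    are globally pooled, concatenated, and classified with a softmax head."""
    inp = layers.Input(shape=input_shape)
    vgg = VGG16(include_top=False, weights="imagenet")
    res = ResNet50(include_top=False, weights="imagenet")
    f1 = layers.GlobalAveragePooling2D()(vgg(inp))
    f2 = layers.GlobalAveragePooling2D()(res(inp))
    fused = layers.Concatenate()([f1, f2])            # feature-level fusion
    x = layers.Dense(256, activation="relu")(fused)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inp, out)
```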
Recently, computation offloading has become an effective method for overcoming the resource constraints of a mobile device (MD) running computation-intensive mobile applications by offloading delay-sensitive application tasks to a remote cloud-based data center. Smart cities benefit from offloading to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and many access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, which enables an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making. Finally, the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperformed the existing models in reducing client overhead in MEC systems, with a maximum reward of 0.8967.
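The objective described above can be written, in a generic form, as an energy-minimisation problem constrained by per-task latency budgets and the edge server's resource capacity. The notation below is an illustrative assumption (the paper's exact system model is not given in the abstract), with $x_{n,m}\in\{0,1\}$ the offloading decision and $f_{n,m}$ the computing resource allocated to task $m$ of MD $n$:

$$\min_{\{x_{n,m},\,f_{n,m}\}}\;\sum_{n=1}^{N}\sum_{m=1}^{M} E_{n,m}(x_{n,m},f_{n,m})\quad\text{s.t.}\quad T_{n,m}(x_{n,m},f_{n,m})\le T_{n,m}^{\max},\qquad \sum_{n,m} x_{n,m}\,f_{n,m}\le F_{\mathrm{MEC}},$$

where $E_{n,m}$ and $T_{n,m}$ are the energy consumption and completion time of the task under the chosen decision, and $F_{\mathrm{MEC}}$ is the total computing capacity of the MEC server.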
Wireless Sensor Networks (WSN) play a vital role in several real-time applications ranging from military to civilian. Despite the benefits of WSN, energy efficiency remains a major challenge in WSN, which necessitates proper load balancing amongst the clusters and serving a wider monitoring region. The clustering technique for WSN has several benefits: lower delay, higher energy efficiency, and collision avoidance. But clustering protocols have several challenges. In a large-scale network, cluster-based protocols mainly adopt multi-hop routing to save energy, leading to hot spot problems. A hot spot problem arises when a cluster node nearer to the base station (BS) tends to drain its energy much more quickly than other nodes because it must perform more transmissions. This article introduces a Jumping Spider Optimization Based Unequal Clustering Protocol for Mitigating Hotspot Problems (JSOUCP-MHP) in WSN. The JSO algorithm is inspired by the natural characteristics of spiders and mathematically models their hunting mechanism, including search, persecution, and jumping skills to attack prey. The presented JSOUCP-MHP technique mainly resolves the hot spot issue to maximize the network lifespan. To attain this, the JSOUCP-MHP technique elects a proper set of cluster heads (CHs) using average residual energy (RE). In addition, the JSOUCP-MHP technique determines the cluster sizes based on two measures, i.e., RE and distance to BS (DBS), showing the novelty of the work. The proposed JSOUCP-MHP technique is examined under several experiments to ensure its supremacy. The comparison study shows the significance of the JSOUCP-MHP technique over other models.
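A common way to realise unequal cluster sizing of this kind is a competition radius that shrinks for CHs closer to the BS and for CHs with little residual energy; the formula and constants below are an illustrative assumption in that spirit, not the exact rule used by JSOUCP-MHP.

```python
def competition_radius(d_to_bs, d_min, d_max, residual_energy, e_max,
                       r_max=90.0, c=0.5, w=0.3):
    """Unequal-clustering radius sketch: clusters nearer the BS get a smaller
    radius (they relay more inter-cluster traffic), and low residual energy
    shrinks the radius further. r_max, c and w are illustrative constants."""
    dist_term = (d_max - d_to_bs) / max(d_max - d_min, 1e-9)   # ~1 near the BS, ~0 far away
    energy_term = residual_energy / max(e_max, 1e-9)           # fraction of energy left
    return r_max * (1.0 - c * dist_term) * ((1.0 - w) + w * energy_term)
```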
In recent times, sixth generation (6G) communication technologies have become a hot research topic because of the maximum throughput and low-delay services they offer mobile users. 6G encompasses several heterogeneous resource and communication standards to ensure incessant availability of service. At the same time, the development of 6G enables Unmanned Aerial Vehicles (UAVs) to offer cost- and time-efficient solutions to several applications like healthcare, surveillance, disaster management, etc. In UAV networks, energy efficiency and data collection are considered the major processes for high-quality network communication. But these procedures are found to be challenging because of maximum mobility, unstable links, dynamic topology, and energy-restricted UAVs. These issues are solved by the use of artificial intelligence (AI) and energy-efficient clustering techniques for UAVs in the 6G environment. With this inspiration, this work designs an artificial intelligence enabled cooperative cluster-based data collection technique for unmanned aerial vehicles (AECCDC-UAV) in the 6G environment. The proposed AECCDC-UAV technique aims to divide the UAV network into different clusters and allocate a cluster head (CH) to each cluster in such a way that the energy consumption (ECM) gets minimized. The presented AECCDC-UAV technique involves a quasi-oppositional shuffled shepherd optimization (QOSSO) algorithm for selecting the CHs and constructing clusters. The QOSSO algorithm derives a fitness function involving three input parameters: residual energy of UAVs, distance to neighboring UAVs, and degree of UAVs. The performance of the AECCDC-UAV technique is validated in many aspects, and the obtained experimental values demonstrate promising results over recent state-of-the-art methods.
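The fitness function is described only through its three inputs, so the sketch below shows one plausible weighted-sum form: favour candidate CHs with high residual energy, short distances to neighbouring UAVs, and a high node degree. The normalisation and weights are assumptions for illustration.

```python
def ch_fitness(residual_energy, avg_neighbor_dist, node_degree,
               e_max, d_max, degree_max, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum fitness sketch for CH selection (higher is better)."""
    f_energy = residual_energy / e_max              # more remaining energy is better
    f_distance = 1.0 - avg_neighbor_dist / d_max    # closer neighbours are better
    f_degree = node_degree / degree_max             # better-connected UAVs are better
    w1, w2, w3 = weights
    return w1 * f_energy + w2 * f_distance + w3 * f_degree
```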
In recent times, pattern recognition of communication modulation signals has gained significant attention in several application areas such as the military, the civilian field, etc. It becomes essential to design a safe and robust feature extraction (FE) approach to efficiently identify the various signal modulation types on a complex platform. Several works have derived new techniques to extract the feature parameters, namely instant features, fractal features, and so on. In addition, machine learning (ML) and deep learning (DL) approaches can be commonly employed for modulation signal classification. In this view, this paper designs pattern recognition of communication signal modulation using fractal features with deep neural networks (CSM-FFDNN). The goal of the CSM-FFDNN model is to classify the different types of digitally modulated signals. The proposed CSM-FFDNN model involves two major processes, namely FE and classification. The proposed model uses the Sevcik Fractal Dimension (SFD) technique to extract the fractal features from the digitally modulated signals. Besides, the extracted features are fed into the DNN model for modulation signal classification. To improve the classification performance of the DNN model, a barnacles mating optimizer (BMO) is used for the hyperparameter tuning of the DNN model in such a way that the DNN performance can be raised. A wide range of simulations takes place to highlight the enhanced performance of the CSM-FFDNN model. The experimental outcomes pointed out the superior recognition rate of the CSM-FFDNN model over recent state-of-the-art methods in terms of different evaluation parameters.
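The Sevcik fractal dimension is a standard waveform-complexity measure, so the FE step can be sketched directly; the exact windowing and any additional features used by CSM-FFDNN are not stated in the abstract and are omitted here.

```python
import numpy as np

def sevcik_fractal_dimension(signal):
    """Sevcik fractal dimension of a 1-D signal: normalise the waveform into the
    unit square, measure its curve length L, and map it to a dimension estimate
    via SFD = 1 + ln(L) / ln(2 * (N - 1))."""
    y = np.asarray(signal, dtype=float)
    n = len(y)
    x = np.linspace(0.0, 1.0, n)
    span = np.ptp(y)
    y_norm = (y - y.min()) / span if span > 0 else np.zeros(n)
    length = np.sum(np.hypot(np.diff(x), np.diff(y_norm)))
    return 1.0 + np.log(length) / np.log(2.0 * (n - 1))
```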
Worldwide, cotton is the most profitable cash crop. Each year, the production of this crop suffers because of several diseases. At an early stage, computerized methods are used for disease detection that may reduce the loss in cotton production. Although several methods have been proposed for the detection of cotton diseases, limitations remain because of low-quality images, size, shape, variations in orientation, and complex backgrounds. Due to these factors, there is a need for novel methods of feature extraction/selection for accurate cotton disease classification. Therefore, in this research, an optimized features fusion-based model is proposed, in which two pre-trained architectures called EfficientNet-b0 and Inception-v3 are utilized to extract features; each model extracts a feature vector of length N×1000. After that, the extracted features are serially concatenated, yielding a feature vector of length N×2000. The most prominent features are selected using the Emperor Penguin Optimizer (EPO) method. The method is evaluated on two publicly available datasets, Kaggle cotton disease dataset-I and Kaggle cotton-leaf-infection-II. The EPO method returns feature vectors of length 1×755 and 1×824 using dataset-I and dataset-II, respectively. The classification is performed using 5-, 7-, and 10-fold cross-validation. The Quadratic Discriminant Analysis (QDA) classifier provides an accuracy of 98.9% on 5-fold, 98.96% on 7-fold, and 99.07% on 10-fold cross-validation using Kaggle cotton disease dataset-I, while the Ensemble Subspace K Nearest Neighbor (KNN) provides 99.16% on 5-fold, 98.99% on 7-fold, and 99.27% on 10-fold cross-validation using the Kaggle cotton-leaf-infection dataset-II.
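Metaheuristic feature selection of this kind typically scores a candidate binary feature mask by a weighted combination of classification error and subset size; the sketch below shows such a wrapper fitness. The weighting, classifier, and cross-validation settings are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def feature_subset_fitness(mask, X, y, clf, alpha=0.99, cv=5):
    """Fitness of a binary feature mask for wrapper-style selection (lower is
    better): a weighted sum of the cross-validated error rate and the fraction
    of features kept, the kind of objective an optimizer such as EPO can minimise."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0                                   # an empty subset is the worst case
    error = 1.0 - cross_val_score(clf, X[:, selected], y, cv=cv).mean()
    return alpha * error + (1.0 - alpha) * selected.size / X.shape[1]
```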
Melanoma is a skin disease with a high mortality rate, while early diagnosis of the disease can increase the survival chances of patients. It is challenging to automatically diagnose melanoma from dermoscopic skin samples. A Computer-Aided Diagnostic (CAD) tool saves time and effort in diagnosing melanoma compared to existing medical approaches. Against this background, there is a need to design an automated classification model for melanoma that can utilize deep and rich feature datasets of an image for disease classification. The current study develops an Intelligent Arithmetic Optimization with Ensemble Deep Transfer Learning Based Melanoma Classification (IAOEDTT-MC) model. The proposed IAOEDTT-MC model focuses on the identification and classification of melanoma from dermoscopic images. To accomplish this, the IAOEDTT-MC model applies image preprocessing at the initial stage, in which the Gabor Filtering (GF) technique is utilized. In addition, the U-Net segmentation approach is employed to segment the lesion regions in dermoscopic images. Besides, an ensemble of DL models including ResNet50 and ElasticNet models is applied in this study. Moreover, the AO algorithm with the Gated Recurrent Unit (GRU) method is utilized for the identification and classification of melanoma. The proposed IAOEDTT-MC method was experimentally validated with the help of benchmark datasets, and the proposed model attained a maximum accuracy of 92.09% on the ISIC 2017 dataset.
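The Gabor-filtering pre-processing step can be sketched with a small orientation bank, keeping the per-pixel maximum response; the kernel parameters below are assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def gabor_filter_bank(image, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=4):
    """Illustrative Gabor preprocessing: filter the (grayscale) image with a bank
    of orientations and keep the per-pixel maximum response."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    responses = []
    for theta in np.arange(0, np.pi, np.pi / n_orient):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0,
                                    ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.max(np.stack(responses), axis=0)
```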
Gait is a biological characteristic that defines the manner in which people walk. Walking is among the most significant activities for maintaining our day-to-day life and physical condition. Surface electromyography (sEMG) is a weak bioelectric signal that portrays the functional state of the human muscles and nervous system to some extent. Gait classifiers based on sEMG signals are widely utilized in analysing muscle diseases and as a guide for recovery treatment. Several approaches have been established in the literature for gait recognition utilizing conventional and deep learning (DL) approaches. This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification (EAAA-HDLGR) technique on sEMG signals. The EAAA-HDLGR technique extracts the time domain (TD) and frequency domain (FD) features from the sEMG signals, which are then fused. In addition, the EAAA-HDLGR technique exploits the hybrid deep learning (HDL) model for gait recognition. At last, an EAAA-based hyperparameter optimizer is applied for the HDL model, which is mainly derived from the quasi-oppositional based learning (QOBL) concept, showing the novelty of the work. A brief classifier outcome of the EAAA-HDLGR technique is examined under diverse aspects, and the results indicate the improvement of the EAAA-HDLGR technique. The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA in gait recognition.
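The TD/FD feature fusion can be sketched with a handful of features commonly used for sEMG; the exact feature set and fusion rule of EAAA-HDLGR are not listed in the abstract, so the selection below is an assumption.

```python
import numpy as np
from scipy.signal import welch

def semg_td_fd_features(x, fs=1000.0):
    """Common time-domain and frequency-domain sEMG features fused into one vector."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))                                 # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))                           # root mean square
    wl = np.sum(np.abs(np.diff(x)))                          # waveform length
    zc = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))     # zero crossings
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mnf = np.sum(f * pxx) / np.sum(pxx)                      # mean frequency
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2.0)]             # median frequency
    return np.array([mav, rms, wl, zc, mnf, mdf])
```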
Skin cancer is one of the most dangerous cancers. Because of the high melanoma death rate, skin cancer is divided into non-melanoma and melanoma. The dermatologist finds it difficult to identify skin cancer from dermoscopy images of skin lesions. Sometimes, pathology and biopsy examinations are required for cancer diagnosis. Earlier studies have formulated computer-based systems for detecting skin cancer from skin lesion images. With recent advancements in hardware and software technologies, deep learning (DL) has developed as a potential technique for feature learning. Therefore, this study develops a new sand cat swarm optimization with a deep transfer learning method for skin cancer detection and classification (SCSODTL-SCC) technique. The major intention of the SCSODTL-SCC model lies in the recognition and classification of different types of skin cancer on dermoscopic images. Primarily, Dull Razor approach-related hair removal and median filtering-based noise elimination are performed. Moreover, the U2Net segmentation approach is employed for detecting infected lesion regions in dermoscopic images. Furthermore, the NASNetLarge-based feature extractor with a hybrid deep belief network (DBN) model is used for classification. Finally, the classification performance can be improved by the SCSO algorithm through the hyperparameter tuning process, showing the novelty of the work. The simulation values of the SCSODTL-SCC model are scrutinized on the benchmark skin lesion dataset. The comparative results assured that the SCSODTL-SCC model had shown maximum skin cancer classification performance in different measures.
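The pre-processing stage (Dull Razor-style hair removal followed by median filtering) can be sketched with standard morphology and inpainting; thresholds and kernel sizes below are assumptions, not the paper's settings.

```python
import cv2

def remove_hair_and_denoise(image, kernel_size=17, inpaint_radius=3, median_ksize=5):
    """Dull Razor-style preprocessing sketch: detect dark hair shafts with a
    black-hat morphological filter, inpaint them, then median-filter the result."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    hairless = cv2.inpaint(image, mask, inpaint_radius, cv2.INPAINT_TELEA)
    return cv2.medianBlur(hairless, median_ksize)
```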
Precision agriculture includes the optimum and adequate use of resources depending on several variables that govern crop yield. Precision agriculture offers a novel solution utilizing a systematic technique for current agricultural problems like balancing production and environmental concerns. Weed control has become one of the significant problems in the agricultural sector. In traditional weed control, the entire field is treated uniformly by spraying the soil, a single herbicide dose, weed, and crops in the same way. For more precise farming, robots could accomplish targeted weed treatment if they could specifically find the location of the dispensable plant and identify the weed type. This may lessen, by a large margin, the utilization of agrochemicals on agricultural fields and favour sustainable agriculture. This study presents a Harris Hawks Optimizer with Graph Convolutional Network based Weed Detection (HHOGCN-WD) technique for precision agriculture. The HHOGCN-WD technique mainly focuses on identifying and classifying weeds for precision agriculture. For image pre-processing, the HHOGCN-WD model utilizes a bilateral normal filter (BNF) for noise removal. In addition, a coupled convolutional neural network (CCNet) model is utilized to derive a set of feature vectors. To detect and classify weeds, the GCN model is utilized with the HHO algorithm as a hyperparameter optimizer to improve the detection performance. The experimental results of the HHOGCN-WD technique are investigated under the benchmark dataset. The results indicate the promising performance of the presented HHOGCN-WD model over other recent approaches, with an increased accuracy of 99.13%.
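The classification backbone is a GCN, whose core propagation rule is standard; the sketch below shows one layer of that rule in plain NumPy, with the surrounding graph construction and the HHO-tuned hyperparameters left out as paper-specific details.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """Single graph-convolution layer, H' = act(D^-1/2 (A + I) D^-1/2 H W):
    A is the adjacency matrix, H the node features, W a learned weight matrix."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalisation
    return activation(A_norm @ H @ W)
```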
Recently, COVID-19 has posed a challenging threat to researchers, scientists, healthcare professionals, and administrations over the globe, from its diagnosis to its treatment. Researchers are making persistent efforts to derive probable solutions for managing the pandemic in their areas. One of the widespread and effective ways to detect COVID-19 is to utilize radiological images comprising X-rays and computed tomography (CT) scans. At the same time, recent advances in machine learning (ML) and deep learning (DL) models show promising results in medical imaging. Particularly, the convolutional neural network (CNN) model can be applied to identifying abnormalities on chest radiographs. During the COVID-19 epidemic, much research has been directed at processing such data with DL techniques, particularly CNN. This study develops an improved fruit fly optimization with a deep learning-enabled fusion (IFFO-DLEF) model for COVID-19 detection and classification. The major intention of the IFFO-DLEF model is to investigate the presence or absence of COVID-19. To do so, the presented IFFO-DLEF model applies image pre-processing at the initial stage. In addition, an ensemble of three DL models, namely DenseNet169, EfficientNet, and ResNet50, is used for feature extraction. Moreover, the IFFO algorithm with a multilayer perceptron (MLP) classification model is utilized to identify and classify COVID-19. The parameter optimization of the MLP approach utilizing the IFFO technique helps in accomplishing enhanced classification performance. The experimental result analysis of the IFFO-DLEF model carried out on the CXR image database portrayed the better performance of the presented IFFO-DLEF model over recent approaches.
Speech emotion recognition (SER) is an important research problem in human-computer interaction systems. The representation and extraction of features are significant challenges in SER systems. Despite the promising results of recent studies, they generally do not leverage progressive fusion techniques for effective feature representation and increasing receptive fields. To mitigate this problem, this article proposes DeepCNN, which is a fusion of spectral and temporal features of emotional speech by parallelising convolutional neural networks (CNNs) and a convolution layer-based transformer. Two parallel CNNs are applied to extract the spectral feature (2D-CNN) and temporal feature (1D-CNN) representations. A 2D-convolution layer-based transformer module extracts spectro-temporal features and concatenates them with features from the parallel CNNs. The learnt low-level concatenated features are then applied to a deep framework of convolutional blocks, which retrieves a high-level feature representation and subsequently categorises the emotional states using an attention gated recurrent unit and classification layer. This fusion technique results in a deeper hierarchical feature representation at a lower computational cost while simultaneously expanding the filter depth and reducing the feature map. The Berlin Database of Emotional Speech (EMO-BD) and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets are used in experiments to recognise distinct speech emotions. With efficient spectral and temporal feature representation, the proposed SER model achieves 94.2% accuracy on the EMO-BD dataset and 81.1% accuracy on the IEMOCAP dataset, respectively. The proposed SER system, DeepCNN, outperforms the baseline SER systems in terms of emotion recognition accuracy on the EMO-BD and IEMOCAP datasets.
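The parallel spectral/temporal front-end can be sketched as two branches whose outputs are concatenated before classification; the layer sizes and the simplified recurrent head below are assumptions, and the convolution layer-based transformer branch of DeepCNN is omitted for brevity.

```python
from tensorflow.keras import layers, Model

def build_parallel_cnn_ser(n_mels=128, n_frames=256, n_emotions=4):
    """Sketch of a parallel spectral/temporal SER front-end: a 2D-CNN over the
    mel-spectrogram and a 1D-CNN over frame-wise features, concatenated and
    classified with a plain GRU + dense head (a simplification of DeepCNN)."""
    spec_in = layers.Input(shape=(n_mels, n_frames, 1))       # spectral branch
    x2 = layers.Conv2D(32, 3, activation="relu", padding="same")(spec_in)
    x2 = layers.MaxPooling2D()(x2)
    x2 = layers.Conv2D(64, 3, activation="relu", padding="same")(x2)
    x2 = layers.GlobalAveragePooling2D()(x2)

    temp_in = layers.Input(shape=(n_frames, n_mels))          # temporal branch
    x1 = layers.Conv1D(64, 5, activation="relu", padding="same")(temp_in)
    x1 = layers.GRU(64)(x1)

    fused = layers.Concatenate()([x2, x1])                    # spectro-temporal fusion
    out = layers.Dense(n_emotions, activation="softmax")(fused)
    return Model([spec_in, temp_in], out)
```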
Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. The paper examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them and enhance security and privacy. The paper also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms. The study also discusses the challenges and future research directions in the field of B-IoT.
Wireless sensor networks (WSN) comprise a set of numerous cheap sensors placed in the target region. A primary function of the WSN is to avail the location details of an event occurrence or a node. A major challenge in WSN is node localization, which plays an important role in data gathering applications. Since GPS is expensive and inaccurate in indoor regions, effective node localization techniques are needed. The major intention of localization is to determine the place of a node in a short period with minimum computation. To achieve this, bio-inspired algorithms are used, and node localization is treated as an optimization problem in a multidimensional space. This paper introduces a new Sparrow Search Algorithm with Doppler Effect (SSA-DE) for node localization in wireless networks. The SSA is generally stimulated by the group wisdom, foraging, and anti-predation behaviors of sparrows. Besides, the Doppler Effect is incorporated into the SSA to further improve the node localization performance. In addition, the SSA-DE model defines the position of a node in an iterative manner using Euclidean distance as the fitness function. The presented SSA-DE model is implemented in MATLAB R2014. An extensive set of experiments is carried out, and the results are examined under a varying number of anchor nodes and ranging errors. The attained experimental outcomes ensured the superior efficiency of the SSA-DE technique over the existing techniques.
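Range-based localization metaheuristics of this kind typically minimise the discrepancy between estimated and measured anchor distances; the sketch below shows such a Euclidean-distance fitness for scoring a candidate position at each iteration (the Doppler-based refinement of SSA-DE is not reproduced here).

```python
import numpy as np

def localization_fitness(position, anchors, measured_dists):
    """Mean squared difference between the candidate node position's Euclidean
    distances to the anchor nodes and the measured (noisy) ranges."""
    position = np.asarray(position, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    est = np.linalg.norm(anchors - position, axis=1)
    return np.mean((est - np.asarray(measured_dists, dtype=float)) ** 2)
```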
Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. In addition, the application areas relating to IoT-enabled models are resource-limited and necessitate crisp responses, low latencies, and high bandwidth, which are beyond their abilities. Cloud computing (CC) is treated as a resource-rich solution to the above-mentioned challenges, but the intrinsic high latency of CC makes it nonviable. The longer latency degrades the outcome of IoT-based smart systems. CC is an emergent dispersed, inexpensive computing pattern with a massive assembly of heterogeneous autonomous systems. The effective use of task scheduling minimizes the energy utilization of the cloud infrastructure and raises the income of service providers by minimizing the processing time of user jobs. With this motivation, this paper presents an intelligent Chaotic Artificial Immune Optimization Algorithm for Task Scheduling (CAIOA-RS) in an IoT-enabled cloud environment. The proposed CAIOA-RS algorithm solves the issue of resource allocation in the IoT-enabled cloud environment. It also satisfies the makespan by carrying out the optimum task scheduling process with distinct strategies for incoming tasks. The design of the CAIOA-RS technique incorporates the concept of chaotic maps into the conventional AIOA to enhance its performance. A series of experiments were carried out on the CloudSim platform. The simulation results demonstrate that the CAIOA-RS technique outperforms the original version, as well as other heuristics and metaheuristics.
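Two ingredients of this approach are easy to illustrate: a logistic chaotic map for diversifying the initial population, and the makespan objective used to compare schedules. Both sketches below are generic assumptions about these components, not CAIOA-RS's exact implementation.

```python
import numpy as np

def logistic_chaotic_sequence(n, x0=0.7, mu=4.0):
    """Logistic map x_{t+1} = mu * x_t * (1 - x_t), the kind of chaotic sequence
    used to seed or perturb a metaheuristic population."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def makespan(assignment, task_lengths, vm_speeds):
    """Makespan of a task-to-VM assignment: the largest per-VM completion time.
    assignment[i] is the VM index chosen for task i (a simple scheduling model)."""
    finish = np.zeros(len(vm_speeds))
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return finish.max()
```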
文摘The growing global requirement for food and the need for sustainable farming in an era of a changing climate and scarce resources have inspired substantial crop yield prediction research.Deep learning(DL)and machine learning(ML)models effectively deal with such challenges.This research paper comprehensively analyses recent advancements in crop yield prediction from January 2016 to March 2024.In addition,it analyses the effectiveness of various input parameters considered in crop yield prediction models.We conducted an in-depth search and gathered studies that employed crop modeling and AI-based methods to predict crop yield.The total number of articles reviewed for crop yield prediction using ML,meta-modeling(Crop models coupled with ML/DL),and DL-based prediction models and input parameter selection is 125.We conduct the research by setting up five objectives for this research and discussing them after analyzing the selected research papers.Each study is assessed based on the crop type,input parameters employed for prediction,the modeling techniques adopted,and the evaluation metrics used for estimatingmodel performance.We also discuss the ethical and social impacts of AI on agriculture.However,various approaches presented in the scientific literature have delivered impressive predictions,they are complicateddue to intricate,multifactorial influences oncropgrowthand theneed for accuratedata-driven models.Therefore,thorough research is required to deal with challenges in predicting agricultural output.
基金the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A4A1031509).
文摘Tourism is a popular activity that allows individuals to escape their daily routines and explore new destinations for various reasons,including leisure,pleasure,or business.A recent study has proposed a unique mathematical concept called a q−Rung orthopair fuzzy hypersoft set(q−ROFHS)to enhance the formal representation of human thought processes and evaluate tourism carrying capacity.This approach can capture the imprecision and ambiguity often present in human perception.With the advanced mathematical tools in this field,the study has also incorporated the Einstein aggregation operator and score function into the q−ROFHS values to supportmultiattribute decision-making algorithms.By implementing this technique,effective plans can be developed for social and economic development while avoiding detrimental effects such as overcrowding or environmental damage caused by tourism.A case study of selected tourism carrying capacity will demonstrate the proposed methodology.
基金supported by the National Research Foundation of Korea(NRF)Grant funded by the Korea government(MSIT)(No.RS-2023-00218176)Korea Institute for Advancement of Technology(KIAT)Grant funded by the Korea government(MOTIE)(P0012724)The Competency Development Program for Industry Specialist)and the Soonchunhyang University Research Fund.
文摘In this article,multiple attribute decision-making problems are solved using the vague normal set(VNS).It is possible to generalize the vague set(VS)and q-rung fuzzy set(FS)into the q-rung vague set(VS).A log q-rung normal vague weighted averaging(log q-rung NVWA),a log q-rung normal vague weighted geometric(log q-rung NVWG),a log generalized q-rung normal vague weighted averaging(log Gq-rung NVWA),and a log generalized q-rungnormal vagueweightedgeometric(logGq-rungNVWG)operator are discussed in this article.Adescription is provided of the scoring function,accuracy function and operational laws of the log q-rung VS.The algorithms underlying these functions are also described.A numerical example is provided to extend the Euclidean distance and the Humming distance.Additionally,idempotency,boundedness,commutativity,and monotonicity of the log q-rung VS are examined as they facilitate recognizing the optimal alternative more quickly and help clarify conceptualization.We chose five anemia patients with four types of symptoms including seizures,emotional shock or hysteria,brain cause,and high fever,who had either retrograde amnesia,anterograde amnesia,transient global amnesia,post-traumatic amnesia,or infantile amnesia.Natural numbers q are used to express the results of the models.To demonstrate the effectiveness and accuracy of the models we are investigating,we compare several existing models with those that have been developed.
基金partly supported by the Technology Development Program of MSS(No.S3033853)by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A4A1031509).
文摘A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning.The main objective of this work is to create a novel framework for learning and classifying imbalancedmulti-label data.This work proposes a framework of two phases.The imbalanced distribution of themulti-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in phase 1.Later,an adaptive weighted l21 norm regularized(Elastic-net)multilabel logistic regression is used to predict unseen samples in phase 2.The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels in contrast to conventional MLSMOTE.The minority labels in these samples are called difficult minority labels and are more prone to penalize classification performance.The concurrentmeasure is considered borderline,and labels associated with samples are regarded as borderline labels in the decision boundary.In phase II,a novel adaptive l21 norm regularized weighted multi-label logistic regression is used to handle balanced data with different weighted synthetic samples.Experimentation on various benchmark datasets shows the outperformance of the proposed method and its powerful predictive performances over existing conventional state-of-the-art multi-label methods.
基金Ministry of Education,Youth and Sports of the Chezk Republic,Grant/Award Numbers:SP2023/039,SP2023/042the European Union under the REFRESH,Grant/Award Number:CZ.10.03.01/00/22_003/0000048。
文摘Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While having accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component in medical diagnosis, and it requires precise, efficient, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training models. VGG16, ResNet50, and convolutional deep belief networks networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
基金supported by the Technology Development Program of MSS(No.S3033853).
文摘Recently,computation offloading has become an effective method for overcoming the constraint of a mobile device(MD)using computationintensivemobile and offloading delay-sensitive application tasks to the remote cloud-based data center.Smart city benefitted from offloading to edge point.Consider a mobile edge computing(MEC)network in multiple regions.They comprise N MDs and many access points,in which everyMDhasM independent real-time tasks.This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization(TORA-DLSGO)algorithm.The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server,which enables an optimum offloading decision to minimize the system cost.In addition,an objective function is derived based on minimizing energy consumption subject to the latency requirements and restricted resources.The TORA-DLSGO technique uses the deep belief network(DBN)model for optimum offloading decision-making.Finally,the SGO algorithm is used for the parameter tuning of the DBN model.The simulation results exemplify that the TORA-DLSGO technique outperformed the existing model in reducing client overhead in the MEC systems with a maximum reward of 0.8967.
基金This research was supported by the MSIT(Ministry of Science and ICT)Korea,under the ICAN(ICT Challenge and Advanced Network of HRD)program(IITP-2022-2020-0-01832)supervised by the IITP(Institute of Information&Communications Technology Planning&Evaluation)and the Korea Technology and Information Promotion Agency(TIPA)for SMEs grant funded by the Korea government(Ministry of SMEs and Startups)(No.S3271954)and the Soonchunhyang University Research Fund。
文摘Wireless Sensor Networks(WSN)play a vital role in several real-time applications ranging from military to civilian.Despite the benefits of WSN,energy efficiency becomes a major part of the challenging issue in WSN,which necessitate proper load balancing amongst the clusters and serves a wider monitoring region.The clustering technique for WSN has several benefits:lower delay,higher energy efficiency,and collision avoidance.But clustering protocol has several challenges.In a large-scale network,cluster-based protocols mainly adapt multi-hop routing to save energy,leading to hot spot problems.A hot spot problem becomes a problem where a cluster node nearer to the base station(BS)tends to drain the energy much quicker than other nodes because of the need to implement more transmission.This article introduces a Jumping Spider Optimization Based Unequal Clustering Protocol for Mitigating Hotspot Problems(JSOUCP-MHP)in WSN.The JSO algorithm is stimulated by the characteristics of spiders naturally and mathematically modelled the hunting mechanism such as search,persecution,and jumping skills to attack prey.The presented JSOUCPMHP technique mainly resolves the hot spot issue for maximizing the network lifespan.The JSOUCP-MHP technique elects a proper set of cluster heads(CHs)using average residual energy(RE)to attain this.In addition,the JSOUCP-MHP technique determines the cluster sizes based on two measures,i.e.,RE and distance to BS(DBS),showing the novelty of the work.The proposed JSOUCP-MHP technique is examined under several experiments to ensure its supremacy.The comparison study shows the significance of the JSOUCPMHP technique over other models.
基金This work was supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1F1A1063319).
文摘In recent times,sixth generation(6G)communication technologies have become a hot research topic because of maximum throughput and low delay services for mobile users.It encompasses several heterogeneous resource and communication standard in ensuring incessant availability of service.At the same time,the development of 6G enables the Unmanned Aerial Vehicles(UAVs)in offering cost and time-efficient solution to several applications like healthcare,surveillance,disaster management,etc.In UAV networks,energy efficiency and data collection are considered the major process for high quality network communication.But these procedures are found to be challenging because of maximum mobility,unstable links,dynamic topology,and energy restricted UAVs.These issues are solved by the use of artificial intelligence(AI)and energy efficient clustering techniques for UAVs in the 6G environment.With this inspiration,this work designs an artificial intelligence enabled cooperative cluster-based data collection technique for unmanned aerial vehicles(AECCDC-UAV)in 6G environment.The proposed AECCDC-UAV technique purposes for dividing the UAV network as to different clusters and allocate a cluster head(CH)to each cluster in such a way that the energy consumption(ECM)gets minimized.The presented AECCDC-UAV technique involves a quasi-oppositional shuffled shepherd optimization(QOSSO)algorithm for selecting the CHs and construct clusters.The QOSSO algorithm derives a fitness function involving three input parameters residual energy of UAVs,distance to neighboring UAVs,and degree of UAVs.The performance of the AECCDC-UAV technique is validated in many aspects and the obtained experimental values demonstration promising results over the recent state of art methods.
基金supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1F1A1063319).
文摘In recent times,pattern recognition of communication modulation signals has gained significant attention in several application areas such as military,civilian field,etc.It becomes essential to design a safe and robust feature extraction(FE)approach to efficiently identify the various signal modulation types in a complex platform.Several works have derived new techniques to extract the feature parameters namely instant features,fractal features,and so on.In addition,machine learning(ML)and deep learning(DL)approaches can be commonly employed for modulation signal classification.In this view,this paper designs pattern recognition of communication signal modulation using fractal features with deep neural networks(CSM-FFDNN).The goal of the CSM-FFDNN model is to classify the different types of digitally modulated signals.The proposed CSM-FFDNN model involves two major processes namely FE and classification.The proposed model uses Sevcik Fractal Dimension(SFD)technique to extract the fractal features from the digital modulated signals.Besides,the extracted features are fed into the DNN model for modulation signal classification.To improve the classification performance of the DNN model,a barnacles mating optimizer(BMO)is used for the hyperparameter tuning of the DNN model in such a way that the DNN performance can be raised.A wide range of simulations takes place to highlight the enhanced performance of the CSM-FFDNN model.The experimental outcomes pointed out the superior recognition rate of the CSM-FFDNN model over the recent state of art methods interms of different evaluation parameters.
基金supported by the Technology Development Program of MSS[No.S3033853]by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A4A1031509).
文摘Worldwide cotton is the most profitable cash crop.Each year the production of this crop suffers because of several diseases.At an early stage,computerized methods are used for disease detection that may reduce the loss in the production of cotton.Although several methods are proposed for the detection of cotton diseases,however,still there are limitations because of low-quality images,size,shape,variations in orientation,and complex background.Due to these factors,there is a need for novel methods for features extraction/selection for the accurate cotton disease classification.Therefore in this research,an optimized features fusion-based model is proposed,in which two pre-trained architectures called EfficientNet-b0 and Inception-v3 are utilized to extract features,each model extracts the feature vector of length N×1000.After that,the extracted features are serially concatenated having a feature vector lengthN×2000.Themost prominent features are selected usingEmperor PenguinOptimizer(EPO)method.The method is evaluated on two publically available datasets,such as Kaggle cotton disease dataset-I,and Kaggle cotton-leaf-infection-II.The EPO method returns the feature vector of length 1×755,and 1×824 using dataset-I,and dataset-II,respectively.The classification is performed using 5,7,and 10 folds cross-validation.The Quadratic Discriminant Analysis(QDA)classifier provides an accuracy of 98.9%on 5 fold,98.96%on 7 fold,and 99.07%on 10 fold using Kaggle cotton disease dataset-I while the Ensemble Subspace K Nearest Neighbor(KNN)provides 99.16%on 5 fold,98.99%on 7 fold,and 99.27%on 10 fold using Kaggle cotton-leaf-infection dataset-II.
基金supported by the MSIT (Ministry of Science and ICT),Korea,under the ICAN (ICT Challenge and Advanced Network of HRD)program (IITP-2022-2020-0-01832)supervised by the IITP (Institute of Information&Communications Technology Planning&Evaluation)and the Soonchunhyang University Research Fund.
文摘Melanoma is a skin disease with high mortality rate while earlydiagnoses of the disease can increase the survival chances of patients. Itis challenging to automatically diagnose melanoma from dermoscopic skinsamples. Computer-Aided Diagnostic (CAD) tool saves time and effort indiagnosing melanoma compared to existing medical approaches. In this background,there is a need exists to design an automated classification modelfor melanoma that can utilize deep and rich feature datasets of an imagefor disease classification. The current study develops an Intelligent ArithmeticOptimization with Ensemble Deep Transfer Learning Based MelanomaClassification (IAOEDTT-MC) model. The proposed IAOEDTT-MC modelfocuses on identification and classification of melanoma from dermoscopicimages. To accomplish this, IAOEDTT-MC model applies image preprocessingat the initial stage in which Gabor Filtering (GF) technique is utilized.In addition, U-Net segmentation approach is employed to segment the lesionregions in dermoscopic images. Besides, an ensemble of DL models includingResNet50 and ElasticNet models is applied in this study. Moreover, AOalgorithm with Gated Recurrent Unit (GRU) method is utilized for identificationand classification of melanoma. The proposed IAOEDTT-MC methodwas experimentally validated with the help of benchmark datasets and theproposed model attained maximum accuracy of 92.09% on ISIC 2017 dataset.
基金supported by a grant from the Korea Health Technology R&D Project through the KoreaHealth Industry Development Institute (KHIDI)funded by the Ministry of Health&Welfare,Republic of Korea (grant number:HI21C1831)the Soonchunhyang University Research Fund.
文摘Gait is a biological typical that defines the method by that people walk.Walking is the most significant performance which keeps our day-to-day life and physical condition.Surface electromyography(sEMG)is a weak bioelectric signal that portrays the functional state between the human muscles and nervous system to any extent.Gait classifiers dependent upon sEMG signals are extremely utilized in analysing muscle diseases and as a guide path for recovery treatment.Several approaches are established in the works for gait recognition utilizing conventional and deep learning(DL)approaches.This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification(EAAA-HDLGR)technique on sEMG signals.The EAAA-HDLGR technique extracts the time domain(TD)and frequency domain(FD)features from the sEMG signals and is fused.In addition,the EAAA-HDLGR technique exploits the hybrid deep learning(HDL)model for gait recognition.At last,an EAAA-based hyperparameter optimizer is applied for the HDL model,which is mainly derived from the quasi-oppositional based learning(QOBL)concept,showing the novelty of the work.A brief classifier outcome of the EAAA-HDLGR technique is examined under diverse aspects,and the results indicate improving the EAAA-HDLGR technique.The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA on gait recognition.
基金supported by the Technology Development Program of MSS [No.S3033853]by the National University Development Project by the Ministry of Education in 2022.
文摘Skin cancer is one of the most dangerous cancer.Because of the high melanoma death rate,skin cancer is divided into non-melanoma and melanoma.The dermatologist finds it difficult to identify skin cancer from dermoscopy images of skin lesions.Sometimes,pathology and biopsy examinations are required for cancer diagnosis.Earlier studies have formulated computer-based systems for detecting skin cancer from skin lesion images.With recent advancements in hardware and software technologies,deep learning(DL)has developed as a potential technique for feature learning.Therefore,this study develops a new sand cat swarm optimization with a deep transfer learning method for skin cancer detection and classification(SCSODTL-SCC)technique.The major intention of the SCSODTL-SCC model lies in the recognition and classification of different types of skin cancer on dermoscopic images.Primarily,Dull razor approach-related hair removal and median filtering-based noise elimination are performed.Moreover,the U2Net segmentation approach is employed for detecting infected lesion regions in dermoscopic images.Furthermore,the NASNetLarge-based feature extractor with a hybrid deep belief network(DBN)model is used for classification.Finally,the classification performance can be improved by the SCSO algorithm for the hyperparameter tuning process,showing the novelty of the work.The simulation values of the SCSODTL-SCC model are scrutinized on the benchmark skin lesion dataset.The comparative results assured that the SCSODTL-SCC model had shown maximum skin cancer classification performance in different measures.
基金This research was partly supported by the Technology Development Program of MSS[No.S3033853]by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2020R1I1A3069700).
文摘Precision agriculture includes the optimum and adequate use of resources depending on several variables that govern crop yield.Precision agriculture offers a novel solution utilizing a systematic technique for current agricultural problems like balancing production and environmental concerns.Weed control has become one of the significant problems in the agricultural sector.In traditional weed control,the entire field is treated uniformly by spraying the soil,a single herbicide dose,weed,and crops in the same way.For more precise farming,robots could accomplish targeted weed treatment if they could specifically find the location of the dispensable plant and identify the weed type.This may lessen by large margin utilization of agrochemicals on agricultural fields and favour sustainable agriculture.This study presents a Harris Hawks Optimizer with Graph Convolutional Network based Weed Detection(HHOGCN-WD)technique for Precision Agriculture.The HHOGCN-WD technique mainly focuses on identifying and classifying weeds for precision agriculture.For image pre-processing,the HHOGCN-WD model utilizes a bilateral normal filter(BNF)for noise removal.In addition,coupled convolutional neural network(CCNet)model is utilized to derive a set of feature vectors.To detect and classify weed,the GCN model is utilized with the HHO algorithm as a hyperparameter optimizer to improve the detection performance.The experimental results of the HHOGCN-WD technique are investigated under the benchmark dataset.The results indicate the promising performance of the presented HHOGCN-WD model over other recent approaches,with increased accuracy of 99.13%.
Funding: This research was partly supported by the Technology Development Program of MSS [No. S3033853] and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2020R1I1A3069700).
Abstract: Recently, COVID-19 has posed a challenging threat to researchers, scientists, healthcare professionals, and administrations across the globe, from its diagnosis to its treatment. Researchers are making persistent efforts to derive probable solutions for managing the pandemic in their areas. One of the widespread and effective ways to detect COVID-19 is to utilize radiological images comprising X-rays and computed tomography (CT) scans. At the same time, recent advances in machine learning (ML) and deep learning (DL) models show promising results in medical imaging. In particular, convolutional neural network (CNN) models can be applied to identify abnormalities on chest radiographs. During the COVID-19 epidemic, much research has been devoted to processing such data with DL techniques, particularly CNNs. This study develops an improved fruit fly optimization with a deep learning-enabled fusion (IFFO-DLEF) model for COVID-19 detection and classification. The major intention of the IFFO-DLEF model is to investigate the presence or absence of COVID-19. To do so, the presented IFFO-DLEF model applies image pre-processing at the initial stage. In addition, an ensemble of three DL models, namely DenseNet169, EfficientNet, and ResNet50, is used for feature extraction. Moreover, the IFFO algorithm with a multilayer perceptron (MLP) classification model is utilized to identify and classify COVID-19. The parameter optimization of the MLP approach using the IFFO technique helps accomplish enhanced classification performance. The experimental result analysis of the IFFO-DLEF model, carried out on a CXR image database, portrayed the better performance of the presented IFFO-DLEF model over recent approaches.
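The fusion idea described above can be sketched as follows: three pretrained backbones pool features from a chest X-ray, which are concatenated and passed to a small MLP head. This is a hedged sketch under assumed hyperparameters (input size, hidden width, binary output); it does not reproduce the IFFO tuning step.

```python
# Illustrative ensemble feature extraction + MLP classifier (not the authors' IFFO-DLEF).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet169, EfficientNetB0, ResNet50

inp = layers.Input(shape=(224, 224, 3))            # assumed CXR input size
backbones = [
    DenseNet169(include_top=False, pooling="avg", weights="imagenet"),
    EfficientNetB0(include_top=False, pooling="avg", weights="imagenet"),
    ResNet50(include_top=False, pooling="avg", weights="imagenet"),
]
features = layers.Concatenate()([b(inp) for b in backbones])   # fused DL features
x = layers.Dense(128, activation="relu")(features)             # MLP hidden layer
out = layers.Dense(1, activation="sigmoid")(x)                  # COVID-19 vs. normal
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

In the paper's setup, the MLP parameters would be tuned by the IFFO metaheuristic rather than fixed as above.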
Funding: Biotechnology and Biological Sciences Research Council, Grant/Award Number: RM32G0178B8; MRC, Grant/Award Number: MC_PC_17171; Royal Society, Grant/Award Number: RP202G0230; BHF, Grant/Award Number: AA/18/3/34220; Hope Foundation for Cancer Research, Grant/Award Number: RM60G0680; GCRF, Grant/Award Number: P202PF11; Sino-UK Industrial Fund, Grant/Award Number: RP202G0289; LIAS, Grant/Award Numbers: P202ED10, P202RE969; Data Science Enhancement Fund, Grant/Award Number: P202RE237; Fight for Sight, Grant/Award Number: 24NN201; Sino-UK Education Fund, Grant/Award Number: OP202006.
Abstract: Speech emotion recognition (SER) is an important research problem in human-computer interaction systems. The representation and extraction of features are significant challenges in SER systems. Despite the promising results of recent studies, they generally do not leverage progressive fusion techniques for effective feature representation and increased receptive fields. To mitigate this problem, this article proposes DeepCNN, which fuses spectral and temporal features of emotional speech by parallelising convolutional neural networks (CNNs) and a convolution layer-based transformer. Two parallel CNNs are applied to extract the spectral (2D-CNN) and temporal (1D-CNN) feature representations. A 2D-convolution layer-based transformer module extracts spectro-temporal features and concatenates them with the features from the parallel CNNs. The learnt low-level concatenated features are then applied to a deep framework of convolutional blocks, which retrieves a high-level feature representation and subsequently categorises the emotional states using an attention gated recurrent unit and a classification layer. This fusion technique results in a deeper hierarchical feature representation at a lower computational cost while simultaneously expanding the filter depth and reducing the feature map. The Berlin Database of Emotional Speech (EMO-BD) and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets are used in experiments to recognise distinct speech emotions. With efficient spectral and temporal feature representation, the proposed SER model achieves 94.2% accuracy on the EMO-BD dataset and 81.1% accuracy on the IEMOCAP dataset, respectively. The proposed SER system, DeepCNN, outperforms the baseline SER systems in terms of emotion recognition accuracy on the EMO-BD and IEMOCAP datasets.
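To make the parallel spectral/temporal fusion concrete, here is a minimal PyTorch sketch of the general idea: a 2D-CNN over a spectrogram and a 1D-CNN over the waveform, concatenated and summarised by a GRU head. All layer sizes, the number of emotion classes, and the input shapes are assumptions for illustration; this is not the authors' DeepCNN architecture (which also includes a convolution-based transformer and attention).

```python
# Parallel 2D/1D CNN feature fusion with a GRU classifier head (illustrative only).
import torch
import torch.nn as nn

class ParallelSERSketch(nn.Module):
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.spec_branch = nn.Sequential(          # 2D-CNN: spectral features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)))
        self.wave_branch = nn.Sequential(          # 1D-CNN: temporal features
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(64))
        self.gru = nn.GRU(input_size=16 * 8 * 8 + 16 * 64,
                          hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_emotions)

    def forward(self, spectrogram, waveform):
        s = self.spec_branch(spectrogram).flatten(1)     # (B, 16*8*8)
        t = self.wave_branch(waveform).flatten(1)        # (B, 16*64)
        fused = torch.cat([s, t], dim=1).unsqueeze(1)    # concatenate, add seq dim
        _, h = self.gru(fused)                           # GRU summarises fused features
        return self.fc(h.squeeze(0))                     # emotion logits

# logits = ParallelSERSketch()(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 16000))
```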
Abstract: Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. It examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them. The paper also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms. The study also discusses the challenges and future research directions in the field of B-IoT.
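The "immutable data storage" property mentioned in that survey can be illustrated with a toy hash-linked chain of sensor readings; this is a didactic sketch of the general principle, not a B-IoT stack, and the sensor names and values are invented placeholders.

```python
# Hash-linked storage of IoT readings: tampering with any block breaks verification.
import hashlib, json, time

def make_block(reading: dict, prev_hash: str) -> dict:
    body = {"reading": reading, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if block["hash"] != digest:
            return False                        # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                        # chain linkage was broken
    return True

chain = [make_block({"sensor": "temp-01", "value": 22.4}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "temp-01", "value": 22.7}, chain[-1]["hash"]))
assert verify(chain)      # editing an earlier reading would make verify() return False
```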
Funding: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialists) and the Soonchunhyang University Research Fund.
Abstract: Wireless sensor networks (WSNs) comprise a set of numerous cheap sensors placed in a target region. A primary function of a WSN is to provide the location details of event occurrences or of the nodes themselves. A major challenge in WSNs is node localization, which plays an important role in data-gathering applications. Since GPS is expensive and inaccurate in indoor regions, effective node localization techniques are needed. The major aim of localization is to determine the position of a node in a short period with minimum computation. To achieve this, bio-inspired algorithms are used, and node localization is treated as an optimization problem in a multidimensional space. This paper introduces a new Sparrow Search Algorithm with Doppler Effect (SSA-DE) for node localization in wireless networks. The SSA is generally stimulated by the group wisdom, foraging, and anti-predation behaviors of sparrows. Besides, the Doppler Effect is incorporated into the SSA to further improve the node localization performance. In addition, the SSA-DE model determines the position of a node in an iterative manner using the Euclidean distance as the fitness function. The presented SSA-DE model is implemented in MATLAB R2014. An extensive set of experiments is carried out, and the results are examined under a varying number of anchor nodes and ranging errors. The attained experimental outcomes confirm the superior efficiency of the SSA-DE technique over the existing techniques.
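The Euclidean-distance fitness function that such localization schemes minimise can be written down directly. The sketch below is a hedged illustration: the anchor layout, noise level, and the coarse grid search (standing in for the SSA-DE metaheuristic) are assumptions, not the paper's experimental setup.

```python
# Range-based localization objective: minimise the mismatch between Euclidean
# distances to anchors and the measured ranges.
import numpy as np

def localization_fitness(pos, anchors, measured_ranges):
    dists = np.linalg.norm(anchors - pos, axis=1)          # Euclidean distances to anchors
    return float(np.sum((dists - measured_ranges) ** 2))   # ranging-error fitness

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)

# Coarse grid search as a stand-in for the metaheuristic search step:
grid = [np.array([x, y]) for x in np.linspace(0, 10, 101) for y in np.linspace(0, 10, 101)]
best = min(grid, key=lambda p: localization_fitness(p, anchors, ranges))
print("estimated position:", best)
```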
Funding: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialists) and the Soonchunhyang University Research Fund.
Abstract: The Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. In addition, the application areas relating to IoT-enabled models are resource-limited and necessitate crisp responses, low latencies, and high bandwidth, which are beyond their abilities. Cloud computing (CC) is treated as a resource-rich solution to the above-mentioned challenges, but the intrinsic high latency of CC makes it nonviable; the longer latency degrades the outcome of IoT-based smart systems. CC is an emergent, dispersed, inexpensive computing pattern with a massive assembly of heterogeneous autonomous systems. The effective use of task scheduling minimizes the energy utilization of the cloud infrastructure and raises the income of service providers by minimizing the processing time of user jobs. With this motivation, this paper presents an intelligent Chaotic Artificial Immune Optimization Algorithm for Task Scheduling (CAIOA-RS) in an IoT-enabled cloud environment. The proposed CAIOA-RS algorithm solves the issue of resource allocation in the IoT-enabled cloud environment. It also satisfies the makespan requirement by carrying out the optimum task scheduling process with distinct strategies for incoming tasks. The design of the CAIOA-RS technique incorporates the concept of chaotic maps into the conventional AIOA to enhance its performance. A series of experiments were carried out on the CloudSim platform. The simulation results demonstrate that the CAIOA-RS technique outperforms the original version, as well as other heuristics and metaheuristics.
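Two of the ingredients named in that abstract, a chaotic map for generating candidate solutions and the makespan objective for a task-to-VM schedule, can be sketched briefly. The sketch below is an assumption-laden illustration (logistic map, toy task lengths and VM speeds), not the CAIOA-RS algorithm itself.

```python
# Logistic chaotic map for candidate generation and makespan evaluation of a schedule.
import numpy as np

def logistic_map(n: int, x0: float = 0.7, r: float = 4.0) -> np.ndarray:
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)                          # chaotic update in (0, 1)
        seq[i] = x
    return seq

def makespan(assignment: np.ndarray, task_len: np.ndarray, vm_speed: np.ndarray) -> float:
    finish = np.zeros(len(vm_speed))
    for task, vm in enumerate(assignment):
        finish[vm] += task_len[task] / vm_speed[vm]    # execution time on that VM
    return float(finish.max())                          # schedule length = busiest VM

task_len = np.array([400.0, 250.0, 900.0, 120.0, 600.0])   # toy task lengths (MI)
vm_speed = np.array([100.0, 250.0])                         # toy VM speeds (MIPS)
# Chaotic values mapped to VM indices give one candidate schedule to be evaluated:
candidate = (logistic_map(len(task_len)) * len(vm_speed)).astype(int) % len(vm_speed)
print("candidate:", candidate, "makespan:", makespan(candidate, task_len, vm_speed))
```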