Journal Articles
383 articles found
Computing of LQR Technique for Nonlinear System Using Local Approximation
1
Authors: Aamir Shahzad, Ali Altalbe. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 853-871 (19 pages)
The main idea behind the present research is to design a state-feedback controller for an underactuated nonlinear rotary inverted pendulum module by employing the linear quadratic regulator (LQR) technique using local approximation. The LQR is an excellent method for developing a controller for nonlinear systems. It provides optimal feedback that makes the closed-loop system robust and stable, rejecting external disturbances. A model-based optimal controller for a nonlinear system such as a rotary inverted pendulum had not previously been designed and implemented using the Newton-Euler and Lagrange methods together with local approximation. Therefore, applying LQR to an underactuated nonlinear system was vital for designing a stable controller. A mathematical model has been developed for the controller design by utilizing the Newton-Euler and Lagrange methods. The nonlinear model has been linearized around an equilibrium point. Linear and nonlinear models have been compared to find the range in which their behaviour is similar. The MATLAB LQR function and the system dynamics have been used to estimate the controller parameters. Simulink has been used for the performance evaluation of the designed controller. Linear and nonlinear models have been simulated along with the designed controller. Simulations have been performed for the designed controller over the linear and nonlinear systems under different conditions by varying the system variables. The results show that the system is stable and robust enough to act against external disturbances. The controller maintains the rotary inverted pendulum in an upright position and rejects disruptions, such as falling under gravitational force or any external disturbance, by adjusting the rotation of the horizontal link within a specific range in both linear and nonlinear environments. The controller has been practically designed and implemented. It is evident from the results that the controller is robust enough to reject disturbances within milliseconds and keeps the pendulum arm deflection angle at zero degrees.
Keywords: computing; rotary inverted pendulum (RIP); modeling and simulation; linear quadratic regulator (LQR); nonlinear system
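As a rough illustration of the gain computation this abstract describes (MATLAB's lqr applied to a model linearized about the upright equilibrium), the following minimal Python sketch solves the continuous-time Riccati equation with SciPy. The A, B, Q, and R matrices are illustrative placeholders, not the paper's actual rotary-pendulum model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized dynamics x_dot = A x + B u (not the paper's model).
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -2.3, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 28.0, 0.0]])
B = np.array([[0.0], [1.4], [0.0], [-2.8]])

# State and input weights chosen arbitrarily for the sketch.
Q = np.diag([10.0, 1.0, 100.0, 1.0])
R = np.array([[1.0]])

# Solve A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain K =", K)

# Closed-loop poles should all have negative real parts.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```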
Computing and Implementation of a Controlled Telepresence Robot
2
Authors: Ali A. Altalbe, Aamir Shahzad, Muhammad Nasir Khan. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 1569-1585 (17 pages)
Human-robot interaction has been developing continuously for the last decades. Through this development, it has become simpler and safer to interact in insecure and hazardous environments using a remotely controlled telepresence robot. The stability of the audio-video communication connection and data transmission is already well handled by fast-growing technologies such as 5G and 6G. However, the design of the physical parameters, e.g., maneuverability, controllability, and stability, still needs attention. Therefore, this paper presents a systematic, controlled design and implementation of a telepresence mobile robot. The primary focus is the computational analysis and experimental implementation of a sophisticated position control that autonomously adjusts the robot's position and speed when approaching an obstacle. A system model and a position controller are developed using root locus design. The design is verified experimentally, showing that the robot reaches and holds the desired position. The robot was tested under various scenarios: driving straight ahead, turning right, self-localization, and following a complex path. The results show that the proposed approach is flexible, adaptable, and offers a better alternative, and the experiments show that it significantly reduces obstacle hits.
Keywords: computing; telepresence; healthcare system; position controller; mobile robot
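To make the kind of closed-loop position control described above concrete (autonomously adjusting position and speed when approaching an obstacle), here is a minimal discrete-time sketch of a PD position controller with a simple obstacle-distance stop rule. The plant model, gains, and thresholds are assumptions for illustration, not the paper's design.

```python
import numpy as np

def simulate_position_control(target=5.0, obstacle_at=4.0, dt=0.01, steps=2000):
    """PD position control of a unit-mass cart that stops short of an obstacle."""
    kp, kd = 4.0, 3.0          # assumed controller gains
    safe_gap = 0.5             # keep at least this distance from the obstacle
    x, v = 0.0, 0.0            # position and velocity
    for _ in range(steps):
        # If the obstacle sits before the target, aim for a point short of it.
        goal = min(target, obstacle_at - safe_gap)
        u = kp * (goal - x) - kd * v          # PD control force
        v += u * dt                            # unit-mass double integrator
        x += v * dt
    return x, v

final_x, final_v = simulate_position_control()
print(f"settled at x = {final_x:.3f} m with v = {final_v:.4f} m/s")
```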
A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
3
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su'ud. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2967-3000 (34 pages)
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning; deep learning; unmanned aerial vehicles; multi-spectral images; image recognition; object detection; hyperspectral images; aerial photography
A Deep Learning Approach for Landmines Detection Based on Airborne Magnetometry Imaging and Edge Computing
4
Authors: Ahmed Barnawi, Krishan Kumar, Neeraj Kumar, Bander Alzahrani, Amal Almansour. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 5, pp. 2117-2137 (21 pages)
Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle) based Airborne Magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
Keywords: CNN; deep learning; landmine detection; magnetometer; mean average precision; UAV
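Since this abstract reports performance as mean average precision, the short sketch below shows one common way that metric can be computed per class and then averaged, using scikit-learn. The scores and labels here are synthetic and purely illustrative, not the paper's magnetometry data.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Synthetic detector scores for 3 classes over 200 image patches (illustrative only).
n_samples, n_classes = 200, 3
y_true = rng.integers(0, 2, size=(n_samples, n_classes))      # per-class ground truth
y_score = np.clip(y_true * 0.6 + rng.random((n_samples, n_classes)) * 0.6, 0, 1)

# Average precision per class, then the mean over classes (mAP).
ap_per_class = [average_precision_score(y_true[:, c], y_score[:, c])
                for c in range(n_classes)]
print("AP per class:", [f"{ap:.3f}" for ap in ap_per_class])
print("mAP:", f"{np.mean(ap_per_class):.3f}")
```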
ThyroidNet:A Deep Learning Network for Localization and Classification of Thyroid Nodules
5
Authors: Lu Chen, Huaqiang Chen, Zhikai Pan, Sheng Xu, Guangsheng Lai, Shuwen Chen, Shuihua Wang, Xiaodong Gu, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 361-382 (22 pages)
Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks. It balances the learning of the localization and classification tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we present a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared to other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, achieving accuracy improvements of 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
Keywords: ThyroidNet; deep learning; TransUnet; multitask learning; medical image analysis
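The DualLoss described above balances a localization term against a classification term. The sketch below shows one plausible way such a weighted multitask loss could be composed in PyTorch (a soft-Dice segmentation term plus cross-entropy, with an assumed 0.5/0.5 weighting); it is an assumption for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class DualTaskLoss(nn.Module):
    """Weighted sum of a soft-Dice localization loss and a cross-entropy classification loss.
    The weighting and the Dice form are assumptions, not the paper's DualLoss."""
    def __init__(self, w_loc=0.5, w_cls=0.5, eps=1e-6):
        super().__init__()
        self.w_loc, self.w_cls, self.eps = w_loc, w_cls, eps
        self.ce = nn.CrossEntropyLoss()

    def forward(self, seg_logits, seg_target, cls_logits, cls_target):
        probs = torch.sigmoid(seg_logits)                       # (N, 1, H, W)
        inter = (probs * seg_target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + seg_target.sum(dim=(1, 2, 3))
        dice_loss = 1.0 - ((2 * inter + self.eps) / (union + self.eps)).mean()
        return self.w_loc * dice_loss + self.w_cls * self.ce(cls_logits, cls_target)

# Toy shapes: 2 images, 1-channel masks, 3 nodule classes.
loss_fn = DualTaskLoss()
loss = loss_fn(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float(),
               torch.randn(2, 3), torch.randint(0, 3, (2,)))
print(loss.item())
```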
A Robust Method of Bipolar Mental Illness Detection from Facial Micro Expressions Using Machine Learning Methods
6
Authors: Ghulam Gilanie, Sana Cheema, Akkasha Latif, Anum Saher, Muhammad Ahsan, Hafeez Ullah, Diya Oommen. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 57-71 (15 pages)
Bipolar disorder is a serious mental condition that may be caused by any kind of stress or emotional upset experienced by the patient. It affects a large percentage of people globally, who fluctuate between depression and mania, or vice versa. A pleasant or unpleasant mood is more than a reflection of a state of mind. Normally, it is a difficult task to analyze through physical examination due to a large patient-psychiatrist ratio, so automated procedures are the best options to diagnose and verify the severity of bipolar disorder. In this research work, facial micro-expressions have been used for bipolar detection using the proposed Convolutional Neural Network (CNN)-based model. The Facial Action Coding System (FACS) is used to extract micro-expressions called Action Units (AUs) connected with sad, happy, and angry emotions. Experiments have been conducted on a dataset collected from Bahawal Victoria Hospital, Bahawalpur, Pakistan, using the Patient Health Questionnaire-15 (PHQ-15) to infer a patient's mental state. The experimental results showed a validation accuracy of 98.99% for the proposed CNN model, while classification through extracted features using Support Vector Machines (SVM), K-Nearest Neighbour (KNN), and Decision Tree (DT) obtained 99.9%, 98.7%, and 98.9% accuracy, respectively. Overall, the outcomes demonstrated the stated method's superiority over current best practices.
Keywords: bipolar mental illness detection; facial micro-expressions; facial landmarked images
A Lightweight Deep Autoencoder Scheme for Cyberattack Detection in the Internet of Things (Cited by 1)
7
Authors: Maha Sabir, Jawad Ahmad, Daniyal Alghazzawi. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 57-72 (16 pages)
The Internet of Things (IoT) is an emerging paradigm that integrates devices and services to collect real-time data from the surroundings and process the information at very high speed to make decisions. Despite several advantages, the resource-constrained and heterogeneous nature of IoT networks makes them a favorite target for cybercriminals. A single successful network intrusion can compromise the complete IoT network, which can lead to unauthorized access to the valuable information of consumers and industries. To overcome the security challenges of IoT networks, this article proposes a lightweight deep autoencoder (DAE) based cyberattack detection framework. The proposed approach learns the normal and anomalous data patterns to identify the various types of network intrusions. The most significant feature of the proposed technique is its lower complexity, which is attained by reducing the number of operations. To optimally train the proposed DAE, a range of hyperparameters was determined through extensive experiments to ensure higher attack detection accuracy. The efficacy of the suggested framework is evaluated on two standard, open-source datasets. The proposed DAE achieved accuracies of 98.86% and 98.26% for NSL-KDD and 99.32% and 98.79% for the UNSW-NB15 dataset, in the binary-class and multi-class scenarios, respectively. The performance of the suggested attack detection framework is also compared with several state-of-the-art intrusion detection schemes. The experimental outcomes prove the promising performance of the proposed scheme for cyberattack detection in IoT networks.
Keywords: autoencoder; cybersecurity; deep learning; intrusion detection; IoT
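The detection principle this abstract describes (learning normal traffic patterns and flagging inputs that reconstruct poorly) can be sketched roughly with a small PyTorch autoencoder and a reconstruction-error threshold, as below. The architecture, feature dimension, and threshold rule are assumptions, not the paper's tuned model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_features = 40                       # assumed flow-feature dimension

# Small, shallow autoencoder to keep the operation count low.
model = nn.Sequential(
    nn.Linear(n_features, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, n_features),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on (synthetic) normal traffic only.
normal = torch.randn(2048, n_features)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Threshold = mean + 3*std of reconstruction error on normal data (assumed rule).
with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()
    suspicious = torch.randn(8, n_features) * 4          # synthetic anomalous flows
    flags = ((model(suspicious) - suspicious) ** 2).mean(dim=1) > threshold
print("flagged as attacks:", flags.tolist())
```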
Impact of Coronavirus Pandemic Crisis on Technologies and Cloud Computing Applications
8
Authors: Ziyad R. Alashhab, Mohammed Anbar, Manmeet Mahinderjit Singh, Yu-Beng Leau, Zaher Ali Al-Sai, Sami Abu Alhayja'a. Journal of Electronic Science and Technology (CAS, CSCD), 2021, Issue 1, pp. 25-40 (16 pages)
In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to efficiently accomplish their tasks. The cloud computing environment (CCE) is an unsung hero in the COVID-19 pandemic crisis. It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data. Despite the increase in the use of CC applications, there is an ongoing research challenge in the domains of CCE concerning data, guaranteeing security, and the availability of CC applications. This paper, to the best of our knowledge, is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE. Additionally, this paper also highlights the security risks of working from home during the COVID-19 pandemic.
Keywords: big data; privacy; cloud computing (CC) applications; COVID-19; digital transformation; security challenge; work from home
A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (Cited by 1)
9
Authors: Seema Rani, Anju, Anupma Sangwan, Krishna Kumar, Kashif Nisar, Tariq Rahim Soomro, Ag. Asri Ag. Ibrahim, Manoj Gupta, Laxmi Chand, Sadiq Ali Khan. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5697-5715 (19 pages)
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). The focus of research in this area is now on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It is also needed for tagging data, since sensed content is of little use to an application until the position at which it was sensed is confirmed. This article's major goal is to review and analyze underwater node localization to address the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes. A detailed subdivision of these localization schemes is also given. Further, these localization schemes are compared from different perspectives, and a detailed analysis of these schemes in terms of certain performance metrics is discussed. At the end, the paper addresses several future directions for potential research on improving localization in UWSNs.
Keywords: underwater wireless sensor networks; localization schemes; node localization; ranging algorithms; estimation based; prediction based
Soft Computing Based Metaheuristic Algorithms for Resource Management in Edge Computing Environment
10
Authors: Nawaf Alhebaishi, Abdulrhman M. Alshareef, Tawfiq Hasanin, Raed Alsini, Gyanendra Prasad Joshi, Seongsoo Cho, Doo Ill Chul. Computers, Materials & Continua (SCIE, EI), 2022, Issue 9, pp. 5233-5250 (18 pages)
In recent times, Internet of Things (IoT) applications on the cloud might not be an effective solution for every IoT scenario, particularly for time-sensitive applications. A significant alternative is edge computing, which resolves the problem of end devices requiring high bandwidth. Edge computing is a method of moving processing and communication resources from the cloud towards the edge. One consideration in the edge computing environment is resource management, which involves resource scheduling, load balancing, task scheduling, and quality of service (QoS) to achieve improved performance. With this motivation, this paper presents new soft computing based metaheuristic algorithms for resource scheduling (RS) in the edge computing environment. The SCBMA-RS model involves the hybridization of the Group Teaching Optimization Algorithm (GTOA) with the Rat Swarm Optimizer (RSO) algorithm for optimal resource allocation. The goal of the SCBMA-RS model is to identify and allocate resources to every incoming user request in such a way that the client's requirements are satisfied with the minimum possible number of resources and optimal energy consumption. The problem is formulated based on the availability of VMs, task characteristics, and queue dynamics. The integration of the GTOA and RSO algorithms helps improve the allocation of resources among VMs in the data center. For experimental validation, a comprehensive set of simulations was performed using the CloudSim tool. The experimental results showed the superior performance of the SCBMA-RS model in terms of different measures.
Keywords: resource scheduling; edge computing; soft computing; fitness function; virtual machines
Traffic Management in Internet of Vehicles Using Improved Ant Colony Optimization (Cited by 1)
11
Authors: Abida Sharif, Imran Sharif, Muhammad Asim Saleem, Muhammad Attique Khan, Majed Alhaisoni, Marriam Nawaz, Abdullah Alqahtani, Ye Jin Kim, Byoungchol Chang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5379-5393 (15 pages)
The Internet of Vehicles (IoV) is a networking paradigm related to the intercommunication of vehicles using a network. In a dynamic network, one of the key challenges in IoV is traffic management under an increasing number of vehicles to avoid congestion. Therefore, optimal path selection to route traffic between the origin and destination is vital. This research proposes a realistic strategy to reduce the traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. Firstly, this work proposes a novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path planning optimization problem as an Integer Linear Program (ILP). This integrates a future-estimation metric to predict the future arrivals of vehicles while searching for optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion-level estimation along with the ACO to determine the optimal path. The model results indicate that the suggested scheme outperforms existing state-of-the-art methods by identifying the shortest and most cost-effective path. Thus, this work strongly supports its use in applications having stringent Quality of Service (QoS) requirements for vehicles.
Keywords: Internet of Vehicles; Internet of Things; fuzzy logic; optimization; path planning
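To make the routing idea concrete, here is a compact, generic ant colony optimization sketch for finding a low-cost path on a small weighted road graph. The graph, pheromone parameters, and scoring are illustrative assumptions and do not include the paper's ILP formulation or fuzzy congestion estimation.

```python
import random

random.seed(1)

# Toy road network: adjacency with travel costs (illustrative only).
graph = {
    "A": {"B": 2.0, "C": 4.0},
    "B": {"C": 1.0, "D": 7.0},
    "C": {"D": 3.0},
    "D": {},
}
pheromone = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 50

def build_path(src, dst):
    """One ant walks from src toward dst, choosing edges by pheromone and 1/cost."""
    path, node = [src], src
    while node != dst:
        options = [v for v in graph[node] if v not in path]
        if not options:
            return None, float("inf")
        weights = [pheromone[node][v] ** alpha * (1.0 / graph[node][v]) ** beta
                   for v in options]
        node = random.choices(options, weights=weights)[0]
        path.append(node)
    cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(n_iters):
    tours = [build_path("A", "D") for _ in range(n_ants)]
    # Evaporate, then deposit pheromone proportional to path quality.
    for u in pheromone:
        for v in pheromone[u]:
            pheromone[u][v] *= (1.0 - rho)
    for path, cost in tours:
        if path is None:
            continue
        for a, b in zip(path, path[1:]):
            pheromone[a][b] += 1.0 / cost
        if cost < best_cost:
            best_path, best_cost = path, cost

print("best route:", best_path, "cost:", best_cost)
```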
Semantic Document Layout Analysis of Handwritten Manuscripts
12
Author: Emad Sami Jaha. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2805-2831 (27 pages)
A document layout can be more informative than merely a document's visual and structural appearance. Thus, document layout analysis (DLA) is considered a necessary prerequisite for advanced processing and detailed document image analysis to be further used in several applications and for different objectives. This research extends the traditional approaches of DLA and introduces the concept of semantic document layout analysis (SDLA) by proposing a novel framework for semantic layout analysis and characterization of handwritten manuscripts. The proposed SDLA approach enables the derivation of implicit information and semantic characteristics, which can be effectively utilized in dozens of practical applications for various purposes, in a way bridging the semantic gap and providing more understandable high-level document image analysis and more invariant characterization via absolute and relative labeling. This approach is validated and evaluated on a large dataset of Arabic handwritten manuscripts comprising complex layouts. The experimental work shows promising results in terms of accurate and effective semantic characteristic-based clustering and retrieval of handwritten manuscripts. It also indicates the expected efficacy of using the capabilities of the proposed approach in automating and facilitating many functional, real-life tasks such as effort estimation and pricing of transcription or typing of such complex manuscripts.
Keywords: semantic characteristics; semantic labeling; document layout analysis; semantic document layout analysis; handwritten manuscripts; clustering; retrieval; image processing; computer vision; machine learning
Fusion of Hash-Based Hard and Soft Biometrics for Enhancing Face Image Database Search and Retrieval
13
Authors: Ameerah Abdullah Alshahrani, Emad Sami Jaha, Nahed Alowidi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 3489-3509 (21 pages)
The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to the continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant need for reliable retrieval capabilities enabling quick search across a vast number of pictures. Humans widely employ faces for recognizing and identifying people. Thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires ample additional storage space beyond that already occupied by the raw face images to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as Hard BioHash and Soft BioHash respectively, to be used as a search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. The experimental outcomes, obtained on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes), demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved retrieval performance compared to solely using Hard BioHash or Soft BioHash in isolation, where the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the Hard BioHash method (by 50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (by 78% on the 1000 samples and 63% on the 5743 samples).
Keywords: face image retrieval; soft biometrics; similar pictures; hashing; database search; large databases; score-level fusion; multimodal fusion
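A rough illustration of the core mechanism this abstract relies on, hashing feature vectors into compact binary codes, searching by Hamming distance, and fusing hard and soft scores at the score level, is sketched below. The random-projection hash, code length, and fusion weights are assumptions, not the paper's BioHash construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n_gallery, dim_hard, dim_soft, n_bits = 1000, 128, 40, 64

# Synthetic "hard" (face descriptor) and "soft" (attribute) feature galleries.
hard_feats = rng.standard_normal((n_gallery, dim_hard))
soft_feats = rng.standard_normal((n_gallery, dim_soft))

# Random-projection LSH: sign of projections onto random hyperplanes -> binary code.
planes_hard = rng.standard_normal((dim_hard, n_bits))
planes_soft = rng.standard_normal((dim_soft, n_bits))
hash_hard = (hard_feats @ planes_hard > 0).astype(np.uint8)
hash_soft = (soft_feats @ planes_soft > 0).astype(np.uint8)

def hamming_scores(query_code, gallery_codes):
    """Similarity in [0, 1]: fraction of matching bits."""
    return 1.0 - np.count_nonzero(gallery_codes != query_code, axis=1) / n_bits

# Query = a noisy copy of gallery item 7; fuse hard and soft scores with assumed weights.
target = 7
q_hard = (hard_feats[target] + 0.1 * rng.standard_normal(dim_hard)) @ planes_hard > 0
q_soft = (soft_feats[target] + 0.1 * rng.standard_normal(dim_soft)) @ planes_soft > 0
fused = 0.7 * hamming_scores(q_hard, hash_hard) + 0.3 * hamming_scores(q_soft, hash_soft)

top_k = np.argsort(-fused)[:5]
print("top-5 matches:", top_k.tolist())   # index 7 should rank first
```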
Optimized Identification with Severity Factors of Gastric Cancer for Internet of Medical Things
14
Authors: Kamalrulnizam Bin Abu Bakar, Fatima Tul Zuhra, Babangida Isyaku, Fuad A. Ghaleb. Computers, Materials & Continua (SCIE, EI), 2023, Issue 4, pp. 785-798 (14 pages)
The Internet of Medical Things (IoMT) emerges with the vision of the Wireless Body Sensor Network (WBSN) to improve health monitoring systems, and it has an enormous impact on the healthcare system for recognizing levels of risk/severity factors (premature diagnosis, treatment, and supervision of chronic disease, i.e., cancer) via wearable/electronic health sensors, i.e., the wireless endoscopic capsule. AI-assisted endoscopy plays a very significant role in the detection of gastric cancer. Convolutional Neural Networks (CNNs) have been widely used to diagnose gastric cancer based on various feature extraction models, consequently limiting the identification and categorization performance in terms of the cancerous stages and grades associated with each type of gastric cancer. This paper proposes an optimized AI-based approach to diagnose and assess the risk factor of gastric cancer based on its type, stage, and grade in endoscopic images for smart healthcare applications. The proposed method is organized into five phases: image pre-processing, four-dimensional (4D) image conversion, image segmentation, K-Nearest Neighbour (K-NN) classification, and multi-grading and staging of image intensities. Moreover, the performance of the proposed method has been evaluated on two different datasets consisting of color and black-and-white endoscopic images. The simulation results verified that the proposed approach is capable of perceiving gastric cancer with 88.09% sensitivity, 95.77% specificity, and 96.55% overall accuracy.
Keywords: artificial intelligence; Internet of Things; Internet of Medical Things; wireless body sensor network; wireless endoscopic capsule; gastric cancer
A Parallel Hybrid Testing Technique for Tri-Programming Model-Based Software Systems
15
Authors: Huda Basloom, Mohamed Dahab, Abdullah Saad AL-Ghamdi, Fathy Eassa, Ahmed Mohammed Alghamdi, Seif Haridi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4501-4530 (30 pages)
Recently, researchers have shown increasing interest in combining more than one programming model into systems running on high performance computing (HPC) systems to achieve exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models, MPI, OpenMP, and OpenACC, a testing technology to detect runtime errors, such as deadlocks and race conditions, which can arise from this integration, has not been developed. Therefore, this paper begins with a definition and explanation of runtime errors that result from integrating the three programming models and that compilers cannot detect. For the first time, this paper presents a classification of operational errors that can result from the integration of the three models. This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. This hybrid technology combines static and dynamic techniques, given that some errors can be detected using static techniques, whereas others can be detected using dynamic techniques. The hybrid technique can detect more errors because it combines two distinct technologies. The proposed static technique detects a wide range of error types in less time, whereas the portion of potential errors that may or may not occur depending on the operating environment is left to the dynamic technique, which completes the validation.
Keywords: software testing; hybrid testing technique; OpenACC; OpenMP; MPI; tri-programming model; exascale computing
Fine-Grained Soft Ear Biometrics for Augmenting Human Recognition
16
Authors: Ghoroub Talal Bostaji, Emad Sami Jaha. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1571-1591 (21 pages)
Human recognition technology based on biometrics has become a fundamental requirement in all aspects of life due to increased concerns about security and privacy issues. Therefore, biometric systems have emerged as a technology with the capability to identify or authenticate individuals based on their physiological and behavioral characteristics. Among the different viable biometric modalities, the human ear structure can offer unique and valuable discriminative characteristics for human recognition systems. In recent years, most existing traditional ear recognition systems have been designed based on computer vision models and have achieved successful results. Nevertheless, such traditional models can be sensitive to several unconstrained environmental factors. As such, some traits may be difficult to extract automatically but can still be semantically perceived as soft biometrics. This research proposes a new group of semantic features to be used as soft ear biometrics, mainly inspired by the conventional descriptive traits used naturally by humans when identifying or describing each other. Hence, the study focuses on the fusion of soft ear biometric traits with traditional (hard) ear biometric features to investigate their validity and efficacy in augmenting human identification performance. The proposed framework has two subsystems: first, a computer vision-based subsystem extracting traditional (hard) ear biometric traits using principal component analysis (PCA) and local binary patterns (LBP), and second, a crowdsourcing-based subsystem deriving semantic (soft) ear biometric traits. Several feature-level fusion experiments were conducted using the AMI database to evaluate the proposed algorithm's performance. The results obtained for both identification and verification showed that the proposed soft ear biometric information significantly improved the recognition performance of traditional ear biometrics, reaching improvements of up to 12% for LBP and 5% for PCA descriptors when fusing all three capacities, PCA, LBP, and soft traits, using a k-nearest neighbors (KNN) classifier.
Keywords: ear biometrics; soft biometrics; human ear recognition; semantic features; feature-level fusion; computer vision; machine learning
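The fusion strategy described here, concatenating hard descriptors (PCA, LBP) with soft semantic traits at the feature level and classifying with KNN, can be sketched roughly as follows using scikit-learn and scikit-image. The synthetic images, soft-trait vectors, and parameter choices are placeholders rather than the paper's AMI-database setup.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, imgs_per_subject, h, w = 5, 8, 32, 32

# Synthetic ear images: each subject gets a distinct base pattern plus noise.
bases = rng.random((n_subjects, h, w))
images = np.array([bases[s] + 0.1 * rng.random((h, w))
                   for s in range(n_subjects) for _ in range(imgs_per_subject)])
labels = np.repeat(np.arange(n_subjects), imgs_per_subject)

# Hard traits: PCA of raw pixels and an LBP histogram per image.
pca_feats = PCA(n_components=10).fit_transform(images.reshape(len(images), -1))

def lbp_hist(img, p=8, r=1.0):
    img8 = (np.clip(img, 0, 1) * 255).astype(np.uint8)      # LBP expects integer grayscale
    codes = local_binary_pattern(img8, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

lbp_feats = np.array([lbp_hist(img) for img in images])

# Soft traits: assumed low-dimensional semantic scores (e.g., crowd-sourced ratings).
soft_feats = bases.mean(axis=(1, 2))[labels, None] + 0.01 * rng.random((len(images), 1))

# Feature-level fusion = concatenation, then KNN identification.
fused = np.hstack([pca_feats, lbp_feats, soft_feats])
train = np.arange(len(images)) % imgs_per_subject != 0       # hold out one image per subject
knn = KNeighborsClassifier(n_neighbors=3).fit(fused[train], labels[train])
print("identification accuracy:", knn.score(fused[~train], labels[~train]))
```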
Quantum Cat Swarm Optimization Based Clustering with Intrusion Detection Technique for Future Internet of Things Environment
17
Authors: Mohammed Basheri, Mahmoud Ragab. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 3783-3798 (16 pages)
The Internet of Things (IoT) is one of the emergent technologies with advanced developments in several applications, such as creating smart environments and enabling Industry 4.0. As IoT devices operate via an inbuilt and limited power supply, the effective utilization of available energy plays a vital role in designing the IoT environment. At the same time, the communication of IoT devices over wireless media makes security a challenging issue. Recently, intrusion detection systems (IDS) have paved the way to detect the presence of intrusions in the IoT environment. With this motivation, this article introduces a novel Quantum Cat Swarm Optimization based Clustering with Intrusion Detection Technique (QCSOBC-IDT) for the IoT environment. The QCSOBC-IDT model aims to achieve energy efficiency by clustering the nodes and security by intrusion detection. Primarily, the QCSOBC-IDT model presents a new QCSO algorithm for effectively choosing cluster heads (CHs) and organizing a set of clusters in the IoT environment. Besides, the QCSO algorithm computes a fitness function involving four parameters, namely energy efficiency, inter-cluster distance, intra-cluster distance, and node density. A harmony search algorithm (HSA) with a cascaded recurrent neural network (CRNN) model is used for an effective intrusion detection process. The design of the HSA assists in the optimal selection of hyperparameters related to the CRNN model. A detailed experimental analysis of the QCSOBC-IDT model confirmed its promising efficiency compared to existing models.
Keywords: Internet of Things; energy efficiency; clustering; intrusion detection; deep learning; security
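The abstract states that cluster-head selection is driven by a fitness function over four quantities (energy efficiency, inter-cluster distance, intra-cluster distance, and node density). A minimal sketch of how such a weighted fitness could score candidate cluster heads is given below; the equal weights and toy node layout are assumptions, not the paper's formulation, and a metaheuristic such as QCSO would search this space rather than random sampling.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 60
positions = rng.random((n_nodes, 2)) * 100.0     # toy 100 m x 100 m field
energy = rng.random(n_nodes)                     # residual energy, normalized to [0, 1]

def fitness(ch_indices, w=(0.25, 0.25, 0.25, 0.25)):
    """Score a candidate set of cluster heads (higher is better).
    Terms: residual energy, inter-CH separation, compact clusters, balanced density."""
    chs = positions[ch_indices]
    # Assign every node to its nearest cluster head.
    d = np.linalg.norm(positions[:, None, :] - chs[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    f_energy = energy[ch_indices].mean()
    f_inter = np.min([np.linalg.norm(a - b) for i, a in enumerate(chs)
                      for b in chs[i + 1:]]) / 100.0
    f_intra = 1.0 - d[np.arange(n_nodes), assign].mean() / 100.0
    f_density = 1.0 - np.std(np.bincount(assign, minlength=len(ch_indices))) / n_nodes
    return w[0] * f_energy + w[1] * f_inter + w[2] * f_intra + w[3] * f_density

# Compare a few random cluster-head candidates (stand-in for the metaheuristic search).
candidates = [rng.choice(n_nodes, size=5, replace=False) for _ in range(20)]
best = max(candidates, key=fitness)
print("best candidate CHs:", sorted(best.tolist()), "fitness:", round(fitness(best), 4))
```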
A Survey on the Role of Complex Networks in IoT and Brain Communication
18
Authors: Vijey Thayananthan, Aiiad Albeshri, Hassan A. Alamri, Muhammad Bilal Qureshi, Muhammad Shuaib Qureshi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 2573-2595 (23 pages)
Complex networks in Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable to future research directions in 6G, photonic, IoT, brain, and other communication technologies. Heavy data traffic, huge capacity, and a minimal level of dynamic latency are some of the future requirements of 5G+ and 6G communication systems. In emerging communication, technologies such as 5G+/6G-based photonic sensor communication and complex networks play an important role in meeting the future requirements of IoT and brain communication. In this paper, the state of the complex system considered as a complex network (the connections between brain cells, neurons, etc.) needs measurement for analyzing the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding the observability, influenced by the concept of contraction, provides the stability of neurons. When IoT or other sensors fail to measure the state of the connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication will be lost. Similarly, neurons, treated as neuron sensors in the brain under the complex networks concept, lose communication and connections. Therefore, affected sensor nodes in a contraction are, equivalently, compensated for to maintain the stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor for employing a complex network. To analyze the observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey shows that the contraction size will allow us to improve the performance of brain communication, the stability of neurons, etc., through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
Keywords: complex networks; emerging communication; IoT based on 6G systems; neuroscience; photonic technology
Managing Smart Technologies with Software-Defined Networks for Routing and Security Challenges: A Survey
19
Authors: Babangida Isyaku, Kamalrulnizam Bin Abu Bakar. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 1839-1879 (41 pages)
Smart environments offer various services, including smart cities, e-healthcare, transportation, and wearable devices, generating multiple traffic flows with different Quality of Service (QoS) demands. Achieving the desired QoS with security in this heterogeneous environment can be challenging due to traffic flow and device management, unoptimized routing with resource awareness, and security threats. Software Defined Networks (SDN) can help manage these devices through centralized SDN controllers and address these challenges. Various schemes have been proposed to integrate SDN with emerging technologies for better resource utilization and security. Software Defined Wireless Body Area Networks (SDWBAN) and Software Defined Internet of Things (SDIoT) are recently introduced frameworks to overcome these challenges. This study surveys the existing SDWBAN and SDIoT routing and security challenges. The paper discusses each solution in detail and analyses its weaknesses. It covers SDWBAN frameworks for efficient management of WBAN networks, management of IoT devices, and proposed security mechanisms for IoT and data security in WBAN. The survey provides insights into the state of the art in SDWBAN and SDIoT routing with resource awareness and security threats. Finally, this study highlights potential areas for future research.
Keywords: SDN; WBAN; IoT; routing; security
Influences of double diffusion upon radiative flow of thin film Maxwell fluid through a stretching channel
20
Authors: Arshad Khan, Ishtiaq Ali, Musawa Yahya Almusawa, Taza Gul, Wajdi Alghamdi. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, Issue 8, pp. 327-335 (9 pages)
This work explores the influence of double diffusion on the thermally radiative flow of a thin-film hybrid nanofluid and the associated irreversibility generation through a stretching channel. Nanoparticles of silver and alumina are mixed in the Maxwell fluid (base fluid). A magnetic field is applied to the channel in the normal direction. The equations governing the fluid flow have been converted to dimensionless form by using appropriate variables. The homotopy analysis method is used to solve the resulting equations. The investigation shows that the fluid motion declines with growth in the magnetic effects, thin-film thickness, and unsteadiness factor. The fluid temperature rises with an upsurge in Brownian motion, the radiation factor, and thermophoresis effects, while it declines for greater values of the thermal Maxwell factor and the thickness factor of the thin film. The concentration distribution rises with higher values of the thermophoresis effects and declines with augmentation of Brownian motion.
Keywords: Maxwell fluid flow; magnetohydrodynamic (MHD); hybrid nanofluid flow; stretching channel; double diffusion; entropy generation; HAM technique