Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle) based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
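The anomaly-spotting step that precedes classification can be sketched in a few lines. This is not the MAGICS pipeline; the grid values, the robust-deviation rule, and the threshold `k` are all invented for illustration — the idea is simply that a buried ferromagnetic object perturbs the otherwise smooth local field.

```python
# Illustrative sketch (not the MAGICS pipeline): flag candidate cells in a
# magnetometry grid whose field strength deviates strongly from the median
# background. The grid, units, and threshold k are invented.

def detect_anomalies(grid, k=5.0):
    """Return (row, col) cells deviating from the median by more than
    k times the median absolute deviation (MAD)."""
    flat = sorted(v for row in grid for v in row)
    n = len(flat)
    median = flat[n // 2]
    mad = sorted(abs(v - median) for v in flat)[n // 2] or 1e-9
    return [(r, c)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if abs(v - median) > k * mad]

# Synthetic 4x4 field (nT): near-uniform background, one dipole-like spike.
field = [[50.1, 50.0, 49.9, 50.2],
         [50.0, 50.3, 49.8, 50.1],
         [49.9, 50.0, 58.7, 50.0],   # strong anomaly at (2, 2)
         [50.2, 50.1, 50.0, 49.9]]

print(detect_anomalies(field))  # -> [(2, 2)]
```

A real system would feed such flagged regions (or the full heat map) into the trained classifier rather than thresholding alone.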
Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks. It balances the learning of the localization and classification tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we present a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared to other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, improving accuracy by 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
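The abstract does not give DualLoss's exact form, so the following is only a hedged sketch of the balancing idea it describes: a weighted combination of the localization and classification losses, where the weight `alpha` (an assumption, not from the paper) trades one task against the other.

```python
# Hedged sketch: DualLoss's exact definition is not given in the abstract.
# This weighted sum only illustrates balancing a localization loss against
# a classification loss; alpha is an invented hyperparameter.

def dual_loss(loc_loss, cls_loss, alpha=0.5):
    """Combine localization and classification losses; alpha in [0, 1]
    shifts emphasis between the two tasks."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * loc_loss + (1.0 - alpha) * cls_loss

# Weighted toward the classification term:
print(dual_loss(0.8, 0.2, alpha=0.25))
```

In multitask training such a combined scalar is what the optimizer actually minimizes, so the choice of weighting directly shapes which task the shared encoder favors.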
The concept of smart houses has grown in prominence in recent years. Major challenges linked to smart homes are identity theft, data safety, automated decision-making for IoT-based devices, and the security of the device itself. Current home automation systems try to address these issues, but there is still an urgent need for a dependable and secure smart home solution that includes automatic decision-making systems and methodical features. This paper proposes a smart home system based on ensemble learning of random forest (RF) and convolutional neural networks (CNN) for programmed decision-making tasks, such as categorizing gadgets as "OFF" or "ON" based on their normal routine in homes. We have integrated emerging blockchain technology to provide secure, decentralized, and trustworthy authentication and recognition of IoT devices. Our system consists of a 5 V relay circuit, various sensors, and a Raspberry Pi server and database for managing devices. We have also developed an Android app that communicates with the server interface through an HTTP web interface and an Apache server. The feasibility and efficacy of the proposed smart home automation system have been evaluated in both laboratory and real-time settings. It is essential to use inexpensive, scalable, and readily available components and technologies in smart home automation systems. Additionally, we must incorporate a comprehensive security- and privacy-centric design that emphasizes risk assessments, such as cyberattacks, hardware security, and other cyber threats. The trial results support the proposed system and demonstrate its potential for use in everyday life.
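The RF+CNN ensemble decision can be illustrated with a soft-voting stub. This is a sketch under stated assumptions, not the paper's implementation: the two probabilities stand in for the outputs of a trained random forest and a trained CNN, and the 0.5 threshold is invented.

```python
# Illustrative sketch (inputs and threshold are assumptions): fuse an
# RF-style P(ON) and a CNN-style P(ON) by soft voting to decide a
# device's state, as in the ensemble described above.

def ensemble_decision(rf_prob_on, cnn_prob_on, threshold=0.5):
    """Average the two classifiers' P(ON) and threshold the result."""
    avg = (rf_prob_on + cnn_prob_on) / 2.0
    return "ON" if avg >= threshold else "OFF"

print(ensemble_decision(0.9, 0.7))  # -> ON
print(ensemble_decision(0.2, 0.4))  # -> OFF
```

Soft voting lets a confident model outvote an uncertain one, which is one common reason to ensemble heterogeneous classifiers.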
The development of human-robot interaction has been continuously increasing for the last decades. Through this development, interactions using a remotely controlled telepresence robot in insecure and hazardous environments have become simpler and safer. The stability of the audio-video communication connection and data transmission has already been well handled by fast-growing technologies such as 5G and 6G. However, the design of the physical parameters, e.g., maneuverability, controllability, and stability, still needs attention. Therefore, this paper aims to present a systematic, controlled design and implementation of a telepresence mobile robot. The primary focus of this paper is to perform the computational analysis and experimental implementation design with sophisticated position control, which autonomously controls the robot's position and speed when reaching an obstacle. A system model and a position controller design are developed with root locus points. The designed robot is verified experimentally, showing agreement with and control of the desired position. The robot was tested by considering various parameters: driving straight ahead, right turns, self-localization, and complex paths. The results prove that the proposed approach is flexible and adaptable and gives a better alternative. The experimental results show that the proposed method significantly minimizes obstacle hits.
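The combined position/speed behavior near an obstacle can be sketched with a discrete-time proportional controller. All constants here (gain `kp`, safety gap, the 1-D geometry) are invented; the paper's actual controller is designed from root locus analysis, which this toy does not reproduce.

```python
# Minimal sketch (gains and geometry are invented): a discrete-time
# proportional position controller that slows the robot as it nears a
# target and halts at a safety gap before an obstacle.

def drive(start, target, obstacle, kp=0.5, safe_gap=0.2, steps=100):
    pos = start
    for _ in range(steps):
        speed = kp * (target - pos)          # proportional control law
        nxt = pos + speed
        if obstacle - nxt < safe_gap:        # stop short of the obstacle
            return obstacle - safe_gap
        pos = nxt
    return pos

# Target behind an obstacle: the robot halts at the safety gap instead.
print(round(drive(0.0, 5.0, obstacle=3.0), 2))  # -> 2.8
```

Because the commanded speed shrinks with the remaining distance, the robot decelerates smoothly near the goal — the qualitative behavior the abstract describes.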
A document layout can be more informative than merely a document's visual and structural appearance. Thus, document layout analysis (DLA) is considered a necessary prerequisite for advanced processing and detailed document image analysis to be further used in several applications and for different objectives. This research extends the traditional approaches of DLA and introduces the concept of semantic document layout analysis (SDLA) by proposing a novel framework for semantic layout analysis and characterization of handwritten manuscripts. The proposed SDLA approach enables the derivation of implicit information and semantic characteristics, which can be effectively utilized in dozens of practical applications for various purposes, in a way bridging the semantic gap and providing more understandable high-level document image analysis and more invariant characterization via absolute and relative labeling. This approach is validated and evaluated on a large dataset of Arabic handwritten manuscripts comprising complex layouts. The experimental work shows promising results in terms of accurate and effective semantic characteristic-based clustering and retrieval of handwritten manuscripts. It also indicates the expected efficacy of using the capabilities of the proposed approach in automating and facilitating many functional, real-life tasks such as effort estimation and pricing of transcription or typing of such complex manuscripts.
The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to the continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant necessity for reliable retrieval capabilities enabling quick search in a vast number of pictures. Humans widely employ faces for recognizing and identifying people. Thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires considerable additional storage space, beyond that already occupied by the raw face images, to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as Hard BioHash and Soft BioHash respectively, to be used as a search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. The experimental outcomes on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes) demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved the retrieval performance compared to using Hard BioHash or Soft BioHash in isolation: the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform those of the Hard BioHash method (by 50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (by 78% on the 1000 samples and 63% on the 5743 samples).
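The hash-then-rank retrieval idea can be sketched with random-hyperplane LSH. This is not the paper's BioHash construction: the feature dimension, number of bits, and gallery are all invented, and real face features would come from a recognition model rather than raw vectors.

```python
# Hedged sketch (not the paper's BioHash): random-hyperplane LSH maps
# real-valued face feature vectors to short binary codes; retrieval
# ranks the gallery by Hamming distance to the query's code.
import random

random.seed(7)
DIM, BITS = 8, 16
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def bio_hash(vec):
    """Sign of each hyperplane projection gives one bit of the code."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

def top_k(query, gallery, k=1):
    """Rank (name, vector) gallery entries by Hamming distance."""
    q = bio_hash(query)
    dist = lambda code: sum(a != b for a, b in zip(q, code))
    return sorted(gallery, key=lambda nv: dist(bio_hash(nv[1])))[:k]

gallery = [("alice", [1.0] * 8), ("bob", [-1.0] * 8)]
probe = [0.9] * 8                      # noisy version of alice's features
print(top_k(probe, gallery)[0][0])     # -> alice
```

Comparing 16-bit codes instead of long float vectors is what makes hash-based search cheap in both storage and matching time, which is exactly the large-scale-database pain point the abstract targets.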
Increasing renewable energy targets globally have raised the requirement for the efficient and profitable operation of solar photovoltaic (PV) systems. In light of this requirement, this paper provides a path for evaluating the operating condition and improving the power output of the PV system in a grid-integrated environment. To achieve this, different types of faults in grid-connected PV systems (GCPVs) and their impact on the energy loss associated with the electrical network are analyzed. A data-driven approach using neural networks (NNs) is proposed to achieve root cause analysis and localize the fault to the component level in the system. The localized fault condition is combined with a parallel operation of adaptive neuro-fuzzy inference units (ANFIUs) to develop a power mismatch-based control unit (PMCU) for improving the power output of the GCPV. To develop the proposed framework, a 10-kW single-phase GCPV is simulated for training the NN-based anomaly detection approach with 14 deviation signals. Further, the developed algorithm is combined with the PMCU implemented with the experimental setup of the GCPV. The results identified 98.2% training accuracy and a prediction speed of 43,000 observations/sec for the trained classifier, and improved power output with reduced voltage and current harmonics for the grid-connected PV operation.
The main idea behind the present research is to design a state-feedback controller for an underactuated nonlinear rotary inverted pendulum module by employing the linear quadratic regulator (LQR) technique using local approximation. The LQR is an excellent method for developing a controller for nonlinear systems. It provides optimal feedback to make the closed-loop system robust and stable, rejecting external disturbances. A model-based optimal controller for a nonlinear system such as a rotary inverted pendulum had not previously been designed and implemented using the Newton-Euler and Lagrange methods with local approximation. Therefore, applying LQR to an underactuated nonlinear system was vital to designing a stable controller. A mathematical model has been developed for the controller design by utilizing the Newton-Euler and Lagrange methods. The nonlinear model has been linearized around an equilibrium point. Linear and nonlinear models have been compared to find the range in which the behaviour of the linear and nonlinear models is similar. The MATLAB LQR function and the system dynamics have been used to estimate the controller parameters. For the performance evaluation of the designed controller, Simulink has been used. Linear and nonlinear models have been simulated along with the designed controller. Simulations have been performed for the designed controller over the linear and nonlinear systems under different conditions by varying system variables. The results show that the system is stable and robust enough to act against external disturbances. The controller maintains the rotary inverted pendulum in an upright position and rejects disruptions, like falling under gravitational force or any external disturbance, by adjusting the rotation of the horizontal link in both linear and nonlinear environments within a specific range. The controller has been practically designed and implemented. The results clearly show that the controller is robust enough to reject disturbances in milliseconds and keeps the pendulum arm deflection angle at zero degrees.
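What MATLAB's LQR routine computes can be shown on the smallest possible case. The pendulum model is multi-state; this scalar discrete-time system x[k+1] = a·x[k] + b·u[k] (all constants invented) only exposes the Riccati iteration and gain formula behind the black box.

```python
# Hedged sketch: a scalar discrete-time LQR. The system constants a, b
# and weights q, r are invented; the point is the Riccati fixed-point
# iteration that underlies lqr/dlqr-style solvers.

def scalar_dlqr(a, b, q, r, iters=500):
    """Iterate the discrete algebraic Riccati equation; return gain k
    for the feedback law u = -k * x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)   # optimal state-feedback gain

a, b = 1.2, 1.0              # open loop unstable: |a| > 1
k = scalar_dlqr(a, b, q=1.0, r=1.0)
print(abs(a - b * k) < 1.0)  # closed loop |a - b*k| < 1 -> stable: True
```

The same structure generalizes to matrices: P becomes a symmetric matrix, the division becomes a matrix inverse, and the resulting K stabilizes the linearized pendulum around its upright equilibrium.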
Rapid technological advancement has enabled modern healthcare systems to provide more sophisticated and real-time services on the Internet of Medical Things (IoMT). The existing cloud-based, centralized IoMT architectures are vulnerable to multiple security and privacy problems. The blockchain-enabled IoMT is an emerging paradigm that can ensure the security and trustworthiness of medical data sharing in IoMT networks. This article presents a private and easily expandable blockchain-based framework for the IoMT. The proposed framework contains several participants, including a private blockchain, hospital management systems, cloud service providers, doctors, and patients. Data security is ensured by incorporating an attribute-based encryption scheme. Furthermore, an IoT-friendly consensus algorithm is deployed to ensure fast block validation and high scalability in the IoMT network. The proposed framework can perform multiple healthcare-related services in a secure and trustworthy manner. The performance of blockchain read/write operations is evaluated in terms of transaction throughput and latency. Experimental outcomes indicate that the proposed scheme achieved an average throughput of 857 TPS and 151 TPS for read and write operations, respectively. The average latency is 61 ms and 16 ms for read and write operations, respectively.
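The tamper-evidence that makes a blockchain trustworthy for medical records comes from hash linking, which a few lines can demonstrate. This is an illustrative minimum, not the paper's framework: the record strings are invented and there is no consensus, encryption, or networking here.

```python
# Illustrative sketch (not the paper's framework): a minimal hash-linked
# chain of medical-record digests; tampering with one block invalidates
# every later link.
import hashlib, json

def make_block(record, prev_hash):
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    for i, blk in enumerate(chain):
        body = json.dumps({"record": blk["record"], "prev": blk["prev"]},
                          sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False                      # block contents altered
        if i and blk["prev"] != chain[i - 1]["hash"]:
            return False                      # link to predecessor broken
    return True

chain = [make_block("patient-42: blood test", "0" * 64)]
chain.append(make_block("patient-42: MRI scan", chain[-1]["hash"]))
print(valid(chain))                 # -> True
chain[0]["record"] = "tampered"
print(valid(chain))                 # -> False
```

Each block commits to its predecessor's hash, so rewriting an old record silently is impossible without recomputing — and getting consensus on — every subsequent block.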
Complex networks in the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable in the future research directions of 6G, photonic, IoT, brain, and other communication technologies. Heavy data traffic, huge capacity, and a minimal level of dynamic latency are some of the future requirements in 5G+ and 6G communication systems. In emerging communication technologies such as 5G+/6G-based photonic sensor communication, complex networks play an important role in improving the future requirements of IoT and brain communication. In this paper, the state of the complex system considered as a complex network (the connections between brain cells, neurons, etc.) needs measurement for analyzing the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding observability influenced by the concept of contraction provides the stability of neurons. When IoT devices or any sensors fail to measure the state of the connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication will be lost. Similarly, neurons, considered as neuron sensors in the brain under the complex-network concept, lose communication and connections. Therefore, affected sensor nodes in a contraction are compensated to maintain stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor for employing a complex network. To analyze the observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey paper shows that the contraction size will allow us to improve the performance of brain communication, the stability of neurons, etc., through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
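Since the survey ties contraction detection to the clustering coefficient, it is worth showing what that quantity measures. The toy adjacency list below is invented; the metric itself is the standard local clustering coefficient of an undirected graph.

```python
# Hedged sketch: the local clustering coefficient of one node in a toy
# undirected graph (adjacency data invented), the graph property the
# contraction detection algorithm is said to use.

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:]
                if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy "neuron" network: nodes 0-1-2 form a triangle, node 3 hangs off 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(clustering_coefficient(adj, 2))  # 1 of 3 neighbour pairs linked
```

A high coefficient means a node's neighbours are densely interconnected, so losing that node (a contraction) can be compensated by the surviving local links — the intuition behind using this metric for observability recovery.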
Smart environments offer various services, including smart cities, e-healthcare, transportation, and wearable devices, generating multiple traffic flows with different Quality of Service (QoS) demands. Achieving the desired QoS with security in this heterogeneous environment can be challenging due to traffic flow and device management, unoptimized routing with resource awareness, and security threats. Software Defined Networks (SDN) can help manage these devices through centralized SDN controllers and address these challenges. Various schemes have been proposed to integrate SDN with emerging technologies for better resource utilization and security. Software Defined Wireless Body Area Networks (SDWBAN) and Software Defined Internet of Things (SDIoT) are recently introduced frameworks to overcome these challenges. This study surveys the existing SDWBAN and SDIoT routing and security challenges. The paper discusses each solution in detail and analyses its weaknesses. It covers SDWBAN frameworks for efficient management of WBAN networks, management of IoT devices, and proposed security mechanisms for IoT and data security in WBAN. The survey provides insights into the state of the art in SDWBAN and SDIoT routing with resource awareness and security threats. Finally, this study highlights potential areas for future research.
The rapid growth of smart technologies and services has intensified the challenges surrounding identity authentication techniques. Biometric credentials are increasingly being used for verification due to their advantages over traditional methods, making it crucial to safeguard the privacy of people's biometric data in various scenarios. This paper offers an in-depth exploration of privacy-preserving techniques and potential threats to biometric systems. It proposes a novel and thorough taxonomy for privacy-preserving techniques, as well as a systematic framework for categorizing the field's existing literature. We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities, such as face, fingerprint, and eye detection. The survey encompasses various categories of privacy-preserving mechanisms and examines the trade-offs between security, privacy, and recognition performance, as well as open issues and future research directions. It aims to provide researchers, professionals, and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and serves as a foundation for the development of more secure and privacy-preserving biometric technologies.
The basic unit of life is the cell. It contains many protein molecules located in its different organelles. The growth and reproduction of a cell, as well as most of its other biological functions, are performed via these proteins. But proteins in different organelles or subcellular locations have different functions. Facing the avalanche of protein sequences generated in the post-genomic age, we are challenged to develop high-throughput tools for identifying the subcellular localization of proteins based on their sequence information alone. Although considerable efforts have been made in this regard, the problem is far from being solved. Most existing methods can deal with single-location proteins only. Actually, proteins with multiple locations may have some special biological functions that are particularly important for drug targets. Using the ML-GKR (Multi-Label Gaussian Kernel Regression) method, we developed a new predictor called "pLoc-mGpos" by in-depth extraction of the key information from GO (Gene Ontology) into Chou's general PseAAC (Pseudo Amino Acid Composition) for predicting the subcellular localization of Gram-positive bacterial proteins with both single and multiple location sites. Rigorous cross-validation on the same stringent benchmark dataset indicated that the proposed pLoc-mGpos predictor is remarkably superior to "iLoc-Gpos", the state-of-the-art predictor for the same purpose. To maximize the convenience of most experimental scientists, a user-friendly web server for the new powerful predictor has been established at http://www.jci-bioinfo.cn/pLoc-mGpos/, by which users can easily get their desired results without the need to go through the complicated mathematics involved.
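The kernel-weighted averaging at the core of Gaussian kernel regression can be shown in one dimension. This is a hedged simplification: ML-GKR is multi-label and operates on PseAAC feature vectors, whereas this Nadaraya-Watson-style toy uses scalar samples and an invented bandwidth.

```python
# Hedged sketch: 1-D Gaussian kernel regression (Nadaraya-Watson form).
# ML-GKR itself is multi-label over feature vectors; this toy with an
# invented bandwidth only shows the kernel-weighted averaging idea.
import math

def gkr(x, xs, ys, bandwidth=1.0):
    """Predict y(x) as a Gaussian-kernel weighted average of samples."""
    w = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
print(round(gkr(1.0, xs, ys, bandwidth=0.5), 3))  # nearest sample dominates
```

Samples close to the query get exponentially larger weights, so the prediction follows the local trend of the training data; the bandwidth controls how local "local" is.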
The massive technological advancements around the world have created significant competition among companies, where each company tries to attract customers using different techniques. One of the recent techniques is Augmented Reality (AR). AR is a new technology capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in industries of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became portable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields of our life in tangible and exciting ways, such as news and sports, and is used in many domains such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate learning, as it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete with one another, with every one of them exerting its best to gain customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges, and future trends.
Blockchain technology plays a significant role in the present era of information technology. In the last few years, this technology has been used effectively in several domains. It has already made significant differences in human life and is intended to have a noticeable impact in many other domains in the forthcoming years. The rapid growth in blockchain technology has created numerous new possibilities for use, especially for healthcare applications. Digital healthcare services require highly effective security methodologies that can integrate data security with the available management strategies. To test and understand this goal of security management from a Saudi Arabian perspective, the authors performed a numerical analysis and simulation through a multi-criteria decision-making approach in this study. The authors adopted the fuzzy Analytical Hierarchy Process (AHP) for evaluating the effectiveness and then applied the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) technique to simulate the validation of results. For eliciting highly corroborative and conclusive results, the study referred to a real-time project: a diabetes patients' management application in the Kingdom of Saudi Arabia (KSA). The results discussed in this paper are scientifically proven and validated through various analysis approaches. Hence the present study can be a credible basis for other similar endeavours being undertaken in the domain of blockchain research.
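The weight-derivation step at the heart of AHP can be sketched with crisp numbers. Note the hedge: the study uses the *fuzzy* AHP, which works on fuzzy pairwise judgments; the comparison values below and the column-normalization method are a simplified crisp stand-in.

```python
# Hedged sketch: crisp AHP priority weights from a pairwise comparison
# matrix (values invented). The study's fuzzy AHP extends this idea to
# fuzzy judgments; only the basic weight derivation is shown.

def ahp_priorities(pairwise):
    """Normalize each column, then average rows to get priority weights."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Criterion A judged 3x as important as B and 5x as important as C;
# reciprocals fill the lower triangle, ones fill the diagonal.
m = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_priorities(m)
print([round(x, 2) for x in w])   # A gets the largest weight
```

The resulting weights sum to 1 and rank the criteria; a subsequent TOPSIS step would then score alternatives against these weighted criteria.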
Distributed denial of service (DDoS) attacks continue to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, DDoS attacks have a history of flooding the victim network with an enormous number of packets, hence exhausting the resources and preventing legitimate users from accessing them. Even with standard DDoS defense mechanisms in place, attackers are still able to launch attacks. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks and the various models involved in attacks, and to provide a timeline of defense mechanisms with their improvements to combat DDoS attacks. In addition to this, a novel scheme is proposed to detect DDoS attacks efficiently by using the MapReduce programming model.
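The MapReduce detection idea can be emulated in miniature with map/reduce over a packet log: count packets per source and flag sources above a rate threshold. The log format, addresses, and threshold are invented; a real deployment would run this as distributed map and reduce tasks over massive traffic captures.

```python
# Illustrative sketch (log format and threshold invented): emulate the
# MapReduce pattern locally — map each packet to (src_ip, 1), reduce by
# summing per key, then flag sources exceeding a packet-count threshold.
from functools import reduce

# Map phase output: one (src_ip, 1) pair per observed packet.
packets = [("10.0.0.5", 1), ("10.0.0.5", 1), ("10.0.0.5", 1),
           ("192.168.1.2", 1)]

def reducer(counts, pair):
    """Reduce phase: sum the 1s per source address."""
    src, n = pair
    counts[src] = counts.get(src, 0) + n
    return counts

counts = reduce(reducer, packets, {})
THRESHOLD = 2
suspects = [ip for ip, n in counts.items() if n > THRESHOLD]
print(suspects)  # -> ['10.0.0.5']
```

The per-key aggregation is embarrassingly parallel, which is why MapReduce suits flood detection: each reducer sees all packets for one source regardless of which capture node recorded them.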
The Internet of Things (IoT) is an emerging paradigm that integrates devices and services to collect real-time data from surroundings and process the information at a very high speed to make decisions. Despite several advantages, the resource-constrained and heterogeneous nature of IoT networks makes them a favorite target for cybercriminals. A single successful attempt at network intrusion can compromise the complete IoT network, which can lead to unauthorized access to the valuable information of consumers and industries. To overcome the security challenges of IoT networks, this article proposes a lightweight deep autoencoder (DAE) based cyberattack detection framework. The proposed approach learns the normal and anomalous data patterns to identify the various types of network intrusions. The most significant feature of the proposed technique is its lower complexity, which is attained by reducing the number of operations. To optimally train the proposed DAE, a range of hyperparameters was determined through extensive experiments that ensure higher attack detection accuracy. The efficacy of the suggested framework is evaluated via two standard and open-source datasets. The proposed DAE achieved accuracies of 98.86% and 98.26% on NSL-KDD, and 99.32% and 98.79% on UNSW-NB15, in the binary-class and multi-class scenarios, respectively. The performance of the suggested attack detection framework is also compared with several state-of-the-art intrusion detection schemes. Experimental outcomes proved the promising performance of the proposed scheme for cyberattack detection in IoT networks.
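The autoencoder principle — reconstruct normal traffic well, so anomalies show large reconstruction error — can be demonstrated without a deep network. This is explicitly *not* the paper's DAE: as a stand-in, a single linear unit trained with Oja's rule learns the dominant direction of synthetic 2-D "normal" features, and points it reconstructs poorly are treated as intrusions.

```python
# Hedged sketch (not the paper's deep autoencoder): a one-unit linear
# autoencoder trained with Oja's rule on synthetic 2-D "normal" traffic
# features. Points with large reconstruction error are flagged as
# anomalous -- the same decision principle a trained DAE applies.
import math

def train(data, lr=0.1, epochs=50):
    w = [1.0, 0.0]                               # initial weight vector
    for _ in range(epochs):
        for x in data:
            y = w[0] * x[0] + w[1] * x[1]        # encode: 1-D projection
            w = [wi + lr * y * (xi - y * wi)     # Oja's learning rule
                 for wi, xi in zip(w, x)]
    return w

def recon_error(w, x):
    """Distance between x and its decode-of-encode reconstruction."""
    y = w[0] * x[0] + w[1] * x[1]
    return math.dist(x, [y * w[0], y * w[1]])

normal = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [-1.0, -1.0]]
w = train(normal)
# A point along the learned "normal" direction reconstructs far better
# than one orthogonal to it:
print(recon_error(w, [1.0, 1.0]) < recon_error(w, [1.0, -1.0]))  # -> True
```

A deep autoencoder generalizes this by learning a nonlinear low-dimensional manifold of normal behavior instead of a single direction, but the thresholded-reconstruction-error decision is the same.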
Ever since its outbreak in Wuhan, COVID-19 has cloaked the entire world in a pall of despondency and uncertainty. The present study describes an exploratory analysis of all COVID-19 cases in Saudi Arabia. Besides, the study has executed a forecasting model for predicting the possible number of COVID-19 cases in Saudi Arabia up to a defined period. Towards this intent, the study analyzed different age groups of patients (child, adult, elderly) who were affected by COVID-19. The analysis was done city-wise and also included the number of recoveries recorded in different cities. Furthermore, the study also discusses the impact of COVID-19 on the economy. For conducting the stated analysis, the authors created a list of factors that are known to cause the spread of COVID-19. As an effective countermeasure to contain the spread of Coronavirus in Saudi Arabia, this study also proposes to identify the most effective computer science technique that can be used by healthcare professionals. For this, the study employs the Fuzzy Analytic Hierarchy Process integrated with the Technique for Order of Preference by Similarity to Ideal Solution (F.AHP.TOPSIS). After prioritizing the various computer science techniques, the ranking order obtained for the different techniques/tools to contain COVID-19 was: A4 > A1 > A2 > A5 > A3. Since the blockchain technique obtained the highest priority, the study recommends that it must be used extensively as an efficacious and accurate means to combat COVID-19.
Ever since its outbreak in the Wuhan city of China,COVID-19 pandemic has engulfed more than 211 countries in the world,leaving a trail of unprecedented fatalities.Even more debilitating than the infection itself,were ...Ever since its outbreak in the Wuhan city of China,COVID-19 pandemic has engulfed more than 211 countries in the world,leaving a trail of unprecedented fatalities.Even more debilitating than the infection itself,were the restrictions like lockdowns and quarantine measures taken to contain the spread of Coronavirus.Such enforced alienation affected both the mental and social condition of people significantly.Social interactions and congregations are not only integral part of work life but also form the basis of human evolvement.However,COVID-19 brought all such communication to a grinding halt.Digital interactions have failed to enthuse the fervor that one enjoys in face-to-face meets.The pandemic has shoved the entire planet into an unstable state.The main focus and aim of the proposed study is to assess the impact of the pandemic on different aspects of the society in Saudi Arabia.To achieve this objective,the study analyzes two perspectives:the early approach,and the late approach of COVID-19 and the consequent effects on different aspects of the society.We used a Machine Learning based framework for the prediction of the impact of COVID-19 on the key aspects of society.Findings of this research study indicate that financial resources were the worst affected.Several countries are facing economic upheavals due to the pandemic and COVID-19 has had a considerable impact on the lives as well as the livelihoods of people.Yet the damage is not irretrievable and the world’s societies can emerge out of this setback through concerted efforts in all facets of life.展开更多
Routing protocols in Mobile Ad Hoc Networks (MANETs) operate with the Expanding Ring Search (ERS) mechanism to avoid flooding the network during the route tracing step. The ERS mechanism searches the network with incrementally increasing Time to Live (TTL) values prescribed by the respective routing protocol, saving both energy and time. This work exploits the relation between the TTL value of a packet, the traffic on a node, and the ERS mechanism for routing in MANETs, and achieves an Adaptive ERS based Per Hop Behavior (AERS-PHB) rendition of request handling. Each search request is classified based on ERS attributes and then processed for routing while the node traffic is monitored. Two algorithms are designed and examined for performance under an exhaustive parametric setup and employed on adaptive premises to enhance the performance of the network. The network is tested under a congestion scenario based on buffer utilization at the node level and link utilization via the back-off stage of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Both link- and node-level congestion are handled through retransmission and rerouting of packets based on ERS parameters. The aim is to drop the packets that exhaust the network energy while forwarding the packets nearer to the destination with priority. Extensive simulations are carried out for network scalability, node speed, and network terrain size. Our results show that the proposed models attain evident performance enhancement.
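The expanding-ring idea above can be sketched in a few lines. The hop-distance map and the TTL schedule below are illustrative assumptions for the sketch, not the paper's protocol parameters.

```python
def expanding_ring_search(hop_distance, target, ttl_schedule=(1, 3, 5, 7)):
    """Search for `target` with progressively larger TTL rings.

    `hop_distance` maps node -> hops from the source; a ring search with
    time-to-live `ttl` reaches every node whose distance is <= ttl.
    Returns the TTL that found the target, or None if all rings fail.
    """
    for ttl in ttl_schedule:
        reached = {n for n, d in hop_distance.items() if d <= ttl}
        if target in reached:
            return ttl   # route found without flooding the whole network
    return None          # a real protocol would fall back to a full flood

# Example topology: nodes at various hop counts from the source.
topology = {"A": 1, "B": 2, "C": 4, "D": 6}
found_at = expanding_ring_search(topology, "C")
```

Because each ring only reaches nodes within the current TTL, nearby destinations are found without the cost of a network-wide flood, which is the energy/time saving the abstract refers to.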
Funding: This work was funded by Institutional Fund Projects under Grant No. IFPNC-001-611-2020.
Abstract: Landmines continue to pose an ongoing threat in various regions around the world, with countless buried landmines affecting numerous human lives. The detonation of these landmines results in thousands of casualties reported worldwide annually. Therefore, there is a pressing need to employ diverse landmine detection techniques for their removal. One effective approach for landmine detection is UAV (Unmanned Aerial Vehicle)-based airborne magnetometry, which identifies magnetic anomalies in the local terrestrial magnetic field. It can generate a contour plot or heat map that visually represents the magnetic field strength. Despite the effectiveness of this approach, landmine removal remains a challenging and resource-intensive task, fraught with risks. Edge computing, on the other hand, can play a crucial role in critical drone monitoring applications like landmine detection. By processing data locally on a nearby edge server, edge computing can reduce communication latency and bandwidth requirements, allowing real-time analysis of magnetic field data. It enables faster decision-making and more efficient landmine detection, potentially saving lives and minimizing the risks involved in the process. Furthermore, edge computing can provide enhanced security and privacy by keeping sensitive data close to the source, reducing the chances of data exposure during transmission. This paper introduces the MAGnetometry Imaging based Classification System (MAGICS), a fully automated UAV-based system designed for landmine and buried object detection and localization. We have developed an efficient deep learning-based strategy for automatic image classification using magnetometry dataset traces. By simulating the proposal in various network scenarios, we have successfully detected landmine signatures present in the magnetometry images. The trained models exhibit significant performance improvements, achieving a maximum mean average precision value of 97.8%.
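The anomaly step behind such a magnetometry heat map can be sketched as flagging grid cells that deviate strongly from the survey mean. The grid values and the deviation threshold below are illustrative assumptions, not the paper's data or method.

```python
def anomaly_mask(field, threshold=5.0):
    """Flag grid cells whose magnetic reading deviates from the survey
    mean by more than `threshold` (same units as the readings, e.g. nT).
    Flagged cells are the hot spots a magnetometry heat map would show."""
    cells = [v for row in field for v in row]
    mean = sum(cells) / len(cells)
    return [[abs(v - mean) > threshold for v in row] for row in field]

# A 3x3 survey patch with one strong local anomaly
# (e.g. the signature of a buried metallic object).
patch = [
    [50.0, 50.5, 49.8],
    [50.2, 72.0, 50.1],   # 72.0 is the anomalous reading
    [49.9, 50.3, 50.0],
]
mask = anomaly_mask(patch)
```

A real pipeline would feed such per-cell anomaly maps (or the raw heat map images) to the deep classifier; the sketch only shows where the hot spots come from.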
Funding: Supported by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); BBSRC, UK (RM32G0178B8).
Abstract: Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks. It balances the learning of the localization and classification tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we submit a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared with other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, achieving accuracy improvements of 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
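The abstract does not give the exact form of DualLoss, so the following is only a generic sketch of how a localization loss and a classification loss are commonly balanced; the weight `alpha` and both loss values are assumptions for illustration.

```python
def dual_loss(loc_loss, cls_loss, alpha=0.5):
    """Weighted combination of a localization loss (e.g. a Dice/IoU-based
    segmentation term) and a classification loss (e.g. cross-entropy).
    `alpha` trades the two tasks off; its value here is an assumed
    hyperparameter, not the paper's setting."""
    return alpha * loc_loss + (1.0 - alpha) * cls_loss

# Example: a batch where localization is doing better than classification.
total = dual_loss(loc_loss=2.0, cls_loss=4.0, alpha=0.25)
```

In multitask training, such a combined scalar is what backpropagation minimizes, so the weight directly controls how much each task shapes the shared encoder.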
Funding: This work was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R333), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The concept of smart homes has grown in prominence in recent years. Major challenges linked to smart homes are identity theft, data safety, automated decision-making for IoT-based devices, and the security of the device itself. Current home automation systems try to address these issues, but there is still an urgent need for a dependable and secure smart home solution that includes automatic decision-making systems and methodical features. This paper proposes a smart home system based on ensemble learning of random forest (RF) and convolutional neural networks (CNN) for programmed decision-making tasks, such as categorizing gadgets as "OFF" or "ON" based on their normal routine in homes. We have integrated emerging blockchain technology to provide secure, decentralized, and trustworthy authentication and recognition of IoT devices. Our system consists of a 5 V relay circuit, various sensors, and a Raspberry Pi server and database for managing devices. We have also developed an Android app that communicates with the server interface through an HTTP web interface and an Apache server. The feasibility and efficacy of the proposed smart home automation system have been evaluated in both laboratory and real-time settings. It is essential to use inexpensive, scalable, and readily available components and technologies in smart home automation systems. Additionally, we must incorporate a comprehensive security- and privacy-centric design that emphasizes risk assessments, such as cyberattacks, hardware security, and other cyber threats. The trial results support the proposed system and demonstrate its potential for use in everyday life.
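The RF+CNN decision step can be sketched as a soft-voting ensemble over the two models' "ON" probabilities. The averaging rule, the threshold, and the probability values below are illustrative assumptions, not the paper's trained models.

```python
def ensemble_predict(p_rf, p_cnn, threshold=0.5):
    """Soft-voting ensemble: average the 'ON' probability produced by a
    random-forest classifier and a CNN classifier, then apply a decision
    threshold to choose the device state."""
    p_on = (p_rf + p_cnn) / 2.0
    return "ON" if p_on >= threshold else "OFF"

# Both models fairly confident the lamp should be on at this hour.
state = ensemble_predict(p_rf=0.9, p_cnn=0.7)
```

Averaging the two probability streams lets one model compensate for the other's occasional misreads, which is the usual motivation for ensembling heterogeneous learners.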
基金supported by the Deanship of Scientific Research at Prince Sattam bin Abdulaziz University under the research project (PSAU/2023/01/23001).
Abstract: The development of human-robot interaction has been continuously increasing for the last decades. Through this development, interactions using a remotely controlled telepresence robot in insecure and hazardous environments have become simpler and safer. The stability of the audio-video communication connection and data transmission has already been well handled by fast-growing technologies such as 5G and 6G. However, the design of the physical parameters, e.g., maneuverability, controllability, and stability, still needs attention. Therefore, this paper presents a systematic, controlled design and implementation of a telepresence mobile robot. The primary focus of this paper is the computational analysis and experimental implementation design with sophisticated position control, which autonomously controls the robot's position and speed when reaching an obstacle. A system model and a position controller design are developed with root locus points. The robot design is verified experimentally, showing the robot's agreement with and control of the desired position. The robot was tested by considering various parameters: driving straight ahead, right turns, self-localization, and a complex path. The results prove that the proposed approach is flexible and adaptable and gives a better alternative. The experimental results show that the proposed method significantly minimizes obstacle hits.
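The obstacle-aware position control described above can be loosely sketched as a discrete proportional controller whose commanded motion is clamped before an obstacle. The gain, time step, speed limit, and 1-D geometry are all illustrative assumptions; the paper's controller is designed via a system model and root locus, not this simple P-law.

```python
def position_step(x, target, obstacle, kp=0.5, dt=0.1, v_max=1.0):
    """One discrete control step: proportional velocity command toward
    `target`, saturated at `v_max`, and halted before `obstacle` (a 1-D
    position ahead of the robot, or None if the path is clear)."""
    v = max(-v_max, min(v_max, kp * (target - x)))
    x_next = x + v * dt
    if obstacle is not None and x_next >= obstacle:
        return obstacle  # stop at the obstacle instead of hitting it
    return x_next

# Drive from 0 toward 10 m with an obstacle detected at 0.4 m.
x = 0.0
for _ in range(20):
    x = position_step(x, target=10.0, obstacle=0.4)
```

The robot converges on the setpoint when the path is clear and parks at the obstacle when one appears, which mirrors the "controls the robot's position and speed when reaching an obstacle" behaviour in the abstract.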
Funding: This research was supported and funded by the KAU Scientific Endowment, King Abdulaziz University, Jeddah, Saudi Arabia.
Abstract: A document layout can be more informative than merely a document's visual and structural appearance. Thus, document layout analysis (DLA) is considered a necessary prerequisite for advanced processing and detailed document image analysis to be further used in several applications and for different objectives. This research extends the traditional approaches of DLA and introduces the concept of semantic document layout analysis (SDLA) by proposing a novel framework for semantic layout analysis and characterization of handwritten manuscripts. The proposed SDLA approach enables the derivation of implicit information and semantic characteristics, which can be effectively utilized in dozens of practical applications for various purposes, in a way bridging the semantic gap and providing more understandable high-level document image analysis and more invariant characterization via absolute and relative labeling. This approach is validated and evaluated on a large dataset of Arabic handwritten manuscripts comprising complex layouts. The experimental work shows promising results in terms of accurate and effective semantic characteristic-based clustering and retrieval of handwritten manuscripts. It also indicates the expected efficacy of using the capabilities of the proposed approach in automating and facilitating many functional, real-life tasks such as effort estimation and pricing of transcription or typing of such complex manuscripts.
Funding: This work was supported and funded by the KAU Scientific Endowment, King Abdulaziz University, Jeddah, Saudi Arabia, grant number 077416-04.
Abstract: The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to the continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant necessity for reliable retrieval capabilities enabling quick search among a vast number of pictures. Humans widely employ faces for recognizing and identifying people. Thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires considerable additional storage space beyond that already occupied by the raw face images to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as Hard BioHash and Soft BioHash respectively, to be used as a search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. The experimental outcomes, obtained on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes), demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved the retrieval performance compared to solely using Hard BioHash or Soft BioHash in isolation: the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the results of the Hard BioHash method by (50% on the 1000 samples and 30% on the 5743 samples), and the Soft BioHash method by (78% on the 1000 samples and 63% on the 5743 samples).
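The LSH-based retrieval idea can be sketched with random-hyperplane hashing: similar feature vectors receive hash codes that agree on most bits, so top-k retrieval reduces to ranking by Hamming distance. The 4-dimensional vectors, seed, and code length below are toy assumptions, not the paper's BioHash construction.

```python
import random

def make_hyperplanes(dim, n_bits, seed=7):
    """Random Gaussian hyperplanes; the fixed seed makes hashes reproducible."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_code(vec, planes):
    """Random-hyperplane LSH: one bit per plane, set by the sign of the
    dot product. Nearby vectors tend to fall on the same side of most
    planes and therefore share most bits."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = make_hyperplanes(dim=4, n_bits=16)
query = [1.0, 0.9, -0.2, 0.1]
gallery = {
    "similar_face": [1.1, 0.8, -0.1, 0.2],      # close to the query vector
    "different_face": [-1.0, -0.9, 0.5, -0.3],  # roughly opposite direction
}
# Top-k retrieval: rank gallery entries by Hamming distance of hash codes.
qcode = lsh_code(query, planes)
ranking = sorted(gallery, key=lambda k: hamming(qcode, lsh_code(gallery[k], planes)))
```

Comparing short binary codes instead of long float vectors is what gives LSH its storage and speed advantage in large-scale databases.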
Funding: Funding for this study was received from the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through project number IFPHI-021-135-2020, and from King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Increasing renewable energy targets globally have raised the requirement for the efficient and profitable operation of solar photovoltaic (PV) systems. In light of this requirement, this paper provides a path for evaluating the operating condition and improving the power output of PV systems in a grid-integrated environment. To achieve this, different types of faults in grid-connected PV systems (GCPVs) and their impact on the energy loss associated with the electrical network are analyzed. A data-driven approach using neural networks (NNs) is proposed to perform root cause analysis and localize the fault to the component level in the system. The localized fault condition is combined with a parallel operation of adaptive neuro-fuzzy inference units (ANFIUs) to develop a power mismatch-based control unit (PMCU) for improving the power output of the GCPV. To develop the proposed framework, a 10-kW single-phase GCPV is simulated for training the NN-based anomaly detection approach with 14 deviation signals. Further, the developed algorithm is combined with the PMCU implemented with the experimental setup of the GCPV. The results identified 98.2% training accuracy and a prediction speed of 43,000 observations/sec for the trained classifier, and improved power output with reduced voltage and current harmonics for the grid-connected PV operation.
Abstract: The main idea behind the present research is to design a state-feedback controller for an underactuated nonlinear rotary inverted pendulum module by employing the linear quadratic regulator (LQR) technique using local approximation. The LQR is an excellent method for developing a controller for nonlinear systems. It provides optimal feedback that makes the closed-loop system robust and stable, rejecting external disturbances. A model-based optimal controller for a nonlinear system such as a rotary inverted pendulum had not previously been designed and implemented using the Newton-Euler and Lagrange methods with local approximation. Therefore, applying LQR to an underactuated nonlinear system was vital for designing a stable controller. A mathematical model has been developed for the controller design by utilizing the Newton-Euler and Lagrange methods. The nonlinear model has been linearized around an equilibrium point. Linear and nonlinear models have been compared to find the range in which the behaviour of the linear and nonlinear models is similar. The MATLAB LQR function and system dynamics have been used to estimate the controller parameters. Simulink has been used for the performance evaluation of the designed controller. Linear and nonlinear models have been simulated along with the designed controller. Simulations have been performed for the designed controller over the linear and nonlinear systems under different conditions by varying system variables. The results show that the system is stable and robust enough to act against external disturbances. The controller maintains the rotary inverted pendulum in an upright position and rejects disruptions like falling under gravitational force or any external disturbance by adjusting the rotation of the horizontal link, in both linear and nonlinear environments, within a specific range. The controller has been practically designed and implemented. It is vivid from the results that the controller is robust enough to reject the disturbances in milliseconds and keep the pendulum arm deflection angle at zero degrees.
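The core of the LQR computation can be illustrated on a scalar (one-state) plant: iterate the discrete Riccati recursion to a fixed point and read off the optimal gain. The paper's pendulum is a multi-state continuous model solved with MATLAB's lqr; the scalar plant, cost weights, and discrete-time setting below are simplifying assumptions for the sketch.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Discrete-time LQR gain for the scalar plant x[k+1] = a*x[k] + b*u[k].

    Iterates the Riccati recursion
        P <- q + a^2*P - (a*b*P)^2 / (r + b^2*P)
    to a fixed point, then returns the optimal gain K for the state-feedback
    control law u = -K*x.
    """
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Unstable open-loop plant (pole at 1.2, like a pendulum falling away from
# upright); LQR pulls the closed-loop pole a - b*K inside the unit circle.
K = dlqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)
```

Raising q relative to r penalizes state deviation more than control effort and yields a more aggressive gain, which is how the q/r weights shape disturbance rejection.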
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this project under Grant No. RG-91-611-42.
Abstract: Rapid technological advancement has enabled modern healthcare systems to provide more sophisticated and real-time services on the Internet of Medical Things (IoMT). The existing cloud-based, centralized IoMT architectures are vulnerable to multiple security and privacy problems. The blockchain-enabled IoMT is an emerging paradigm that can ensure the security and trustworthiness of medical data sharing in IoMT networks. This article presents a private and easily expandable blockchain-based framework for the IoMT. The proposed framework contains several participants, including a private blockchain, hospital management systems, cloud service providers, doctors, and patients. Data security is ensured by incorporating an attribute-based encryption scheme. Furthermore, an IoT-friendly consensus algorithm is deployed to ensure fast block validation and high scalability in the IoMT network. The proposed framework can perform multiple healthcare-related services in a secure and trustworthy manner. The performance of blockchain read/write operations is evaluated in terms of transaction throughput and latency. Experimental outcomes indicate that the proposed scheme achieved an average throughput of 857 TPS for read operations and 151 TPS for write operations. The average latency is 61 ms for read operations and 16 ms for write operations, respectively.
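The tamper-evidence property that blockchain brings to medical records can be sketched with a minimal hash chain: each block's hash commits to its payload and to the previous block's hash, so any alteration breaks verification. The block fields and sample payload are illustrative assumptions, not the paper's framework or consensus algorithm.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Minimal block: the hash commits to the index, the payload, and the
    previous block's hash, which is what chains blocks together."""
    body = json.dumps({"index": index, "data": data, "prev": prev_hash},
                      sort_keys=True)
    return {"index": index, "data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, blk in enumerate(chain):
        body = json.dumps({"index": blk["index"], "data": blk["data"],
                           "prev": blk["prev"]}, sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(0, "genesis", "0" * 64)
chain = [genesis,
         make_block(1, {"patient": "anon-17", "op": "read"}, genesis["hash"])]
```

Editing any recorded access event silently is impossible here: the recomputed hash no longer matches, and every later block's `prev` link breaks too.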
Funding: The authors acknowledge support from the USA-based research group (Computing and Engineering, Indiana University) and the KSA-based research group (Department of Computer Science, King Abdulaziz University).
Abstract: Complex networks in the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable in the future research directions of 6G, photonic, IoT, brain, and other communication technologies. Heavy data traffic, huge capacity, and a minimal level of dynamic latency are some of the future requirements in 5G+ and 6G communication systems. In emerging communication technologies such as 5G+/6G-based photonic sensor communication, complex networks play an important role in improving these future requirements of IoT and brain communication. In this paper, the state of a complex system considered as a complex network (the connections between brain cells, neurons, etc.) needs measurement for analyzing the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding the observability influenced by the concept of contraction provides the stability of neurons. When IoT devices or any sensors fail to measure the state of the connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication will be lost. Similarly, neurons, considered as neuron sensors in the brain under the complex-network concept, lose communication and connections. Therefore, affected sensor nodes in a contraction are compensated equivalently to maintain the stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor for employing a complex network. To analyze the observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey shows that the contraction size will allow us to improve the performance of brain communication, the stability of neurons, etc., through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
Funding: This research was supported through the Post-Doctoral Fellowship Scheme under Grants Q.J130000.21A2.06E03 and Q.J130000.2409.08G77.
Abstract: Smart environments offer various services, including smart cities, e-healthcare, transportation, and wearable devices, generating multiple traffic flows with different Quality of Service (QoS) demands. Achieving the desired QoS with security in this heterogeneous environment can be challenging due to traffic flows and device management, unoptimized routing with resource awareness, and security threats. Software Defined Networks (SDN) can help manage these devices through centralized SDN controllers and address these challenges. Various schemes have been proposed to integrate SDN with emerging technologies for better resource utilization and security. Software Defined Wireless Body Area Networks (SDWBAN) and Software Defined Internet of Things (SDIoT) are the recently introduced frameworks to overcome these challenges. This study surveys the existing SDWBAN and SDIoT routing and security challenges. The paper discusses each solution in detail and analyses its weaknesses. It covers SDWBAN frameworks for efficient management of WBAN networks, management of IoT devices, and proposed security mechanisms for IoT and data security in WBAN. The survey provides insights into the state of the art in SDWBAN and SDIoT routing with resource awareness and security threats. Finally, this study highlights potential areas for future research.
Funding: The research is supported by the Natural Science Foundation of Zhejiang Province (LQ20F020008) and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (Grant Nos. 2023C03203, 2023C01150).
Abstract: The rapid growth of smart technologies and services has intensified the challenges surrounding identity authentication techniques. Biometric credentials are increasingly being used for verification due to their advantages over traditional methods, making it crucial to safeguard the privacy of people's biometric data in various scenarios. This paper offers an in-depth exploration of privacy-preserving techniques and potential threats to biometric systems. It proposes a novel and thorough taxonomy survey of privacy-preserving techniques, as well as a systematic framework for categorizing the field's existing literature. We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities, such as face, fingerprint, and eye detection. The survey encompasses various categories of privacy-preserving mechanisms and examines the trade-offs between security, privacy, and recognition performance, as well as the open issues and future research directions. It aims to provide researchers, professionals, and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and serves as a foundation for the development of more secure and privacy-preserving biometric technologies.
Abstract: The basic unit of life is the cell. It contains many protein molecules located at its different organelles. The growth and reproduction of a cell, as well as most of its other biological functions, are performed via these proteins. But proteins in different organelles or subcellular locations have different functions. Facing the avalanche of protein sequences generated in the postgenomic age, we are challenged to develop high-throughput tools for identifying the subcellular localization of proteins based on their sequence information alone. Although considerable efforts have been made in this regard, the problem is far from being solved. Most existing methods can deal with single-location proteins only. Actually, proteins with multiple locations may have some special biological functions that are particularly important for drug targets. Using the ML-GKR (Multi-Label Gaussian Kernel Regression) method, we developed a new predictor called "pLoc-mGpos" by in-depth extraction of the key information from GO (Gene Ontology) into Chou's general PseAAC (Pseudo Amino Acid Composition) for predicting the subcellular localization of Gram-positive bacterial proteins with both single and multiple location sites. Rigorous cross-validation on the same stringent benchmark dataset indicated that the proposed pLoc-mGpos predictor is remarkably superior to "iLoc-Gpos", the state-of-the-art predictor for the same purpose. To maximize the convenience of most experimental scientists, a user-friendly web server for the new powerful predictor has been established at http://www.jci-bioinfo.cn/pLoc-mGpos/, by which users can easily get their desired results without the need to go through the complicated mathematics involved.
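The Gaussian kernel regression at the heart of ML-GKR can be sketched in its simplest (Nadaraya-Watson) form: a prediction is a kernel-weighted average of training labels. The 1-D inputs, labels, and bandwidth below are toy assumptions; the actual predictor operates on multi-label GO/PseAAC feature vectors.

```python
import math

def gkr_predict(x, samples, bandwidth=1.0):
    """Gaussian kernel regression (Nadaraya-Watson): the prediction is a
    kernel-weighted average of training labels, with weights decaying in
    the squared distance between the query and each training point."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2))
               for xi, _ in samples]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, samples)) / total

# Toy training set: the label peaks at the middle point.
train = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
pred = gkr_predict(1.0, train)
```

The same weighting idea extends to multiple labels by averaging a label vector per training point, which is roughly how a multi-label GKR produces scores for each candidate subcellular location.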
Abstract: The massive technological advancements around the world have created significant competition among companies, where each company tries to attract customers using different techniques. One of the recent techniques is Augmented Reality (AR). AR is a new technology capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications of different kinds have been used in industry and disseminated all over the world. AR will really alter the way individuals view the world. AR is still in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became portable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in tangible and exciting ways in various fields of our lives, such as news, sports, electronic commerce, promotion, design, and business. In addition, AR is used to facilitate learning, as it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete with one another, each exerting its best efforts to gain customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges, and future trends.
Funding: Funding for this study was received from the Ministry of Education and the Deanship of Scientific Research at King Abdulaziz University, Kingdom of Saudi Arabia, under Grant No. IFPHI-264-611-2020.
Abstract: Blockchain technology plays a significant role in the present era of information technology. In the last few years, this technology has been used effectively in several domains. It has already made significant differences in human life and is intended to have a noticeable impact in many other domains in the forthcoming years. The rapid growth in blockchain technology has created numerous new possibilities for use, especially for healthcare applications. Digital healthcare services require highly effective security methodologies that can integrate data security with the available management strategies. To test and understand this goal of security management from a Saudi Arabian perspective, the authors performed a numerical analysis and simulation through a multi-criteria decision-making approach in this study. The authors adopted the fuzzy Analytical Hierarchy Process (AHP) for evaluating the effectiveness and then applied the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) technique to simulate the validation of results. For eliciting highly corroborative and conclusive results, the study referred to a real-time project: a diabetes patients' management application of the Kingdom of Saudi Arabia (KSA). The results discussed in this paper are scientifically proven and validated through various analysis approaches. Hence the present study can be a credible basis for other similar endeavours being undertaken in the domain of blockchain research.
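The TOPSIS step used above (and in the F.AHP.TOPSIS ranking elsewhere in this listing) can be sketched in its crisp form: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by relative closeness. The decision matrix, weights, and benefit/cost labels below are illustrative assumptions; the papers use the fuzzy variant.

```python
def topsis(matrix, weights, benefit):
    """Crisp TOPSIS ranking. `matrix` is alternatives x criteria,
    `weights` sum to 1, and `benefit[j]` is True for criteria where
    larger is better (False for cost criteria)."""
    cols = list(zip(*matrix))
    norms = [sum(v * v for v in col) ** 0.5 for col in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_pos = sum((v - i) ** 2 for v, i in zip(row, ideal)) ** 0.5
        d_neg = sum((v - a) ** 2 for v, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores

# Three alternatives scored on two benefit criteria and one cost criterion.
m = [[7, 9, 9],
     [8, 7, 8],
     [9, 6, 7]]
scores = topsis(m, weights=[0.4, 0.4, 0.2], benefit=[True, True, False])
```

The alternative with the highest closeness score is ranked first, which is how orderings like A4 > A1 > A2 > A5 > A3 are produced once the fuzzy judgments have been defuzzified.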
Abstract: Distributed denial of service (DDoS) attacks continue to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, DDoS attacks have a history of flooding the victim network with an enormous number of packets, hence exhausting the resources and preventing legitimate users from accessing them. Even with standard DDoS defense mechanisms in place, attackers are still able to launch attacks. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks and the various models involved in attacks, and to provide a timeline of defense mechanisms with their improvements to combat DDoS attacks. In addition to this, a novel scheme is proposed to detect DDoS attacks efficiently by using the MapReduce programming model.
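A MapReduce-style detection pass can be sketched as a per-source packet count: the map phase emits (source IP, 1) pairs, the reduce phase sums them, and sources above a rate threshold are flagged. The packet format, threshold, and single-process execution are illustrative assumptions, not the paper's scheme or a distributed Hadoop job.

```python
from collections import defaultdict

def map_phase(packets):
    """Map: emit a (source_ip, 1) pair for every observed packet."""
    return [(p["src"], 1) for p in packets]

def reduce_phase(pairs):
    """Reduce: sum the emitted counts per source IP."""
    counts = defaultdict(int)
    for ip, one in pairs:
        counts[ip] += one
    return dict(counts)

def flag_ddos(counts, threshold=100):
    """Sources whose packet count exceeds the threshold in the observation
    window are flagged as potential flood participants."""
    return {ip for ip, n in counts.items() if n > threshold}

# A window dominated by one flooding source and one normal client.
packets = [{"src": "10.0.0.9"}] * 150 + [{"src": "10.0.0.2"}] * 3
suspects = flag_ddos(reduce_phase(map_phase(packets)))
```

The appeal of casting detection this way is that both phases parallelize trivially across a cluster, so the count can keep up with flood-scale traffic volumes.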
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this project under Grant No. IFPDP-279-22.
Abstract: The Internet of Things (IoT) is an emerging paradigm that integrates devices and services to collect real-time data from the surroundings and process the information at a very high speed to make a decision. Despite several advantages, the resource-constrained and heterogeneous nature of IoT networks makes them a favorite target for cybercriminals. A single successful attempt of network intrusion can compromise the complete IoT network, which can lead to unauthorized access to the valuable information of consumers and industries. To overcome the security challenges of IoT networks, this article proposes a lightweight deep autoencoder (DAE) based cyberattack detection framework. The proposed approach learns the normal and anomalous data patterns to identify the various types of network intrusions. The most significant feature of the proposed technique is its lower complexity, which is attained by reducing the number of operations. To optimally train the proposed DAE, a range of hyperparameters was determined through extensive experiments to ensure higher attack detection accuracy. The efficacy of the suggested framework is evaluated via two standard and open-source datasets. The proposed DAE achieved accuracies of 98.86% and 98.26% for NSL-KDD, and 99.32% and 98.79% for the UNSW-NB15 dataset, in the binary-class and multi-class scenarios, respectively. The performance of the suggested attack detection framework is also compared with several state-of-the-art intrusion detection schemes. Experimental outcomes proved the promising performance of the proposed scheme for cyberattack detection in IoT networks.
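The decision rule autoencoder-based detectors typically apply can be sketched without the network itself: pick a threshold from the reconstruction errors seen on benign traffic, then flag samples whose error exceeds it. The error values and the quantile below are illustrative assumptions, not the paper's trained DAE or its tuning.

```python
def pick_threshold(benign_errors, quantile=0.95):
    """Set the anomaly threshold at a high quantile of the reconstruction
    errors the autoencoder produces on benign traffic."""
    s = sorted(benign_errors)
    return s[min(int(quantile * len(s)), len(s) - 1)]

def classify(errors, threshold):
    """Samples whose reconstruction error exceeds the threshold are flagged
    as intrusions; benign traffic reconstructs well and falls below it."""
    return ["attack" if e > threshold else "normal" for e in errors]

benign = [0.01 * i for i in range(1, 101)]  # errors observed on normal data
thr = pick_threshold(benign)
labels = classify([0.10, 0.50, 2.30], thr)
```

Because the autoencoder is trained only to reconstruct normal patterns, anomalous traffic reconstructs poorly, and this simple threshold on the error is what turns the reconstruction into a detection decision.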
Abstract: Ever since its outbreak in Wuhan, COVID-19 has cloaked the entire world in a pall of despondency and uncertainty. The present study describes an exploratory analysis of all COVID-19 cases in Saudi Arabia. Besides, the study executes a forecasting model for predicting the possible number of COVID-19 cases in Saudi Arabia up to a defined period. Towards this intent, the study analyzed the different age groups of patients (child, adult, elderly) affected by COVID-19. The analysis was done city-wise and also included the number of recoveries recorded in different cities. Furthermore, the study also discusses the impact of COVID-19 on the economy. For conducting the stated analysis, the authors created a list of factors that are known to cause the spread of COVID-19. As an effective countermeasure to contain the spread of the coronavirus in Saudi Arabia, this study also proposes to identify the most effective computer science technique that can be used by healthcare professionals. For this, the study employs the Fuzzy Analytic Hierarchy Process integrated with the Technique for Order Preference by Similarity to Ideal Solution (F.AHP.TOPSIS). After prioritizing the various computer science techniques, the ranking order obtained for the different techniques/tools to contain COVID-19 was: A4 > A1 > A2 > A5 > A3. Since the blockchain technique obtained the highest priority, the study recommends that it be used extensively as an efficacious and accurate means to combat COVID-19.
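The TOPSIS half of the method can be sketched generically: vector-normalize and weight the decision matrix, locate the ideal and anti-ideal points, and rank alternatives by their closeness coefficient. The scores, weights, and criteria below are entirely hypothetical (the paper's fuzzy-AHP-derived weights and judgments are not given in the abstract), so this sketch is not expected to reproduce the A4 > A1 > A2 > A5 > A3 result:

```python
import math

def topsis(matrix, weights):
    """Rank alternatives by the classic TOPSIS closeness coefficient.
    matrix[i][j] is the score of alternative i on criterion j; all
    criteria are treated as benefit criteria for simplicity."""
    cols = list(zip(*matrix))
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    nm = [[v / n * w for v, n, w in zip(row, norms, weights)]
          for row in matrix]
    # Ideal (best) and anti-ideal (worst) reference points.
    best = [max(col) for col in zip(*nm)]
    worst = [min(col) for col in zip(*nm)]
    scores = []
    for row in nm:
        d_best = math.dist(row, best)    # distance to the ideal point
        d_worst = math.dist(row, worst)  # distance to the anti-ideal point
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical scores for five techniques (A1..A5) on three criteria.
matrix = [[7, 8, 6], [6, 5, 7], [5, 6, 5], [9, 9, 8], [6, 7, 6]]
scores = topsis(matrix, weights=[0.5, 0.3, 0.2])
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
print([f"A{i + 1}" for i in ranked])
```

In the full F.AHP.TOPSIS pipeline, the fuzzy AHP stage supplies the criterion weights from pairwise expert comparisons; here they are simply assumed.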
Funding: Funding for this study was received from the Ministry of Education and Deanship of Scientific Research at King Abdulaziz University, Kingdom of Saudi Arabia under Grant No. IFPHI-267-611-2020.
Abstract: Ever since its outbreak in the Wuhan city of China, the COVID-19 pandemic has engulfed more than 211 countries in the world, leaving a trail of unprecedented fatalities. Even more debilitating than the infection itself were the restrictions, such as lockdowns and quarantine measures, taken to contain the spread of the coronavirus. Such enforced alienation significantly affected both the mental and social condition of people. Social interactions and congregations are not only an integral part of work life but also form the basis of human development. However, COVID-19 brought all such communication to a grinding halt. Digital interactions have failed to enthuse the fervor that one enjoys in face-to-face meetings. The pandemic has shoved the entire planet into an unstable state. The main aim of the proposed study is to assess the impact of the pandemic on different aspects of society in Saudi Arabia. To achieve this objective, the study analyzes two perspectives: the early approach and the late approach to COVID-19, and the consequent effects on different aspects of society. We used a machine learning based framework for the prediction of the impact of COVID-19 on the key aspects of society. Findings of this research study indicate that financial resources were the worst affected. Several countries are facing economic upheavals due to the pandemic, and COVID-19 has had a considerable impact on the lives as well as the livelihoods of people. Yet the damage is not irretrievable, and the world's societies can emerge from this setback through concerted efforts in all facets of life.
Abstract: Routing protocols in Mobile Ad Hoc Networks (MANETs) operate with an Expanding Ring Search (ERS) mechanism to avoid flooding the network during the route discovery step. The ERS mechanism searches the network with incrementally increasing Time to Live (TTL) values prescribed by the respective routing protocol, saving both energy and time. This work exploits the relation between the TTL value of a packet, the traffic on a node, and the ERS mechanism for routing in MANETs, and achieves an Adaptive ERS based Per Hop Behavior (AERSPHB) scheme for request handling. Each search request is classified based on ERS attributes and then processed for routing while monitoring the node traffic. Two algorithms are designed and examined for performance under an exhaustive parametric setup and employed on adaptive premises to enhance the performance of the network. The network is tested under a congestion scenario based on buffer utilization at the node level and link utilization via the back-off stage of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Both link- and node-level congestion are handled through retransmission and rerouting of packets based on ERS parameters. The aim is to drop the packets that are exhausting the network energy while forwarding, with priority, the packets nearer to the destination. Extensive simulations are carried out for network scalability, node speed, and network terrain size. Our results show that the proposed models attain evident performance enhancement.
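The baseline ERS behavior that this work adapts can be sketched as the TTL schedule a source applies to successive route-request retries: grow the search ring by a fixed increment until a threshold, then fall back to a network-wide flood. The constants below follow the AODV defaults from RFC 3561; the paper's adaptive per-hop variant modifies this schedule based on node traffic, which the sketch does not attempt to model:

```python
# AODV default ERS parameters (per RFC 3561).
TTL_START = 1       # radius of the first search ring
TTL_INCREMENT = 2   # ring growth per unanswered retry
TTL_THRESHOLD = 7   # largest ring before giving up on rings
NET_DIAMETER = 35   # TTL for a final network-wide flood

def ers_ttl_sequence():
    """Yield the TTL used on each successive route-request attempt:
    expand the ring by TTL_INCREMENT until TTL_THRESHOLD is exceeded,
    then flood the whole network once with TTL = NET_DIAMETER."""
    ttl = TTL_START
    while ttl <= TTL_THRESHOLD:
        yield ttl
        ttl += TTL_INCREMENT
    yield NET_DIAMETER

print(list(ers_ttl_sequence()))  # [1, 3, 5, 7, 35]
```

The energy saving claimed for ERS is visible in this schedule: a destination two hops away is found by the second ring (TTL = 3) without ever disturbing nodes beyond that radius.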