Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of the request tasks from one MWE over a long-term period, it is vital to predeploy the particular service cachings required by those tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server hosting the service caching the tasks require. Furthermore, we propose a multiagent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS learns a near-optimal offloading and migrating decision-making policy by centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
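The abstract does not give the cost model; as a toy illustration of the kind of weighted delay-energy trade-off MBOMS minimizes, the sketch below compares local execution against offloading (all parameters — cycle counts, CPU frequencies, transmit rate and power, and the weight w — are invented for illustration, not taken from the paper):

```python
def weighted_cost(latency_s, energy_j, w=0.5):
    # Weighted sum of delay and energy; w is an assumed trade-off weight.
    return w * latency_s + (1 - w) * energy_j

def local_cost(cycles, cpu_hz, power_w, w=0.5):
    # Execute on the device: latency from local CPU, energy from local power draw.
    t = cycles / cpu_hz
    return weighted_cost(t, t * power_w, w)

def offload_cost(data_bits, cycles, tx_bps, tx_power_w, server_hz, w=0.5):
    # Offload to an edge server: pay uplink time/energy, then server execution time.
    t_up = data_bits / tx_bps        # uplink transmission time
    t_exec = cycles / server_hz      # execution on the (faster) edge server
    return weighted_cost(t_up + t_exec, t_up * tx_power_w, w)

# A device would offload when the offloading cost is lower.
c_local = local_cost(cycles=1e9, cpu_hz=1e9, power_w=2.0)
c_off = offload_cost(data_bits=4e6, cycles=1e9, tx_bps=20e6,
                     tx_power_w=0.5, server_hz=10e9)
```

With these made-up numbers the edge server's speed advantage dominates the uplink cost, so offloading wins; the paper's RL agents learn such decisions jointly across agents rather than by per-task comparison.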
Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals increasingly use ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California, Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra-trees classifiers, which excels at providing highly accurate predictions for CKD. Furthermore, a K-nearest neighbor (KNN) imputer is used to deal with missing values, while the synthetic minority oversampling technique (SMOTE) addresses class imbalance. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted against various machine learning models. The proposed TrioNet with the KNN imputer and SMOTE outperformed the other models with 98.97% accuracy for detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
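The paper applies KNN imputation before training; a rough stdlib-only sketch of the idea follows (toy data, not the UCI CKD dataset — in practice such a pipeline would use scikit-learn's `KNNImputer`):

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k nearest
    donor rows, where distance is measured on mutually observed features."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        return math.sqrt(sum((x - y) ** 2 for x, y in shared)) if shared else float("inf")

    filled = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is None:
                # Donors must have this feature observed.
                donors = [r for r in rows if r[j] is not None and r is not row]
                donors.sort(key=lambda r: dist(row, r))
                neigh = donors[:k]
                filled[i][j] = sum(r[j] for r in neigh) / len(neigh)
    return filled

data = [[1.0, 2.0], [1.2, None], [5.0, 8.0], [1.1, 2.2]]
imputed = knn_impute(data, k=2)
```

The missing value is filled from its two nearest rows ([1.1, 2.2] and [1.0, 2.0]), giving (2.2 + 2.0) / 2 = 2.1.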
Among all recent technology revolutions, the Internet of Things (IoT) has been considered the next evolution of the internet, and it has become a far more popular area in the computing world. IoT connects a huge number of things (devices) through the internet. Purpose: this paper aims to explore the concept of the IoT generally, outline its main definitions, and examine the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature across several databases to draw on recent studies related to the IoT, then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. Data were collected through a questionnaire distributed online among academic staff; a total of 150 participants completed the survey. Findings: the study reveals twelve factors that affect the potential benefits of using IoT, such as reducing human errors and increasing business income and workers' productivity. It also identifies eighteen factors that affect the obstacles to IoT use, for example sensor cost, data privacy, and data security. These factors have the most influence on the use of IoT in Saudi universities.
Health care is an important part of human life and a right for everyone: one of the most basic human rights is to receive health care whenever it is needed. However, this is simply not an option for everyone; because of the social conditions in which some communities live, not everyone has access to it. This paper aims to serve as a reference point and guide for users who are interested in monitoring their health, particularly their blood analysis, in an easy way. This study introduces an algorithmic approach for extracting and analyzing Complete Blood Count (CBC) parameters from scanned images. The algorithm employs Optical Character Recognition (OCR) technology to process images containing tabular data, specifically targeting CBC parameter tables. After image processing, the algorithm extracts the data and identifies each CBC parameter and its corresponding value. It evaluates the status (High, Low, or Normal) of each parameter and subsequently presents its evaluations along with any potential diagnoses. The primary objective is to automate the extraction and evaluation of CBC parameters, aiding healthcare professionals in swiftly assessing blood analysis results. The algorithmic framework aims to streamline the interpretation of CBC tests, potentially improving efficiency and accuracy in clinical diagnostics.
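The OCR stage is not reproduced here, but the downstream evaluation step — labeling each extracted parameter High, Low, or Normal against a reference range — can be sketched. The ranges below are illustrative placeholders, not clinical values:

```python
# Assumed reference ranges (illustrative only, not clinical guidance).
REFERENCE_RANGES = {
    "WBC": (4.0, 11.0),   # 10^9/L
    "HGB": (13.0, 17.0),  # g/dL
    "PLT": (150, 400),    # 10^9/L
}

def evaluate_cbc(values):
    """Classify each extracted CBC parameter as Low, Normal, or High."""
    report = {}
    for name, value in values.items():
        low, high = REFERENCE_RANGES[name]
        if value < low:
            report[name] = "Low"
        elif value > high:
            report[name] = "High"
        else:
            report[name] = "Normal"
    return report

# Values as they might arrive from the OCR/parsing stage.
report = evaluate_cbc({"WBC": 12.5, "HGB": 14.2, "PLT": 120})
```

Keeping the ranges in a table-like mapping mirrors the tabular source data and makes it easy to extend the parameter set the OCR stage recognizes.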
With the development of virtual reality (VR) technology, more and more industries are integrating with VR. To address the problem that the lighting effect of Caideng (festival lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting characteristics of Caideng scenes to design an optimized lighting algorithm that incorporates a bidirectional transmittance distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment, and image optimization methods are used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, remaining above 60 fps, and that the system delivers a good immersive experience.
The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance production and lower the risk of disease. In this era of rapid globalization, the use of information technology has increased, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI), which is being adopted across medical and plant sciences to improve detection accuracy while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. In this paper, adopting a Deep Convolutional Neural Network (Deep CNN), a class of deep-learning neural networks widely used for image recognition, increased the effectiveness of the proposed method. A dataset of images covering three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy, comparing favorably with similar state-of-the-art techniques.
In various fields, the networks used are most often not of a single kind but rather a mix of at least two networks. Such networks are called bridge networks, and they appear in computer interconnection networks, mobile networks, the internet backbone, robotics, power generation interconnection, bioinformatics, and chemical compound structures. Any number that can be uniquely calculated from a graph is called a graph invariant. Countless mathematical graph invariants have been described and used for correlation analysis during the last twenty years; nevertheless, no reliable evaluation has been adopted to decide how strongly these invariants are associated with a network graph or molecular graph. This paper discusses three distinct varieties of bridge networks with great predictive capacity in computer science, chemistry, physics, the drug industry, informatics, and mathematics, in the context of physical and chemical structures and networks, since contraharmonic-quadratic invariants (CQIs) were recently introduced and take different values for different varieties of bridge graphs. The study settles the topology of three novel sorts of bridge graphs/networks with two kinds of CQIs and quadratic-contraharmonic indices (QCIs). The deduced results can be used for the modeling of the above-mentioned networks.
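The abstract does not define the CQI formula; one plausible degree-based reading, shown purely for illustration, sums over every edge the ratio of the contraharmonic mean to the quadratic mean of the endpoint degrees:

```python
import math

def degrees(edges):
    """Degree of each vertex in an undirected edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def contraharmonic(a, b):
    # Contraharmonic mean: (a^2 + b^2) / (a + b).
    return (a * a + b * b) / (a + b)

def quadratic(a, b):
    # Quadratic mean (RMS): sqrt((a^2 + b^2) / 2).
    return math.sqrt((a * a + b * b) / 2)

def cqi(edges):
    """Assumed edge-wise index: sum of C(du, dv) / Q(du, dv) over edges uv."""
    deg = degrees(edges)
    return sum(contraharmonic(deg[u], deg[v]) / quadratic(deg[u], deg[v])
               for u, v in edges)

# A path on 3 vertices: endpoint degrees are (1, 2) on both edges.
value = cqi([("a", "b"), ("b", "c")])
```

Whatever the paper's exact definition, such degree-based indices are computed edge by edge in exactly this pattern, so only the two mean functions would need to change.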
Autism spectrum disorder (ASD), classified as a developmental disability, is now more common in children than ever. A drastic worldwide increase in the rate of ASD in children demands its early detection. Parents can seek professional help for a better prognosis of the child's therapy when ASD is diagnosed before the age of five. This research study aims to develop an automated tool for diagnosing autism in children. The computer-aided diagnosis tool for ASD detection is designed and developed through a novel methodology that includes data acquisition, feature selection, and classification phases. The most deterministic features are selected from the self-acquired dataset by novel feature selection methods before classification. The imperialist competitive algorithm (ICA), inspired by empires conquering colonies, performs feature selection in this study. The performance of logistic regression (LR), decision tree, K-nearest neighbor (KNN), and random forest (RF) classifiers is studied experimentally in this work. The experimental results show that the logistic regression classifier exhibits the highest accuracy on the self-acquired dataset. ASD detection is also evaluated experimentally with the least absolute shrinkage and selection operator (LASSO) feature selection method and different classifiers. The exploratory data analysis (EDA) phase uncovered crucial facts about the data, such as the correlation of the dataset's features with the class variable.
In this paper, the Internet of Medical Things (IoMT) is identified as a promising solution that integrates with the cloud computing environment to provide remote health monitoring and improve quality of service (QoS) in the healthcare sector. However, problems with present architectural models, such as those related to energy consumption, service latency, execution cost, and resource usage, remain a major concern for adopting IoMT applications. To address these problems, this work presents a four-tier IoMT-edge-fog-cloud architecture along with an optimization model formulated using Mixed Integer Linear Programming (MILP), with the objective of efficiently processing and placing IoMT applications in the edge-fog-cloud computing environment while maintaining certain quality standards (e.g., energy consumption, service latency, network utilization). A modeling environment is used to assess and validate the proposed model under different traffic loads and processing requirements. In comparison with existing models, the performance analysis of the proposed approach shows a maximum saving of 38% in energy consumption and a 73% reduction in service latency. The results also highlight that offloading an IoMT application to the edge and fog nodes rather than the cloud is highly dependent on the trade-off between the network journey time saved and the extra power consumed by edge or fog resources.
Diagnosing gastrointestinal cancer by classical means is a hazardous procedure. Recent years have witnessed several computerized solutions for stomach disease detection and classification. However, existing techniques face challenges such as irrelevant feature extraction, high similarity among different disease symptoms, and reliance on the least-important features from a single source. This paper designs a new deep learning-based architecture based on the fusion of two models, residual blocks and an autoencoder. The Hyper-Kvasir dataset, which consists of 23 stomach-infected classes, was employed to evaluate the proposed work. The research selected a pre-trained convolutional neural network (CNN) model and improved it with several residual blocks; this aims to improve the learning capability of deep models and lessen the number of parameters. Besides, this article designs an autoencoder-based network consisting of five convolutional layers in the encoder stage and five in the decoder phase. Global average pooling and convolutional layers were selected for feature extraction, optimized by a hybrid Marine Predators optimization and Slime Mould optimization algorithm. The features of both models are fused using a novel fusion technique and the result is classified by an artificial neural network classifier. The proposed method obtained an improved accuracy of 93.90% on this dataset, and a comparison with recent techniques shows that it improves on their accuracy.
Every application in a smart city environment, like the smart grid, health monitoring, security, and surveillance, generates non-stationary data streams. Because of this, the statistical properties of the data change over time, leading to class imbalance and concept drift, both of which degrade model performance. Most current work has focused on developing an ensemble strategy that trains a new classifier on the latest data to resolve the issue. These techniques suffer while training the new classifier if the data is imbalanced. Also, the class imbalance ratio may change greatly from one input stream to another, making the problem more complex. Existing solutions for the combined issue of class imbalance and concept drift lack an understanding of how one problem correlates with the other. This work studies the association between concept drift and the class imbalance ratio and then demonstrates how changes in the class imbalance ratio together with concept drift affect classifier performance. We analyzed the effect of both issues on the minority and majority classes individually. To do this, we conducted experiments on benchmark datasets using state-of-the-art classifiers specially designed for data stream classification. Precision, recall, F1 score, and geometric mean were used to measure performance. Our findings show that when the class imbalance and concept drift problems occur together, performance can decrease by up to 15%. Our results also show that an increase in the imbalance ratio can cause a 10% to 15% decrease in the precision scores of both the minority and majority classes. These findings may help in designing intelligent and adaptive solutions that can cope with the challenges of non-stationary data streams such as concept drift and class imbalance.
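A minimal sketch of the central quantity under study — the class imbalance ratio of a stream window, plus a naive check for a shift in that ratio between consecutive windows (the window contents and threshold are invented for illustration; the paper's experimental setup is not reproduced):

```python
def imbalance_ratio(labels):
    """Minority/majority count ratio in one window of a binary stream.
    1.0 means perfectly balanced; values near 0 mean heavy imbalance."""
    pos = sum(labels)
    neg = len(labels) - pos
    minority, majority = min(pos, neg), max(pos, neg)
    return minority / majority if majority else 0.0

def ratio_shift(window_a, window_b, threshold=0.2):
    """Flag a change in class-imbalance ratio between two stream windows."""
    return abs(imbalance_ratio(window_a) - imbalance_ratio(window_b)) > threshold

w1 = [1, 0, 1, 0, 1, 0, 1, 0]   # balanced window: ratio 1.0
w2 = [1, 0, 0, 0, 0, 0, 0, 0]   # skewed window: ratio 1/7
shifted = ratio_shift(w1, w2)
```

Monitoring this ratio alongside a drift detector is one way an adaptive ensemble could decide when its newest classifier needs imbalance-aware retraining.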
In this paper, we propose SSE-Ship, a SAR image ship detection model that uses image context to extend the detection field of view and enhance extracted feature information. The method aims to solve the problem of low detection rates in SAR images containing combined-ship and ship-fusion scenes. First, we propose the STCSPB network, which combines image contextual feature information to distinguish ship from non-ship objects, addressing the fusion of ship and non-ship objects. Second, we incorporate SE attention to enhance effective feature information and improve detection accuracy in combined-ship scenes. Finally, we conducted extensive experiments on two standard benchmark datasets, SAR-Ship and SSDD, to verify the effectiveness and stability of the proposed method. The experimental results show that SSE-Ship achieves P = 0.950, R = 0.946, mAP_0.5:0.95 = 0.656, and FPS = 50 on the SAR-Ship dataset, and mAP_0.5 = 0.964 and R = 0.940 on the SSDD dataset.
Security risk assessment refers to the process of identifying, analyzing, and evaluating potential security risks for an organization. As part of a comprehensive security program, it protects an organization's assets, personnel, and operations. Various security assessment models have been published in the literature to protect Saudi organizations' assets, personnel, and operations. However, these models are redundant and were developed for specific purposes; a comprehensive security risk assessment model for safeguarding Saudi organizations' assets, personnel, and operations is still missing. Using a design science methodology, this study develops a comprehensive security risk assessment model called CSRAM to assess security risks in Saudi Arabian organizations, based on the International Organization for Standardization and International Electrotechnical Commission information security risk management standard (ISO/IEC 27005 ISRM). CSRAM comprises six stages: threat identification, vulnerability assessment, risk analysis, risk evaluation, risk treatment, and risk monitoring and review, with many activities and tasks to be accomplished at each stage. Validation of its completeness shows that CSRAM covers the whole ISO/IEC 27005 ISRM standard and is complete.
The carbon trading market can promote "carbon peaking" and "carbon neutrality" at low cost, but carbon emission quotas face attacks such as data forgery, tampering, counterfeiting, and replay in the electricity trading market. Certificateless signatures are a cryptographic technology that removes traditional public-key cryptography's certificate requirements while avoiding the key escrow problem of identity-based cryptography. However, most certificateless signatures still suffer from various security flaws. By examining the security of existing certificateless signature schemes, we present a secure and efficient certificateless signing scheme. To ensure the integrity and verifiability of electricity carbon quota trading, we propose an electricity carbon quota trading scheme based on a certificateless signature and blockchain. Our scheme utilizes certificateless signatures to ensure the validity and non-repudiation of transactions and adopts blockchain technology to achieve immutability and traceability of electricity carbon quota transactions. In addition, validating electricity carbon quota transactions does not require time-consuming bilinear pairing operations. The analysis indicates that our scheme achieves existential unforgeability under adaptive chosen-message attacks, offers conditional identity privacy protection, resists replay attacks, and demonstrates high computation and communication performance.
The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, while the adaptive parameter strategy removes the need for manual parameter setting. By amalgamating the hybrid strategy and the parameter-adaptive approach, the Gannet Optimization Algorithm was refined into the AIWGOA. In a performance analysis on the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. Subsequently, an evaluation index was employed to assess the enhanced handwritten documents and images, affirming the superior practical applicability of the AIWGOA compared with other algorithms.
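The abstract does not spell out the AIWGOA's inertia weight schedule; a common adaptive choice in such hybrids, shown here purely as an assumption, decreases the weight linearly over the run so the search explores early and exploits late:

```python
def adaptive_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight over iterations t = 0..t_max.
    (An illustrative schedule; the AIWGOA's exact formula is not given
    in the abstract, and w_max/w_min here are conventional defaults.)"""
    return w_max - (w_max - w_min) * t / t_max

# Weight at the start, midpoint, and end of a 100-iteration run.
weights = [adaptive_inertia(t, 100) for t in (0, 50, 100)]
```

A large early weight keeps candidate solutions moving across the search space, while the small final weight damps movement so the population settles into the best basin found.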
In this paper, an improved Fast-R-CNN image recognition algorithm for nuclear power cold source disaster-causing organisms is proposed to improve the operational safety of nuclear power plants. First, image datasets of the disaster-causing organisms (hairy shrimp and jellyfish) were established. Then, to solve the problems of low recognition accuracy and unrecognizable small entities, a gamma correction algorithm was used to optimize the images in the dataset, improving image quality and reducing noise interference, and transposed convolution was introduced into the convolution layers to increase the recognition accuracy of small targets. The experimental results show that the recognition rate of this algorithm is 6.75%, 7.5%, 9.8%, and 9.03% higher than that of ResNet-50, MobileNetv1, GoogleNet, and VGG16, respectively. Practical tests show that the accuracy of this algorithm is clearly better than that of the other algorithms and its recognition efficiency is higher, essentially meeting the requirements set out in this paper.
Studying the topology of infrastructure communication networks (e.g., the Internet) has become a means to understand and develop complex systems. Investigating the evolution of Internet network topology might therefore elucidate the disciplines governing the dynamic processes of complex systems, and it may contribute to a more intelligent communication network framework based on autonomous behavior. In this paper, the Internet Autonomous Systems (ASes) topology from 1998 to 2013 was studied by deconstructing and analysing topological entities on three different scales (nodes, edges, and three network components: the single-edge component M1, the binary component M2, and the triangle component M3). The results indicate that: a) 95% of the Internet's edges are internal edges (as opposed to external and boundary edges); b) the Internet network consists mainly of internal components, particularly M2 internal components; c) in most cases, a node initially connects with multiple nodes to form an M2 component to take part in the network; and d) the Internet network evolves toward lower entropy. Furthermore, we find that, as a complex system, the evolution of the Internet exhibits a behavioral series similar to biological phenomena such as metabolism and replication. To the best of our knowledge, this is the first study of the evolution of the Internet network through analysis of the dynamic features of its nodes, edges, and components, and our study therefore represents an innovative approach to the subject.
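The entropy measure behind finding d) is not specified in this summary; a standard proxy, sketched here as an assumption, is the Shannon entropy of the node-degree distribution — lower entropy means degrees concentrate on fewer distinct values, as when hubs dominate:

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of the node-degree distribution of an
    undirected graph given as an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())   # degree value -> number of nodes
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A star concentrates degrees into two values (one hub, four leaves),
# giving lower entropy than a path on the same five vertices.
star = [("h", x) for x in "abcd"]
path = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
```

On these toy graphs the star's entropy (about 0.72 bits) is below the path's (about 0.97 bits), matching the intuition that hub-dominated topologies are more ordered.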
Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been invented in the past twenty years; however, most of them have difficulty handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. The formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of a discovered MBPM against the input event log by transforming the MBPM into a classical Petri net, so that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
An automated system is proposed for the detection and classification of gastrointestinal (GI) abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and all channels are then merged through mutual information and pixel-based techniques to produce the segmented image. Texture and deep learning features are extracted for the classification task, with a transfer learning (TL) approach used for the deep features and the local binary pattern (LBP) method used for the texture features. An entropy-based feature selection approach is then applied to select the best features of both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets, a private dataset and KVASIR, achieving accuracies of 99.8% on the private dataset and 86.4% on KVASIR. This confirms that the proposed method is effective in detecting and classifying GI abnormalities and outperforms comparable methods.
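The paper's entropy-based selection is not specified in detail; a minimal sketch of the idea — rank feature columns by Shannon entropy, keep the top k, and serially fuse (concatenate) the survivors — is shown below on invented toy data:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete feature column."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def select_by_entropy(feature_vectors, k):
    """Keep the k feature columns with the highest entropy, then serially
    fuse (concatenate) the surviving columns per sample."""
    columns = list(zip(*feature_vectors))
    ranked = sorted(range(len(columns)),
                    key=lambda j: entropy(columns[j]), reverse=True)
    keep = sorted(ranked[:k])
    fused = [[row[j] for j in keep] for row in feature_vectors]
    return keep, fused

# Column 0 is constant (zero entropy, no discriminative value) and is dropped.
samples = [[7, 0, 1], [7, 1, 0], [7, 0, 0], [7, 1, 1]]
kept, fused = select_by_entropy(samples, k=2)
```

In the real pipeline the columns would be LBP texture values and TL deep features rather than toy integers, but the ranking and serial fusion steps follow the same pattern.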
Funding: supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).
Funding: funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2024R333, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
文摘Chronic kidney disease(CKD)is a major health concern today,requiring early and accurate diagnosis.Machine learning has emerged as a powerful tool for disease detection,and medical professionals are increasingly using ML classifier algorithms to identify CKD early.This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California,UC Irvine Machine Learning repository.The research introduces TrioNet,an ensemble model combining extreme gradient boosting,random forest,and extra tree classifier,which excels in providing highly accurate predictions for CKD.Furthermore,K nearest neighbor(KNN)imputer is utilized to deal withmissing values while synthetic minority oversampling(SMOTE)is used for class-imbalance problems.To ascertain the efficacy of the proposed model,a comprehensive comparative analysis is conducted with various machine learning models.The proposed TrioNet using KNN imputer and SMOTE outperformed other models with 98.97%accuracy for detectingCKD.This in-depth analysis demonstrates the model’s capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
Abstract: Among all technology revolutions, the Internet of Things (IoT) has been considered the next evolution of the internet, and it has become a far more popular area in the computing world. IoT connects a huge number of things (devices) through the internet. Purpose: this paper aims to explore the concept of the Internet of Things (IoT) in general, outline the main definitions of IoT, and examine and discuss the obstacles to and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature, drawing on several databases for recent studies and research related to IoT. They then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected using a questionnaire distributed online among academic staff; a total of 150 participants completed the survey. Findings: the results of this study reveal twelve factors that affect the potential benefits of using IoT, such as reducing human error and increasing business income and worker productivity. They also show eighteen factors that constitute obstacles to IoT use, for example sensor cost, data privacy, and data security. These factors have the most influence on the use of IoT in Saudi universities.
Abstract: Health care is an important part of human life and a right for everyone; one of the most basic human rights is to receive health care whenever it is needed. However, owing to the social conditions in which some communities live, this is simply not an option for everyone. This paper aims to serve as a reference point and guide for users who are interested in monitoring their health, particularly their blood analysis, so they can easily stay aware of their health condition. This study introduces an algorithmic approach for extracting and analyzing Complete Blood Count (CBC) parameters from scanned images. The algorithm employs Optical Character Recognition (OCR) technology to process images containing tabular data, specifically targeting CBC parameter tables. After image processing, the algorithm extracts the data and identifies the CBC parameters and their corresponding values. It evaluates the status (High, Low, or Normal) of each parameter and subsequently presents the evaluations and any potential diagnoses. The primary objective is to automate the extraction and evaluation of CBC parameters, aiding healthcare professionals in swiftly assessing blood analysis results. The algorithmic framework aims to streamline the interpretation of CBC tests, potentially improving efficiency and accuracy in clinical diagnostics.
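The High/Low/Normal evaluation step described above reduces to comparing each OCR-extracted value against a reference range. A minimal sketch follows; the reference ranges here are illustrative placeholders only, since real CBC ranges vary by laboratory, age, and sex.

```python
# Hypothetical reference ranges; real CBC ranges vary by lab, age, and sex.
REFERENCE_RANGES = {
    "WBC": (4.0, 11.0),    # 10^9/L
    "HGB": (12.0, 16.0),   # g/dL
    "PLT": (150.0, 400.0), # 10^9/L
}

def evaluate_cbc(values):
    """Map each extracted CBC parameter to High / Low / Normal."""
    report = {}
    for name, value in values.items():
        low, high = REFERENCE_RANGES[name]
        if value < low:
            report[name] = "Low"
        elif value > high:
            report[name] = "High"
        else:
            report[name] = "Normal"
    return report

print(evaluate_cbc({"WBC": 13.2, "HGB": 10.5, "PLT": 250.0}))
# → {'WBC': 'High', 'HGB': 'Low', 'PLT': 'Normal'}
```

In the full pipeline, the `values` dict would be populated by the OCR stage from the scanned table rather than typed in by hand.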
Abstract: With the development of virtual reality (VR) technology, more and more industries are beginning to integrate VR. To address the problem that the lighting effects of Caideng (festive lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes lighting models and combines them with the lighting characteristics of Caideng scenes to design an optimized lighting algorithm that incorporates the bidirectional transmittance distribution function (BTDF) model. This algorithm can efficiently render the lighting effects of Caideng models in a virtual environment, and image optimization methods further enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, staying above 60 fps, and that the system provides a good immersive experience.
Funding: Funded by the University of Haripur, KP, Pakistan, Researchers Supporting Project number (PKURFL2324L33).
Abstract: The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance production and lower the risk of disease. In this era of rapid globalization, information technology use has increased, and a sensing system is essential for detecting rice diseases using Artificial Intelligence (AI). AI is being adopted across the medical and plant sciences to improve the accuracy of detection while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. This paper adopts a Deep Convolutional Neural Network (Deep CNN), a class of deep-learning neural networks widely used for image recognition, to increase the effectiveness of the proposed method. A dataset of images covering three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Funding: The University of Jeddah, Jeddah, Saudi Arabia, under Grant No. (UJ-22-DR-14).
Abstract: Many fields rely on networks that are rarely of a single kind but rather a mix of at least two. Such mixed networks, called bridge networks, are used in computer interconnection networks, mobile networks, the internet backbone, robotics, power-generation interconnection, bioinformatics, and chemical compound structures. Any number that can be computed entirely from a graph is called a graph invariant. Countless graph invariants have been described and used for correlation analysis over the last twenty years; nevertheless, no reliable evaluation has been undertaken to decide how strongly these invariants are associated with a network graph or molecular graph. This paper discusses three distinct varieties of bridge networks with great predictive potential in computer science, chemistry, physics, the drug industry, informatics, and mathematics, in the context of physical and chemical structures and networks, since Contraharmonic-quadratic invariants (CQIs) have recently been introduced and take different values for different varieties of bridge graphs and networks. The study settles the topology of three novel sorts of bridge graphs/networks using two kinds of CQIs and Quadratic-Contraharmonic Indices (QCIs). The derived results can be used for modeling the above-mentioned networks.
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number (IF2-PSAU-2022/01/22043).
Abstract: Autism spectrum disorder (ASD), classified as a developmental disability, is now more common in children than ever. The drastic worldwide increase in the rate of autism spectrum disorder in children demands early detection of autism. When ASD is diagnosed before the age of five, parents can seek professional help for a better prognosis of the child's therapy. This research study aims to develop an automated tool for diagnosing autism in children. The computer-aided diagnosis tool for ASD detection is designed and developed through a novel methodology that includes data acquisition, feature selection, and classification phases. The most deterministic features are selected from the self-acquired dataset by novel feature selection methods before classification. The Imperialistic Competitive Algorithm (ICA), inspired by empires conquering colonies, performs feature selection in this study. The performance of Logistic Regression (LR), decision tree, K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers is studied experimentally in this research work. The experimental results prove that the logistic regression classifier exhibits the highest accuracy on the self-acquired dataset. ASD detection is also evaluated experimentally with the Least Absolute Shrinkage and Selection Operator (LASSO) feature selection method and different classifiers. The Exploratory Data Analysis (EDA) phase uncovered crucial facts about the data, such as the correlation of the features in the dataset with the class variable.
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number (442/204).
Abstract: In this paper, the Internet of Medical Things (IoMT) is identified as a promising solution that integrates with the cloud computing environment to provide remote health monitoring and improve the quality of service (QoS) in the healthcare sector. However, problems with present architectural models, such as those related to energy consumption, service latency, execution cost, and resource usage, remain a major concern for adopting IoMT applications. To address these problems, this work presents a four-tier IoMT-edge-fog-cloud architecture along with an optimization model formulated using Mixed Integer Linear Programming (MILP), with the objective of efficiently processing and placing IoMT applications in the edge-fog-cloud computing environment while maintaining certain quality standards (e.g., energy consumption, service latency, network utilization). A modeling environment is used to assess and validate the proposed model under different traffic loads and processing requirements. In comparison with other existing models, the performance analysis of the proposed approach shows a maximum saving of 38% in energy consumption and a 73% reduction in service latency. The results also highlight that offloading an IoMT application to the edge and fog nodes rather than the cloud depends strongly on the tradeoff between the network journey time saved and the extra power consumed by edge or fog resources.
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090), and by Supporting Project Number (PNURSP2023R387), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Diagnosing gastrointestinal cancer by classical means is a hazardous procedure. Recent years have witnessed several computerized solutions for stomach disease detection and classification. However, existing techniques face challenges such as irrelevant feature extraction, high similarity among the symptoms of different diseases, and reliance on the least-important features from a single source. This paper designs a new deep learning-based architecture based on the fusion of two models, residual blocks and an autoencoder. The HyperKvasir dataset, which consists of 23 stomach-infection classes, was employed to evaluate the proposed work. The research selected a pre-trained convolutional neural network (CNN) model and improved it with several residual blocks, aiming to improve the learning capability of deep models and reduce the number of parameters. In addition, this article designs an autoencoder-based network consisting of five convolutional layers in the encoder stage and five in the decoder phase. Global average pooling and convolutional layers were selected for feature extraction, optimized by a hybrid Marine Predator optimization and Slime Mould optimization algorithm. The features of both models are fused using a novel fusion technique and then classified using an Artificial Neural Network classifier. The proposed method obtained an improved accuracy of 93.90% on this dataset. A comparison with some recent techniques shows that the proposed method improves on their accuracy.
Funding: The authors would like to extend their gratitude to Universiti Teknologi PETRONAS (Malaysia) for funding this research through grant number (015LA0-037).
Abstract: Every application in a smart city environment, such as the smart grid, health monitoring, security, and surveillance, generates non-stationary data streams. Because of this, the statistical properties of the data change over time, leading to class imbalance and concept drift issues, both of which degrade model performance. Most current work has focused on developing an ensemble strategy that trains a new classifier on the latest data to resolve the issue. These techniques suffer while training the new classifier if the data is imbalanced. Also, the class imbalance ratio may change greatly from one input stream to another, making the problem more complex. Existing solutions for the combined issue of class imbalance and concept drift lack an understanding of how the two problems correlate. This work studies the association between concept drift and the class imbalance ratio and then demonstrates how changes in the class imbalance ratio together with concept drift affect classifier performance. We analyzed the effect of both issues on the minority and majority classes individually. To do this, we conducted experiments on benchmark datasets using state-of-the-art classifiers especially designed for data stream classification. Precision, recall, F1 score, and geometric mean were used to measure performance. Our findings show that when the class imbalance and concept drift problems occur together, performance can decrease by up to 15%. Our results also show that an increase in the imbalance ratio can cause a 10% to 15% decrease in the precision scores of both the minority and majority classes. These findings may help in designing intelligent and adaptive solutions that can cope with the challenges of non-stationary data streams such as concept drift and class imbalance.
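The changing class imbalance ratio the abstract above studies can be made concrete with a sliding window over a label stream. The sketch below is a minimal stdlib illustration, not the paper's experimental setup; the window size and the example stream are hypothetical.

```python
from collections import Counter, deque

def imbalance_ratio(window):
    """Majority-to-minority class ratio inside one window of labels."""
    counts = Counter(window)
    return max(counts.values()) / min(counts.values())

def track_imbalance(stream, window_size=6):
    """Report the imbalance ratio per sliding window of a label stream."""
    window, ratios = deque(maxlen=window_size), []
    for label in stream:
        window.append(label)
        if len(window) == window_size and len(set(window)) > 1:
            ratios.append(imbalance_ratio(window))
    return ratios

# A stream whose minority class thins out over time (drifting imbalance).
stream = [0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(track_imbalance(stream))  # → [1.0, 1.0, 2.0, 1.0, 2.0, 2.0, 5.0]
```

A rising ratio across windows, as at the end of this stream, is exactly the condition under which the paper reports precision dropping by 10-15%.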
Abstract: In this paper, we propose SSE-Ship, a SAR image ship detection model that incorporates image context to extend the detection field of view and effectively enhance extracted feature information. The method aims to solve the problem of low detection rates in SAR images containing ship-combination and ship-fusion scenes. First, we propose the STCSPB network, which combines contextual image feature information to distinguish ship from non-ship objects and thereby addresses ship/non-ship object fusion. Second, we incorporate SE attention to enhance the effective feature information and improve detection accuracy in combined ship scenes. Finally, we conducted extensive experiments on two standard baseline datasets, SAR-Ship and SSDD, to verify the effectiveness and stability of the proposed method. The experimental results show that SSE-Ship achieves P = 0.950, R = 0.946, mAP_0.5:0.95 = 0.656, and FPS = 50 on the SAR-Ship dataset, and mAP_0.5 = 0.964 and R = 0.940 on the SSDD dataset.
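The P and R figures reported above follow the standard detection definitions. As a reminder of how such numbers arise, here is a minimal sketch; the TP/FP/FN counts below are illustrative values chosen to reproduce metrics of the reported magnitude, not the paper's actual counts.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: 950 correct ship boxes, 50 false alarms, 54 missed ships.
p, r = precision_recall(tp=950, fp=50, fn=54)
print(round(p, 3), round(r, 3))  # → 0.95 0.946
```

mAP_0.5:0.95 then averages the area under this precision-recall tradeoff over IoU thresholds from 0.5 to 0.95.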
Abstract: Security risk assessment refers to the process of identifying, analyzing, and evaluating potential security risks for an organization. It protects an organization's assets, personnel, and operations as part of a comprehensive security program. Various security assessment models have been published in the literature to protect Saudi organizations' assets, personnel, and operations. However, these models are redundant and were developed for specific purposes; a comprehensive security risk assessment model for safeguarding Saudi organizations' assets, personnel, and operations is still missing. Using a design science methodology, the objective of this study is to develop a comprehensive security risk assessment model, called CSRAM, to assess security risks in Saudi Arabian organizations based on the International Organization for Standardization and International Electrotechnical Commission information security risk management standard (ISO/IEC 27005 ISRM). CSRAM is made up of six stages: threat identification, vulnerability assessment, risk analysis, risk evaluation, risk treatment, and risk monitoring and review. Each stage comprises many activities and tasks that need to be accomplished. Based on the results of the validation of its completeness, we can say that CSRAM covers the whole ISO/IEC 27005 ISRM standard and is complete.
Funding: The National Fund Project (No. 62172337), the National Natural Science Foundation of China (No. 61662069), and the China Postdoctoral Science Foundation (No. 2017M610817).
Abstract: The carbon trading market can promote "carbon peaking" and "carbon neutrality" at low cost, but carbon emission quotas face attacks such as data forgery, tampering, counterfeiting, and replay in the electricity trading market. Certificateless signatures are a cryptographic technology that avoids both the certificate-management requirements of traditional public-key cryptography and the key escrow problem of identity-based cryptography. However, most existing certificateless signature schemes still suffer from various security flaws. After examining the security of existing certificateless signature schemes, we present a secure and efficient certificateless signing scheme. To ensure the integrity and verifiability of electricity carbon quota trading, we propose an electricity carbon quota trading scheme based on a certificateless signature and blockchain. Our scheme uses certificateless signatures to ensure the validity and non-repudiation of transactions and adopts blockchain technology to achieve immutability and traceability of electricity carbon quota transactions. In addition, validating electricity carbon quota transactions does not require time-consuming bilinear pairing operations. The analysis indicates that our scheme achieves existential unforgeability under adaptive chosen-message attacks, offers conditional identity privacy protection, resists replay attacks, and demonstrates high computational and communication performance.
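The immutability and traceability properties the abstract above attributes to the blockchain layer come from hash-chaining transaction records. The sketch below illustrates only that property with SHA-256 from the standard library; it does not implement the paper's certificateless signature scheme, and the transaction fields are hypothetical.

```python
import hashlib
import json

def block_hash(content):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def append_tx(chain, tx):
    """Append a quota transaction, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"tx": tx, "prev": prev, "hash": block_hash({"tx": tx, "prev": prev})}
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every link; tampering with any block breaks all later links."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev or block["hash"] != block_hash({"tx": block["tx"], "prev": block["prev"]}):
            return False
    return True

chain = []
append_tx(chain, {"seller": "plantA", "buyer": "plantB", "quota_t": 120})
append_tx(chain, {"seller": "plantB", "buyer": "plantC", "quota_t": 40})
print(verify(chain))             # True
chain[0]["tx"]["quota_t"] = 999  # forge the traded amount
print(verify(chain))             # False
```

In the full scheme, each `tx` would additionally carry a certificateless signature so that forgery is detected even before a block is chained.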
Abstract: The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, named the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, whereas the adaptive parameter strategy removes the need for manual parameter setting. By amalgamating the hybrid strategy and the parameter-adaptive approach, the Gannet Optimization Algorithm was refined to yield the AIWGOA. In a performance analysis on the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. Subsequently, an evaluation index was employed to assess the enhanced handwritten documents and images, affirming the superior practical applicability of the AIWGOA compared with other algorithms.
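The abstract above does not publish the AIWGOA's exact inertia-weight schedule; a common form of adaptive inertia weight in WOA/PSO-style algorithms is a linear decay from a large exploratory value to a small exploitative one, sketched below under that assumption (the bounds 0.9 and 0.4 are conventional choices, not taken from the paper).

```python
def adaptive_inertia(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decay the inertia weight from w_max to w_min so the search
    shifts from exploration (early) to exploitation (late)."""
    return w_max - (w_max - w_min) * iteration / max_iter

weights = [round(adaptive_inertia(t, 100), 3) for t in (0, 50, 100)]
print(weights)  # → [0.9, 0.65, 0.4]
```

At each iteration the current weight would scale the whale/gannet position-update term, so early iterations roam the search space and late iterations refine the best solution found.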
Abstract: In this paper, an improved Fast R-CNN algorithm for recognizing images of disaster-causing organisms at nuclear power plant cold sources is proposed to improve the operational safety of nuclear power plants. First, image datasets of the disaster-causing organisms (hairy shrimp and jellyfish) were established. Then, to address the low recognition accuracy and unrecognizable small entities in disaster-organism recognition, the Gamma correction algorithm was used to optimize the dataset images, improving image quality and reducing noise interference. Transposed convolution is introduced into the convolutional layers to increase recognition accuracy for small targets. The experimental results show that the recognition rate of this algorithm is 6.75%, 7.5%, 9.8%, and 9.03% higher than that of ResNet-50, MobileNetv1, GoogleNet, and VGG16, respectively. Actual tests show that the accuracy of this algorithm is clearly better than that of the other algorithms, its recognition efficiency is higher, and it essentially meets the requirements set out in this paper.
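The Gamma-correction preprocessing step mentioned above is the standard power-law transform on pixel intensities. A minimal stdlib sketch on one row of 8-bit grey levels follows; the gamma value 0.5 is illustrative (a value below 1 brightens the dark underwater regions where small shrimp would otherwise be lost).

```python
def gamma_correct(pixels, gamma=0.5):
    """Apply the power-law transform I_out = 255 * (I_in / 255) ** gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens bright ones."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

row = [0, 16, 64, 128, 255]  # one row of 8-bit grey levels
print(gamma_correct(row, gamma=0.5))
```

In practice the transform is applied per channel to every pixel of each dataset image (typically via a 256-entry lookup table) before the images are fed to the network.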
Funding: This work was supported by the National Key Research and Development Program of China (No. 2016YFC0801406), the Shandong Key Research and Development Program (Nos. 2016ZDJS02A05 and 2018GGX109013), and the Shandong Provincial Natural Science Foundation (No. ZR2018MEE008).
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61671142).
Abstract: Studying the topology of infrastructure communication networks (e.g., the Internet) has become a means to understand and develop complex systems. Investigating the evolution of Internet network topology might therefore elucidate the principles governing the dynamic processes of complex systems, and it may also contribute to a more intelligent communication network framework based on the network's autonomous behavior. In this paper, the Internet Autonomous Systems (ASes) topology from 1998 to 2013 was studied by deconstructing and analysing topological entities on three different scales (i.e., nodes, edges, and three network components: the single-edge component M1, the binary component M2, and the triangle component M3). The results indicate that: a) 95% of the Internet's edges are internal edges (as opposed to external and boundary edges); b) the Internet network consists mainly of internal components, particularly M2 internal components; c) in most cases, a node initially connects with multiple nodes to form an M2 component in order to join the network; d) the Internet network evolves toward lower entropy. Furthermore, we find that, as a complex system, the evolution of the Internet exhibits a behavioral series similar to the biological phenomena studied in metabolism and replication. To the best of our knowledge, this is the first study of the evolution of the Internet network through analysis of the dynamic features of its nodes, edges, and components, and it therefore represents an innovative approach to the subject.
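The abstract above says the network "evolves toward lower entropy" without giving the formula; one common choice for a topology-level entropy is the Shannon entropy of the degree distribution, sketched here under that assumption (the star and ring graphs are toy examples, not AS data).

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of an undirected graph's degree distribution."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    total = len(degree)                 # number of nodes
    dist = Counter(degree.values())     # how many nodes share each degree
    return -sum((n / total) * math.log2(n / total) for n in dist.values())

# A ring spreads degree evenly (all nodes degree 2): zero entropy.
# A star concentrates degree on the hub: a mixed, higher-entropy distribution.
ring = [(i, (i + 1) % 5) for i in range(5)]
star = [(0, i) for i in range(1, 5)]
print(degree_entropy(ring), degree_entropy(star))
```

Tracking such a quantity across yearly AS-graph snapshots is one way to quantify the "evolves toward lower entropy" trend.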
Funding: Supported by the National Natural Science Foundation of China (61902222), the Taishan Scholars Program of Shandong Province (tsqn201909109), the Natural Science Excellent Youth Foundation of Shandong Province (ZR2021YQ45), and the Youth Innovation Science and Technology Team Foundation of Shandong Higher School (2021KJ031).
Abstract: Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been invented in the past twenty years; however, most of them have difficulty handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. The formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of a discovered MBPM against the input event log by transforming the MBPM into a classical Petri net so that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
Funding: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: An automated system is proposed for the detection and classification of GI abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and all channels are then merged through mutual information and pixel-based techniques to produce the segmented image. Texture and deep learning features are extracted in the proposed classification task. A transfer learning (TL) approach is used to extract the deep features, and the Local Binary Pattern (LBP) method is used for the texture features. An entropy-based feature selection approach is then implemented to select the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8% on the private dataset and 86.4% on the KVASIR dataset. This confirms that the proposed method is effective in detecting and classifying GI abnormalities and exceeds comparable methods.
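The entropy-based feature selection step above is not specified in detail in the abstract; one simple variant ranks feature columns by their Shannon entropy and keeps the most informative ones. The sketch below illustrates that variant on tiny hypothetical discretized features (real deep/LBP features would first be binned).

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy (in bits) of a discrete feature's value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_top_k(features, k=2):
    """Keep the k feature columns with the highest entropy."""
    ranked = sorted(features, key=lambda name: shannon_entropy(features[name]), reverse=True)
    return ranked[:k]

features = {
    "f_constant": [1, 1, 1, 1],  # zero entropy: carries no information
    "f_binary":   [0, 1, 0, 1],  # 1 bit
    "f_varied":   [0, 1, 2, 3],  # 2 bits
}
print(select_top_k(features))  # → ['f_varied', 'f_binary']
```

The surviving deep and texture features would then be concatenated serially and passed to the ensemble classifier.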